CN112818822B - Automatic identification method for damaged area of aerospace composite material - Google Patents


Info

Publication number
CN112818822B
Authority
CN
China
Prior art keywords
image
pixel
infrared
segmentation
function
Prior art date
Legal status
Active
Application number
CN202110118568.3A
Other languages
Chinese (zh)
Other versions
CN112818822A (en)
Inventor
黄雪刚
雷光钰
殷春
谭旭彤
罗庆
石安华
Current Assignee
Ultra High Speed Aerodynamics Institute China Aerodynamics Research and Development Center
Original Assignee
Ultra High Speed Aerodynamics Institute China Aerodynamics Research and Development Center
Priority date
Filing date
Publication date
Application filed by Ultra High Speed Aerodynamics Institute, China Aerodynamics Research and Development Center
Priority to CN202110118568.3A
Publication of CN112818822A
Application granted
Publication of CN112818822B

Classifications

    • G06V20/00: Scenes; scene-specific elements
    • G06F18/23213: Non-hierarchical clustering with a fixed number of clusters, e.g. K-means clustering
    • G06F18/24137: Classification by distances to cluster centroids
    • G06F18/2414: Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06N3/126: Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G06T5/70: Denoising; smoothing
    • G06V10/267: Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/44: Local feature extraction, e.g. edges, contours, corners; connectivity analysis
    • G06T2207/20192: Edge enhancement; edge preservation


Abstract

The invention discloses an automatic identification method for damaged areas of aerospace composite materials, which comprises the following steps: extracting the typical transient thermal response of each type of defect; acquiring infrared reconstruction images; obtaining low-quality infrared reconstruction images; obtaining weight coefficients for the three segmentation objectives of noise removal, detail retention and edge preservation; constructing an infrared image segmentation function; obtaining the weight coefficient of the objective function realizing each segmentation performance; constructing a full-pixel infrared image segmentation objective function and segmenting the reconstructed full-pixel infrared image with the segmentation model; and performing infrared full-pixel image segmentation on the image segmentation layer to obtain the segmented image. The method applies multi-objective optimization theory to defect segmentation in the infrared reconstructed image and constructs objective functions that target the noise problem and the edge-blur problem respectively to improve segmentation precision. It ensures a high defect detection rate while reducing the false detection rate, effectively extracts the damaged defect areas in the reconstructed image, and facilitates quantitative study of complex defects.

Description

Automatic identification method for damaged area of aerospace composite material
Technical Field
The invention belongs to the technical field of damage detection and maintenance support for aerospace vehicles, and particularly relates to an automatic identification method for damaged areas of aerospace composite materials.
Background
With the pressing weight-reduction requirements of aerospace vehicles, lightweight structural materials with excellent mechanical properties have increasingly become a focus of aerospace research, in particular lightweight composites represented by high-strength/high-modulus carbon fiber composites and honeycomb structural materials. Meanwhile, composites with special functions and purposes, such as stealth coating materials and carbon-based thermal protection materials, are widely applied in the aerospace field. However, during manufacturing, assembly or service of a composite, improper processes, repeated cyclic stresses, or external impacts may cause serious quality problems such as delamination, debonding, porosity, cracks, and impact defects. For example, an aircraft is easily struck by birds during takeoff and landing, and a spacecraft suffers hypervelocity impacts from micro space debris during launch and on-orbit operation; these produce damage such as perforation, impact craters, delamination, and spallation on the surface of an aerospace composite, leaving the surface composite structure damaged, degraded in function, or failed. Therefore, to avoid serious accidents caused by damage defects during the service of aerospace composite members, detection of damage defects and quality evaluation of the composites are particularly critical.
Infrared thermal imaging has the advantages of safety, intuitiveness, speed, efficiency, large detection area, and non-contact operation, and it plays an important role in damage detection of aerospace composites. Its basic principle, based on Fourier heat transfer and infrared radiation, is as follows: when the object under test receives external thermal excitation (irradiation by sunlight or an artificial light source), material defects disturb the heat conduction process, which appears as differences in the transient temperature response of the object's surface; by collecting the surface temperature field response with a thermal infrared imager, the defect state of the surface and interior of the object can be inferred. The data collected by the infrared imager is an infrared thermal image sequence composed of many frames of thermal images, which contains the temperature change information (transient thermal response curve) of every pixel in the inspected area; analyzing and processing this sequence yields a reconstructed image of the defects, realizing visual detection of composite damage defects.
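As a minimal illustration of the data layout described above, the per-pixel transient thermal response curves can be obtained by flattening the thermal image sequence. This is a sketch with hypothetical names and synthetic data, not the patent's implementation:

```python
import numpy as np

def transient_thermal_responses(seq):
    """Flatten an (M, N, T) infrared thermal image sequence into per-pixel
    transient thermal response (TTR) curves: one temperature-vs-frame curve
    per pixel, returned as an (M*N, T) array."""
    M, N, T = seq.shape
    return seq.reshape(M * N, T)

# Toy sequence: 4x4 pixels, 10 frames of an exponentially decaying transient.
t = np.arange(10, dtype=float)
seq = np.exp(-0.3 * t)[None, None, :] * np.ones((4, 4, 1))
ttrs = transient_thermal_responses(seq)
print(ttrs.shape)  # (16, 10)
```

Each row of `ttrs` is one candidate transient thermal response curve; the patent's later steps classify these curves by defect type.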
To evaluate damage defects accurately, the target defect region and the background region in the defect's infrared reconstructed image must be separated effectively. Compared with ordinary visible-light images, infrared images have lower resolution and blurred edges. In a complex detection background in particular, other heat sources in the background or the strong thermal reflectivity of the material make the background region overlapped and cluttered and reduce the contrast between target and background, which seriously interferes with defect identification in the reconstructed image and makes accurate extraction and type identification of the defect regions more difficult. To solve these problems, the original image must be processed by an image segmentation algorithm that effectively separates the target region from the background region; correct defect segmentation is thus a key step in the target identification process. Existing research segments images with the FCM algorithm and its improved variants, but the segmentation problem is usually cast as a single loss function, i.e. one objective function. On the one hand, if the requirement of detail preservation is fully met, the defect detection rate improves to some extent, but noise is preserved as well, which easily causes misjudgments in defect identification and raises the false detection rate. On the other hand, if only whole-image denoising is pursued, the damage defects caused by tiny debris impacts, which are small in size and large in number, are removed along with the noise they resemble, reducing the detection rate and detection precision.
Therefore, when conventional segmentation methods are applied to the defect infrared reconstructed images addressed by the present invention, the false detection rate and the detection rate cannot be balanced, and the segmentation result is unsatisfactory. In particular, the infrared thermal image reflects the thermal radiation of the test piece and is easily affected by the environment and the imaging chain, so the background noise of the resulting defect infrared reconstructed image is large. Meanwhile, because the surface thermal radiation capability of the defect region differs from that of the background region, the edges of the defect infrared reconstructed image are not smooth and the edge regions are not clearly delineated, which hinders image segmentation.
To reduce the false detection rate of defects, improve the detection rate, remove noise, and fully preserve details, a noise elimination function and a detail retention function are set up; considering that infrared images reflect the temperature differences of different regions after thermal excitation, and that temperature varies continuously so that no sharp contour separates the regions, an edge preservation function is also introduced to achieve accurate defect segmentation. When setting up the noise elimination function, a fuzzy factor is set and the neighborhood information of the infrared image is fully considered, eliminating as far as possible the influence of noisy pixel points on infrared image segmentation. However, infrared images are strongly affected by noise: when noise elimination works poorly, two similar defect classes may be merged into one, or noise may be grouped with a boundary. A function measuring the inter-class dispersion is therefore introduced, which allows the distances between the class cluster centers to be adjusted flexibly and solves the difficulty of distinguishing pixel points that belong to different defect classes with low mutual similarity.
When setting up the detail retention function, in order to retain more defect detail information the segmented image should have small compactness and large separation. To enhance information about tiny defects, the correlation between the positions and colors of a neighborhood pixel and the center pixel is considered and a correlation coefficient is introduced: if the correlation between a neighborhood pixel and the center pixel is large, that pixel's information is considered in the objective function; if it is small, the pixel's information is not considered. When setting up the edge preservation function, edge pixels are computed from local gradient information to revise the edges of the infrared image. Since the key to accurate segmentation is the degree of influence of the neighborhood pixels on the center pixel, this influence is computed from the correlation of pixel gray-level differences: a large correlation means the neighborhood pixel and the center pixel belong to the same class, and amplifying the influence of neighborhood pixels on the center pixel's membership degree enhances defect edge information, thereby improving the image segmentation result.
Once the objective functions realizing the three segmentation capabilities are in place, the new problem is how to adjust the weight coefficients of the three objective functions so that the combined segmentation objective function has the best segmentation performance. The method adopts a two-layer segmentation model: the first layer obtains the weight coefficient of each objective function through a multi-objective optimization algorithm, and the second layer constructs the segmentation objective function with the obtained weight coefficients to realize infrared image segmentation.
When solving for the weight coefficient of each objective function, a multi-objective algorithm is applied to a processed low-quality infrared image that contains complete defect information; the multi-objective optimization problem is decomposed by weight vectors into multiple scalar subproblems, and the weight-vector components of each subproblem reflect the importance of each objective function to the segmentation objective function.
The invention performs defect detection based on multi-objective-optimized segmentation. A thermal infrared imager records the surface temperature field changes of the object under test, meeting in-situ and non-contact nondestructive testing requirements, and the infrared thermal image sequence is analyzed and processed to meet the requirements of high-precision detection and identification of complex defects. The algorithm samples the infrared thermal image sequence with varying row and column step sizes to obtain a data set of transient thermal response curves with typical temperature-change characteristics, which speeds up subsequent data classification. The membership degree of each pixel point to each cluster center is obtained with the FCM (fuzzy C-means) algorithm, each transient thermal response curve in the data set is classified by comparing membership degrees, and the typical thermal response curves of each class are selected to reconstruct the infrared thermal image and obtain the defect reconstruction image. On this basis, the method further applies multi-objective optimization theory to defect segmentation in the infrared reconstructed image, constructing suitable objective functions for the noise problem and the edge-blur problem respectively to improve segmentation precision, ensuring a high defect detection rate while reducing the false detection rate, thereby effectively extracting the damaged defect regions in the reconstructed image and facilitating quantitative study of complex defects.
Disclosure of Invention
An object of the present invention is to solve at least the above problems and/or disadvantages and to provide at least the advantages described hereinafter.
To achieve these objects and other advantages in accordance with the purpose of the invention, there is provided an aerospace composite material damage region automatic identification method, including the steps of:
step one, after effective information is extracted from collected test piece infrared data, classifying the collected test piece infrared data according to defect types and extracting typical transient thermal response of each type of defects;
step two, forming a transformation matrix from the selected typical transient thermal responses to obtain the infrared reconstruction images;
step three, calculating the coefficient of variation of the pixels of the K reconstructed infrared images of size M × N, and sampling out the most prominent pixels by measuring the homogeneity between the neighborhood pixels and the central pixel, to obtain K low-quality infrared reconstructed images that contain complete defect information and consist of Kn pixel points;
step four, in the low-quality infrared reconstruction image containing Kn pixel points and complete defect information that corresponds to each infrared reconstruction image obtained by processing, measuring the segmentation performance in the three aspects of noise removal, detail retention and edge preservation with multiple objectives, and obtaining the weight coefficient of each segmentation performance to construct the segmentation objective function; constructing the infrared image segmentation function under the guidance of the three purposes of noise removal, detail preservation and edge preservation;
step five, constructing the first layer of a two-layer segmentation model, which is the weight-coefficient determination layer: in the extracted low-quality infrared reconstruction image containing complete defect information, setting up a multi-objective optimization problem and adopting a multi-objective optimization algorithm to balance the three set objective functions; obtaining the weight coefficients of the objective functions realizing each segmentation performance by combining the multi-objective optimization algorithm with weight vectors, with the following specific steps:
s51, initializing parameters of the multi-objective optimization algorithm; acquiring Kn weight vectors which are uniformly distributed, and calculating T weight vectors which are nearest to each weight vector; uniformly sampling in a feasible space which meets a multi-objective optimization problem to generate an initial population; initializing a multi-objective optimization function; decomposing the subproblems by adopting a decomposition model based on Chebyshev; setting an external population EP as an empty set;
s52, updating individuals in the population by an evolutionary multi-objective optimization algorithm; after updating the individuals each time, taking noise elimination as preference, and adjusting the weight vector according to the preference;
step S53, selecting a trade-off solution to obtain the weight coefficients for the noise-removal, detail-retention and edge-preservation segmentation performances;
step six, constructing the full-pixel infrared image segmentation objective function and inputting the weight coefficients obtained in step five into the second layer of the two-layer segmentation model, which is the image segmentation layer; using the segmentation model to segment the reconstructed full-pixel infrared image with M × N pixel points;
and step seven, according to the membership-degree and cluster-center update formulas derived from the full-pixel infrared image segmentation objective function constructed in step six, inputting the stopping threshold and the maximum iteration number of the algorithm, and performing infrared full-pixel image segmentation on the image segmentation layer to obtain the segmented image of the test piece's defect infrared image.
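Step three's coefficient-of-variation sampling can be sketched as follows. The window size, kept fraction, and function names here are illustrative assumptions, a simplified stand-in for the patent's homogeneity-based sampling rather than its exact procedure:

```python
import numpy as np

def sample_prominent_pixels(img, frac=0.25):
    """Rank pixels of one reconstructed image by coefficient of variation
    (std/mean) over a 3x3 neighborhood and keep the top `frac` fraction,
    i.e. the least homogeneous (most prominent) pixels."""
    M, N = img.shape
    pad = np.pad(img, 1, mode='edge')
    cv = np.empty_like(img, dtype=float)
    for i in range(M):
        for j in range(N):
            win = pad[i:i + 3, j:j + 3]
            cv[i, j] = win.std() / (abs(win.mean()) + 1e-12)
    k = max(1, int(frac * M * N))
    flat = np.argsort(cv, axis=None)[::-1][:k]  # highest-CV pixels first
    return np.unravel_index(flat, img.shape)

img = np.ones((6, 6))
img[2:4, 2:4] = 5.0                 # a small "defect" patch on a flat background
rows, cols = sample_prominent_pixels(img, frac=0.1)
```

The sampled pixel coordinates cluster around the defect patch, where the neighborhood is least homogeneous; far from the patch the coefficient of variation is zero.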
Preferably, the specific method of step one comprises: extracting the effective transient thermal responses from the acquired d-dimensional infrared thermal image sequence S(m, n, y), wherein m and n respectively denote the m-th row and the n-th column of the three-dimensional matrix, and the third dimension y denotes the frame number of the infrared thermal image; dividing the extracted effective transient thermal responses into K regions according to the K defect types, and extracting from each divided defect region the typical transient thermal response that best represents the defect characteristics of that class.
Preferably, the method for obtaining the infrared reconstructed image in step two comprises: from the K d-dimensional typical transient thermal responses extracted in step one, a linear transformation matrix H1 of dimension d × K is obtained; S(m, n, y) is converted from a three-dimensional matrix into a two-dimensional matrix, i.e. each frame of the infrared thermal video is vectorized by taking the values of each frame's image matrix row by row to obtain a vector containing the pixel temperature information of that frame, which serves as one row of a new matrix, thereby constructing a new two-dimensional matrix P(x, y) of size a × b, with a = d and b = M × N; the matrix H1 is then used to transform P linearly, i.e.
O = H1^+ · P
wherein H1^+ is the K × d dimensional pseudo-inverse matrix of H1; each row of the two-dimensional image matrix O is then rearranged into a two-dimensional image with the size of the original image, yielding K infrared reconstruction images of size M × N.
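The reconstruction step described above, O = H1^+ P, can be sketched with synthetic data; the shapes (d frames, K typical responses, M × N pixels) follow the text, while the values are random placeholders:

```python
import numpy as np

# Hypothetical sizes: d frames, K defect classes, M x N pixels.
d, K, M, N = 50, 3, 8, 8
rng = np.random.default_rng(0)

H1 = rng.normal(size=(d, K))      # columns: typical transient thermal responses
P = rng.normal(size=(d, M * N))   # each row: one vectorized frame

# O = pinv(H1) @ P has shape K x (M*N); each row reshapes to one M x N image.
O = np.linalg.pinv(H1) @ P
recon = O.reshape(K, M, N)
print(recon.shape)  # (3, 8, 8)
```

With d > K and a full-column-rank H1, the Moore-Penrose pseudo-inverse satisfies H1^+ H1 = I, so each reconstructed image isolates the contribution of one typical transient thermal response.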
Preferably, the infrared image segmentation function constructed in the fourth step under the guidance of three purposes of noise removal, detail preservation and edge preservation is as follows:
f4(v)=ω1·f1(v)+ω2·f2(v)+ω3·f3(v)
wherein ω1, ω2 and ω3 are respectively the weight coefficients of the three objective functions;
step S41, f1(v) is the single-target noise removal function SGNS for solving the noise problem; a fuzzy factor is introduced into the FCM algorithm, and on the basis of determining the spatial constraint relationship among pixel points using the Euclidean distance d_ij between pixel points in a neighborhood window of the reconstructed image, an inter-class dispersion measurement function is introduced to address the difficulty of distinguishing similar classes with small differences; the designed f1(v) expression is shown in the following formula:
Figure BDA0002921206460000061
wherein Kn is the number of pixel points in the low-quality infrared image, c is the number of clusters, u_ti is the membership degree of pixel point x_i with respect to the clustering center v_t, W_i^r is a neighborhood window of size r × r centered on the pixel x_i of the infrared reconstructed image, x_j is the j-th pixel in that window, m ∈ [1, ∞) is a smoothing parameter, and
K(x_i, v_t) = exp(−‖x_i − v_t‖² / σ²)
is the Gaussian radial basis similarity measure function between the pixel x_i in the infrared reconstructed image and the clustering center v_t (σ being the kernel bandwidth),
Figure BDA0002921206460000063
Figure BDA0002921206460000064
is a new weighted fuzzy factor, representing the weighted fuzziness of the j-th pixel in the neighborhood of pixel x_i with respect to the clustering center v_t;
Figure BDA0002921206460000065
which satisfies
Figure BDA0002921206460000066
where the spatial distance constraint ζ_dc satisfies
Figure BDA0002921206460000067
and the spatial gray-scale constraint ζ_gc satisfies
Figure BDA0002921206460000068
wherein
Figure BDA0002921206460000069
represents all pixel points in the r × r neighborhood of pixel x_j;
M_i is the ratio of the variance to the mean square,
Figure BDA00029212064600000610
ε_ij is the projection value in kernel space of the mean square error between the neighborhood pixel point x_j and the central pixel point x_i, i.e.,
Figure BDA00029212064600000611
the constant 2 is used to enhance the suppression effect of the neighborhood pixel points on the central pixel point; η_t is an inter-class dispersion parameter, and the cluster center v_t represents the temperature mean of the pixel points of the current category,
Figure BDA00029212064600000612
is the temperature mean of all pixel points in the infrared image. The function f1(v) satisfies the constraint:
Σ_{t=1}^{c} u_ti = 1, with u_ti ∈ [0, 1].
The membership degree of pixel x_i with respect to the cluster center v_t is obtained by the Lagrange multiplier method:
Figure BDA00029212064600000614
Figure BDA00029212064600000615
The update formula for the cluster center v_t is:
Figure BDA0002921206460000071
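The membership and cluster-center update formulas above extend the classical FCM alternating iteration. A plain-FCM baseline, without the patent's Gaussian kernel, weighted fuzzy factor, and inter-class dispersion term, can be sketched as:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, tol=1e-6):
    """Classical fuzzy C-means on 1-D intensities. The patent's f1(v) adds a
    Gaussian-kernel distance, a neighborhood fuzzy factor, and an inter-class
    dispersion term on top of these two alternating update rules."""
    v = np.quantile(X, np.linspace(0.1, 0.9, c))      # deterministic init
    for _ in range(iters):
        d2 = (X[:, None] - v[None, :]) ** 2 + 1e-12   # squared distances
        u = d2 ** (-1.0 / (m - 1))
        u /= u.sum(axis=1, keepdims=True)             # enforce sum_t u_ti = 1
        v_new = (u ** m).T @ X / (u ** m).sum(axis=0) # center update
        if np.max(np.abs(v_new - v)) < tol:
            return u, v_new
        v = v_new
    return u, v

# Two well-separated intensity groups: centers converge near 1.0 and 10.0.
X = np.concatenate([np.full(20, 1.0), np.full(20, 10.0)])
u, v = fcm(X, c=2)
```

The two returned arrays correspond to the membership matrix u_ti and the cluster centers v_t that the patent's derivation updates in alternation.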
step S42, f2(v) is the single-target detail retention function SGDR for solving the detail-preservation problem; considering the local spatial information of the image can further guide the segmentation of image pixels and relieve the edge-blur problem, so a correlation coefficient m_ij measuring the positions and colors of pixels is introduced; the detail retention function f2(v) is constructed as shown in the following formula:
Figure BDA0002921206460000072
wherein Kn is the number of pixel points in the low-quality infrared image, c is the number of clusters, v_t is the cluster center, u_ti is the membership degree of pixel point x_i with respect to the clustering center v_t, m ∈ [1, ∞) is a smoothing parameter, and δ_i represents the local spatial information,
Figure BDA0002921206460000073
Niis a set of pixels in a neighborhood window, x, centered on the ith pixelaIs NiThe number a of pixels in the row is,
Figure BDA0002921206460000074
represents the correlation between the neighborhood pixel x_i and the central pixel v_t; denoting the spatial coordinates of pixels x_i and v_t by (x_im, y_in) and (v_tm, v_tn) and their gray values by g(x_i) and g(v_t) respectively, there is
Figure BDA0002921206460000075
λ_s is the spatial-scale influence factor of the image,
Figure BDA0002921206460000076
λ_g is the gray-scale influence factor,
Figure BDA0002921206460000077
is the mean gray variance of the neighborhood pixels centered on pixel x_i,
Figure BDA0002921206460000078
is the number of pixels in the neighborhood pixel set N_i. The function f2(v) satisfies the constraint:
Σ_{t=1}^{c} u_ti = 1, with u_ti ∈ [0, 1].
The membership degree of pixel x_i with respect to the cluster center v_t is obtained by the Lagrange multiplier method:
Figure BDA00029212064600000710
Figure BDA00029212064600000711
The update formula for the cluster center v_t is:
Figure BDA0002921206460000081
step S43, f3(v) is the single-target edge maintenance function SOEM for solving the edge-preservation problem; to obtain an accurate segmentation result, an edge-preserving function that segments according to gray level is introduced into the objective function, and to enhance edge information an amplification function A_ti is introduced to amplify the influence of the neighborhood pixel x_i on the membership to the central cluster v_t; the edge-preserving function f3(v) is constructed as shown in the following formula:
Figure BDA0002921206460000082
wherein Kn is the number of pixel points in the low-quality infrared image, c is the number of clusters, n denotes the gray value of a pixel point, u_ti denotes the membership degree of the pixel point x_i with gray value n with respect to the current cluster center v_t, m ∈ [1, ∞) is a smoothing parameter, U_n is the number of gray levels of the infrared image, and N_n is the number of pixel points with gray value n,
Figure BDA0002921206460000083
for the pixel points in the low-quality image containing Kn pixel points, there is:
Figure BDA0002921206460000084
N_i is the set of neighborhood pixels centered on the pixel x_i,
Figure BDA0002921206460000085
is the number of pixel points in the set N_i,
Figure BDA0002921206460000086
g(x_i) and g(x_j) respectively denote the gray values of the pixel point x_i and its neighborhood pixel x_j,
Figure BDA0002921206460000087
is the average gray-level difference between the pixels x_j in the neighborhood pixel set N_i and the central pixel x_i. The function f3(v) satisfies the constraint:
Σ_{t=1}^{c} u_ti = 1, with u_ti ∈ [0, 1].
The membership degree of pixel x_i with respect to the cluster center v_t is obtained by the Lagrange multiplier method:
Figure BDA0002921206460000089
Figure BDA00029212064600000810
The update formula for the cluster center v_t is:
Figure BDA00029212064600000811
This completes the construction of the infrared image segmentation function.
Preferably, step five, in which a multi-objective optimization algorithm is used to obtain the weight coefficient of each objective function in the low-quality infrared reconstructed image with Kn pixels, specifically comprises: in the extracted low-quality infrared reconstructed image containing complete defect information, the three objective functions are balanced by a multi-objective optimization algorithm, and the multi-objective optimization problem is set as follows:
min F(v) = [f_1(v), f_2(v), f_3(v)]^T
s.t. v = (v_1, …, v_c)^T
where c is the number of classes and v = (v_1, …, v_c)^T denotes a set of candidate cluster centers. The multi-objective optimization problem is decomposed into several scalar subproblems using weight vectors; the components of each subproblem's weight vector reflect the importance of each objective function to the segmentation objective function.
The specific steps by which the multi-objective algorithm solves for the weight coefficient of each objective function in the low-quality infrared reconstructed image with Kn pixels are as follows:
step S51, initializing parameters of the multi-objective optimization algorithm, and specifically comprising the following steps:
Step S511, set the objective function F(v) of the multi-objective optimization algorithm, the maximum iteration number g_max, the thresholds ζ and ε, the population size Kn, and the number T of weight vectors in each neighborhood;
Step S512, generate Kn uniformly distributed weight vectors λ_1, …, λ_Kn, and for each weight vector compute its T nearest weight vectors B(i) = {i_1, …, i_T}, i = 1, …, Kn,
Figure BDA0002921206460000091
are the T weight vectors nearest to λ_i;
Step S513, sample uniformly in the feasible space of the multi-objective optimization problem to generate an initial population s_1, …, s_Kn, and let FV_i = F(s_i), i = 1, …, Kn;
Step S514, initialize
Figure BDA0002921206460000092
as the optimal value of each objective function in the image-segmentation multi-objective problem;
Step S515, decompose the problem into subproblems using a Tchebycheff-based decomposition model, where the jth subproblem is:
Figure BDA0002921206460000093
In the above formula,
Figure BDA0002921206460000094
is the weight vector of the jth subproblem,
Figure BDA0002921206460000095
controls the weight of the noise-suppression function,
Figure BDA0002921206460000096
controls the weight of the detail-preserving function,
Figure BDA0002921206460000097
controls the weight of the edge-preserving function,
Figure BDA0002921206460000098
Figure BDA0002921206460000099
and
Figure BDA00029212064600000910
are the current optimal values of the three functions, respectively;
step S516, setting an external population EP as an empty set;
Step S52, update the multi-objective optimization algorithm; while the iteration count is below the maximum g_max, each iteration first performs step S521 to update the individuals and then performs step S522 to adjust the weight vector;
step S521, updating the individuals in the population, specifically including:
Step S5211, reproduction: randomly select two indices k, l from B(i), and use a differential evolution algorithm to generate from s_k, s_l a new solution e for the image-segmentation multi-objective problem;
Step S5212, improvement: apply the constraint handling proposed in the image-segmentation multi-objective optimization problem to e, generating e';
Step S5213, update the reference point f*: if the reference-point value f* < f*(e'), then f* = f*(e');
Step S5214, update the neighborhood solutions: if the Tchebycheff expression gives g^te(e'|λ_j, f*) ≤ g^te(s_j|λ_j, f*), j ∈ B(i), then s_j = e' and FV_j = F(e');
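Steps S5213 and S5214 compare solutions through the Tchebycheff scalarization. A small sketch, assuming the common form g^te(x|λ, f*) = max_k λ_k·|f_k(x) − f*_k| and purely hypothetical objective values:

```python
import numpy as np

def g_te(F, lam, f_star):
    """Tchebycheff scalarization: g^te(x | lam, f*) = max_k lam_k * |F_k - f*_k|."""
    return float(np.max(lam * np.abs(F - f_star)))

# hypothetical values for one 3-objective subproblem j
f_star = np.array([0.0, 0.0, 0.0])   # current best value of each objective
lam_j  = np.array([0.5, 0.3, 0.2])   # weight vector of the jth subproblem
F_new  = np.array([0.2, 0.4, 0.1])   # F(e'), the improved new solution
F_old  = np.array([0.6, 0.5, 0.3])   # F(s_j), the current neighbor solution

# neighborhood update (step S5214): replace s_j when e' is no worse for subproblem j
if g_te(F_new, lam_j, f_star) <= g_te(F_old, lam_j, f_star):
    F_old = F_new.copy()
```

Here g^te of the new solution is 0.12 versus 0.3 for the old one, so the neighbor is replaced.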
Step S522, adjusting the weight vector, specifically includes:
Step S5221, compute the distance from each individual
Figure BDA0002921206460000101
in the population to the current cluster center
Figure BDA0002921206460000102
:
Figure BDA0002921206460000103
Select the U individuals with the smallest Dist,
Figure BDA0002921206460000104
as ideal reference points;
Step S5222, taking
Figure BDA0002921206460000105
as the center, find the high-dimensional spherical region of radius r,
Figure BDA0002921206460000106
and all the individuals in it,
Figure BDA0002921206460000107
Then compute the standard deviation of all individuals within
Figure BDA0002921206460000108
:
Figure BDA0002921206460000109
where
Figure BDA00029212064600001010
is the mean of all individuals distributed in
Figure BDA00029212064600001011
, R is the number of individuals in
Figure BDA00029212064600001012
, and
Figure BDA00029212064600001013
, r = 1, …, R, are the individuals distributed in
Figure BDA00029212064600001014
;
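Step S5222's crowding measure (the standard deviation of the individuals inside a radius-r sphere) can be sketched as follows; reducing the per-objective standard deviations by averaging is our assumption, since the exact formula is given only as an image:

```python
import numpy as np

def region_std(pop, center, r):
    """Std of all individuals inside the high-dimensional sphere of radius r
    around a reference individual (per dimension, then averaged)."""
    inside = pop[np.linalg.norm(pop - center, axis=1) <= r]
    if len(inside) == 0:
        return float("inf")   # empty region: no crowding information
    return float(np.mean(np.std(inside, axis=0)))

# three tightly packed individuals plus one far outlier (synthetic data)
pop = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
s = region_std(pop, pop[0], r=1.0)
```

A small value of s means the region around the reference individual is densely and uniformly populated, which is what step S5223 uses to pick preference-region reference points.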
Step S5223, from the U individuals, select the Up individuals with the smallest standard deviation
Figure BDA00029212064600001015
as preference-region reference points; for each of the Up individuals
Figure BDA00029212064600001016
and its corresponding weight vector λ_Up, perform the following update operations:
Step S52231, compute the base weight vector:
Figure BDA00029212064600001017
where f* is the optimal value of each objective function in the image-segmentation multi-objective problem;
Step S52232, find the individual in the population at the largest Euclidean distance from
Figure BDA00029212064600001018
, denoted
Figure BDA00029212064600001019
where
Figure BDA00029212064600001020
and find its corresponding weight vector λ_m;
Step S52233, compute the generated weight vector:
λ_Upnew = λ_Up + step · λ_m
where λ_Upnew = (λ_Upnew1, λ_Upnew2, λ_Upnew3) and step is a preset step-size parameter;
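Step S52233's update λ_Upnew = λ_Up + step·λ_m can be sketched directly; re-normalizing the result so its three components still sum to 1 is an assumption on our part, not stated in the text:

```python
import numpy as np

def adjust_weight(lam_up, lam_m, step=0.1):
    """Move the preference weight vector toward the weight vector lam_m of the
    most distant individual; normalization to a unit simplex is assumed."""
    lam_new = lam_up + step * lam_m
    return lam_new / lam_new.sum()

# hypothetical 3-component weight vectors (noise / detail / edge weights)
lam = adjust_weight(np.array([0.5, 0.3, 0.2]), np.array([0.2, 0.2, 0.6]))
```

The adjusted vector shifts weight toward the objectives emphasized by λ_m while remaining a valid weight vector.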
Step S52234, using the preference-region reference point
Figure BDA0002921206460000111
and the most distant individual
Figure BDA0002921206460000112
randomly generate a new solution
Figure BDA0002921206460000113
as follows:
Figure BDA0002921206460000114
Step S52235, take the newly generated individual
Figure BDA0002921206460000115
as a new cluster center: with
Figure BDA0002921206460000116
as the center, compute the current membership according to the membership and cluster-center formulas of the three objective functions:
Figure BDA0002921206460000117
then compute the new cluster center from the current membership,
Figure BDA0002921206460000118
Step S52236, use the new individual
Figure BDA0002921206460000119
to replace
Figure BDA00029212064600001110
Step S523, update EP: remove from EP all vectors dominated by F(e'), and add e' to EP if F(e') is not dominated by any vector in EP;
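The EP update of step S523 is a standard Pareto-dominance filter; a sketch for minimization, with purely hypothetical objective vectors:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_ep(ep, f_new):
    """Drop EP members dominated by f_new; add f_new if no member dominates it."""
    ep = [f for f in ep if not dominates(f_new, f)]
    if not any(dominates(f, f_new) for f in ep):
        ep.append(f_new)
    return ep

ep = [(0.2, 0.8, 0.5), (0.9, 0.1, 0.4)]
ep = update_ep(ep, (0.1, 0.7, 0.4))   # dominates the first member, so it replaces it
```

After the update the external population still contains only mutually non-dominated objective vectors.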
Step S53, terminate the iteration; if the termination condition g = g_max is satisfied, output EP, giving the optimal set of cluster centers for the image-segmentation multi-objective problem; otherwise increase the iteration count g to g + 1 and go to step S52.
Preferably, step six, constructing the full-pixel infrared image segmentation objective function, specifically comprises: input the weight coefficients obtained in step five into the second layer of the double-layer segmentation model, which is the image-segmentation layer, and use the double-layer segmentation model to perform image segmentation on the reconstructed full-pixel infrared image with M × N pixels;
In the second layer of the double-layer segmentation model, the following optimization function is defined for the full M × N pixels of the infrared thermal image:
Figure BDA00029212064600001111
When solving this objective function, the separability measure of the detail-preserving function f_2(v) does not contain the membership u_ti of pixel x_i with respect to the cluster center v_t, so the membership function and the cluster centers are solved by the Lagrange multiplier method on the following function:
Figure BDA0002921206460000121
For simplicity, let M(x_i, v_t) = ||x_i − v_t||² + ||δ_i − v_t||²; then the membership update formula is:
Figure BDA0002921206460000122
Meanwhile, the update formula for the cluster center is:
Figure BDA0002921206460000123
This completes the construction of the full-pixel infrared image segmentation objective function.
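With M(x_i, v_t) defined as above, the membership update can be sketched as follows; the plain u_ti ∝ M(x_i, v_t)^(−1/(m−1)) normalization is assumed here, since the exact weighted formula appears only as a formula image:

```python
import numpy as np

def membership_with_local_info(x, delta, v, m=2.0):
    """Membership for the full-pixel layer with
    M(x_i, v_t) = ||x_i - v_t||^2 + ||delta_i - v_t||^2,
    where delta holds the local spatial information of each pixel."""
    M = (x[:, None] - v[None, :]) ** 2 + (delta[:, None] - v[None, :]) ** 2 + 1e-12
    u = M ** (-1.0 / (m - 1.0))          # assumed FCM-style normalization
    return u / u.sum(axis=1, keepdims=True)

x = np.array([0.1, 0.2, 0.9])            # synthetic pixel values
delta = np.array([0.1, 0.15, 0.85])      # synthetic local spatial information
u = membership_with_local_info(x, delta, np.array([0.1, 0.9]))
```

Pixels whose value and local spatial information both lie near a center receive high membership for that center.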
Preferably, the step seven of implementing infrared full-pixel image segmentation on the second layer of the double-layer segmentation model specifically comprises the following steps:
step S71, initializing iteration times t, generating an initial clustering center and calculating initial membership;
Step S72, compute the segmentation objective function at the current iteration count
Figure BDA0002921206460000124
Step S73, according to the formula
Figure BDA0002921206460000125
update the membership;
Step S74, according to the formula
Figure BDA0002921206460000126
update the cluster centers;
step S75, calculating the updated objective function
Figure BDA0002921206460000127
Step S76, if
Figure BDA0002921206460000128
or t = T_max holds, terminate the segmentation algorithm and assign each pixel to the defect region with the largest membership value to obtain the segmented image, i.e., the final segmentation result of the whole observed image for the full-pixel infrared reconstructed image.
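Steps S71 to S76 can be sketched as one iteration loop; the min/max center initialization, the two-cluster setting, and the center-update form used here are illustrative assumptions standing in for the patent's formula images:

```python
import numpy as np

def segment(x, delta, m=2.0, eps=1e-5, t_max=100):
    """S71-S76 sketch: alternate membership / center updates until the
    objective changes by less than eps or t_max iterations are reached,
    then label each pixel by its largest membership."""
    v = np.array([x.min(), x.max()], dtype=float)   # S71: assumed min/max init
    prev_j = np.inf
    for t in range(t_max):
        # S73: membership with M(x_i,v_t) = ||x_i-v_t||^2 + ||delta_i-v_t||^2
        M = (x[:, None] - v[None, :]) ** 2 + (delta[:, None] - v[None, :]) ** 2 + 1e-12
        u = M ** (-1.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
        # S74: center update (weighted mean of (x+delta)/2, an assumed form)
        v = ((u ** m).T @ ((x + delta) / 2.0)) / (u ** m).sum(axis=0)
        # S72/S75: objective value; S76: stop when the change falls below eps
        j = float(np.sum(u ** m * M))
        if abs(prev_j - j) < eps:
            break
        prev_j = j
    return u.argmax(axis=1)   # assign each pixel to its max-membership class

labels = segment(np.array([0.1, 0.12, 0.9, 0.88]), np.array([0.1, 0.1, 0.9, 0.9]))
```

On this toy data the two low-valued pixels and the two high-valued pixels end up in separate classes.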
The invention at least comprises the following beneficial effects: the method for automatically identifying damaged areas of aerospace composite materials obtains the column transformation step length by row-wise searching and comparing the maxima of the temperature points in the infrared thermal image sequence data, blocks the data using the temperature maxima of the transient thermal response curves to obtain the row transformation step length of each data block, samples with the column and row transformation step lengths to obtain a sampled data set formed by transient thermal response curves containing typical temperature changes, and obtains the classification membership of the sampled data set with the FCM algorithm. Each transient thermal response curve in the data set is classified by membership, and the defect image is reconstructed from the classified typical thermal response curves. A double-layer, multi-objective-optimized thermal image segmentation framework is constructed to achieve accurate segmentation of the defects.
Meanwhile, the method for automatically identifying the damaged area of the aerospace composite material has the following beneficial effects:
(1) The double-layer multi-objective-optimized thermal image segmentation framework of the invention introduces multi-objective theory, establishes an objective function for each of the three target problems to be solved, and addresses the segmentation problem in a targeted manner, so that the segmented image is balanced among the three and the resulting image exhibits all three properties of noise elimination, detail retention, and edge preservation. To select the most appropriate weight coefficients in the space, the invention incorporates weight-vector adjustment into the iterative solution of the multi-objective algorithm. When adjusting the weight vector, because the infrared image is strongly affected by noise while the detail-retention and edge-preservation functions relate to defect detail information, the noise-elimination function should carry somewhat more influence in the segmentation objective function; noise elimination is therefore taken as the preference, the weight vector is adjusted based on this preference, and the low-quality infrared image is searched to obtain the weight coefficients that best reflect the importance of each objective function. The weight coefficients obtained by searching the low-quality infrared image are then returned to the full-pixel infrared image, which is segmented according to the segmentation objective function built from these coefficients.
(2) The double-layer segmentation model of the invention solves, while ensuring segmentation quality, the problem of low computational efficiency caused by the multi-objective algorithm and the huge volume of experimentally acquired infrared data.
(3) The double-layer multi-objective-optimized thermal image segmentation framework of the invention does not need to repeatedly recompute the weight coefficients of the objective functions corresponding to noise elimination, detail retention, and edge preservation, and therefore has stronger applicability.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.
Description of the drawings:
FIG. 1 is a flow chart of an automatic identification method for damaged areas of aerospace composites according to the invention;
FIG. 2 is a PF surface map obtained after solving a multi-objective optimization problem in the embodiment of the present invention;
FIG. 3 is a TTR curve of a background region of an impact pit in an embodiment of the present invention;
FIG. 4 is an infrared reconstructed image corresponding to a TTR curve of a background region of an impact pit according to an embodiment of the present invention;
FIG. 5 is a TTR curve of a composite material impacting the interior of a pit in an embodiment of the present invention;
FIG. 6 is an infrared reconstructed image corresponding to a TTR curve inside a composite material impact pit according to an embodiment of the present invention;
FIG. 7 is a TTR curve of a composite material impacting the edge of a pit in an embodiment of the present invention;
FIG. 8 is an infrared reconstructed image corresponding to a TTR curve of a composite impact pit edge in an embodiment of the invention;
FIG. 9 is a graph of the composite impact pit edge reconstructed image defect segmentation result in accordance with an embodiment of the present invention;
FIG. 10 is a graph of the defect segmentation result of the reconstructed image inside the composite material impact pit according to the embodiment of the invention.
Detailed description of specific embodiments:
the present invention is further described in detail below with reference to the attached drawings so that those skilled in the art can implement the invention by referring to the description text.
It will be understood that terms such as "having," "including," and "comprising," as used herein, do not preclude the presence or addition of one or more other elements or groups thereof.
As shown in fig. 1: the invention discloses an automatic identification method of an aerospace composite material damage area, which comprises the following steps:
step one, after effective information is extracted from collected test piece infrared data, classifying the collected test piece infrared data according to defect types and extracting typical transient thermal response of each type of defects;
secondly, forming a transformation matrix by the selected typical transient thermal response to obtain an infrared reconstruction image;
Step three, compute the coefficient of variation of the pixels of the K reconstructed infrared images of dimension M × N, and sample the most prominent pixels by measuring the homogeneity of neighborhood pixels and central pixels, obtaining K low-quality infrared reconstructed images containing complete defect information and Kn pixels;
Step four, in the low-quality infrared reconstructed image containing Kn pixels and complete defect information corresponding to each processed infrared reconstructed image, measure the segmentation performance in the three aspects of noise removal, detail retention and edge preservation using multiple objectives, and obtain the weight coefficient of each segmentation-performance term to construct the segmentation objective function; the infrared image segmentation function is constructed under the guidance of the three goals of noise removal, detail preservation and edge preservation;
step five, constructing a first layer of a double-layer segmentation model, wherein the first layer of the double-layer segmentation model is a weight coefficient determining layer, and setting a multi-objective optimization problem by adopting a multi-objective optimization algorithm to balance three set objective functions in the extracted low-quality infrared reconstruction image containing complete defect information; the method uses a multi-objective optimization algorithm to combine with a weight vector to obtain a weight coefficient of an objective function for realizing each segmentation performance, and comprises the following specific steps:
s51, initializing parameters of the multi-objective optimization algorithm; acquiring Kn weight vectors which are uniformly distributed, and calculating T weight vectors which are nearest to each weight vector; uniformly sampling in a feasible space which meets a multi-objective optimization problem to generate an initial population; initializing a multi-objective optimization function; decomposing the subproblems by adopting a decomposition model based on Chebyshev; setting an external population EP as an empty set;
s52, updating individuals in the population by an evolutionary multi-objective optimization algorithm; after updating the individuals each time, taking noise elimination as preference, and adjusting the weight vector according to the preference;
step S53, selecting a trade-off solution to obtain a weight coefficient for removing noise, retaining details and keeping the edge function segmentation performance;
step six, constructing a full-pixel infrared image segmentation target function, inputting the weight coefficient obtained in the step five into a second layer of the double-layer segmentation model, wherein the second layer of the double-layer segmentation model is an image segmentation layer, and performing image segmentation on the full-pixel infrared image with the number of pixel points of M multiplied by N obtained by reconstruction by using the segmentation model;
and step seven, according to the membership degree and the clustering center updating formula which are obtained by the full-pixel infrared image segmentation target function constructed in the step six, inputting a threshold value and the maximum iteration times for stopping judgment of the algorithm, and realizing infrared full-pixel image segmentation on an image segmentation layer to obtain a segmented image of the test piece defect infrared image.
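Step three's coefficient-of-variation sampling can be sketched as follows; the 3 × 3 window and the threshold value are illustrative assumptions, since the text does not fix either:

```python
import numpy as np

def sample_prominent(img, thresh=0.15):
    """Keep pixels whose 3x3 neighborhood is inhomogeneous, i.e. whose
    coefficient of variation (std/mean) exceeds an assumed threshold."""
    M, N = img.shape
    keep = np.zeros((M, N), dtype=bool)
    for i in range(1, M - 1):
        for j in range(1, N - 1):
            w = img[i - 1:i + 2, j - 1:j + 2]   # 3x3 neighborhood window
            mu = w.mean()
            if mu > 0 and w.std() / mu > thresh:
                keep[i, j] = True
    return keep

img = np.ones((5, 5))
img[2, 2] = 5.0                  # a single prominent (defect-like) pixel
mask = sample_prominent(img)
```

Only the pixels whose neighborhoods contain the bright outlier are flagged; a homogeneous background is discarded, which is how the low-quality Kn-pixel image retains the defect information.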
In the above technical solution, the specific method of step one comprises: extract the effective transient thermal responses from the acquired d-dimensional infrared thermal image sequence S(m, n, y), where m and n denote the mth row and nth column of the three-dimensional matrix and the third dimension denotes the frame number of the infrared thermal image; divide the extracted effective transient thermal responses into K classes according to the K defect types, and extract from each defect class the typical transient thermal response that best represents the current class's defect characteristics.
In the above technical solution, the method for obtaining the infrared reconstructed image in step two comprises: from the K d-dimensional typical transient thermal responses extracted in step one, obtain a linear transformation matrix H_1 of dimension d × K; convert S(m, n, y) from a three-dimensional matrix into a two-dimensional matrix, i.e., vectorize each frame of the infrared thermal video by taking the values of each frame's image matrix row by row to obtain a vector containing the pixel temperature information of one frame, used as a row vector of a new matrix, constructing a new two-dimensional matrix P(x, y) of size a × b, a = d, b = M × N; linearly transform P by means of the matrix H_1, i.e.
Figure BDA0002921206460000161
Wherein
Figure BDA0002921206460000162
is the K × d-dimensional pseudo-inverse matrix of the matrix H_1; the rows of the two-dimensional image matrix O are then reshaped into two-dimensional images of the original image size, yielding K infrared reconstructed images of size M × N.
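The step-two reconstruction (vectorize each frame into P, multiply by the pseudo-inverse of H_1, reshape the rows of O) can be sketched with synthetic data; the shapes follow the text, while the values are random placeholders:

```python
import numpy as np

# d frames of an M x N thermal sequence, K typical transient thermal responses
d, M, N, K = 10, 4, 5, 2
rng = np.random.default_rng(1)
S = rng.random((M, N, d))                 # synthetic S(m, n, y)

# P (d x MN): each row is one vectorized frame, taken row by row
P = S.reshape(M * N, d).T

H1 = rng.random((d, K))                   # columns: typical transient responses
O = np.linalg.pinv(H1) @ P                # K x MN two-dimensional image matrix
recon = O.reshape(K, M, N)                # K reconstructed M x N images
```

Each row of O, reshaped to M × N, is one reconstructed image, matching the "K infrared reconstructed images of size M × N" described above.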
In the above technical solution, in step four, the defect-bearing infrared reconstructed image suffers from heavy background noise, weak color information, and poor contrast caused by the complex energy source, the imaging chain, and impurities on the test-piece surface, so a general segmentation method cannot obtain a good segmentation result. To accurately separate the background region and the defect region of an infrared reconstructed image containing M × N pixels, x = (x_1, …, x_MN), the method measures, using multiple objectives, the segmentation performance in the three aspects of noise removal, detail retention, and edge preservation in the low-quality infrared reconstructed image containing Kn pixels and complete defect information corresponding to each processed infrared reconstructed image, and obtains the weight coefficient of each segmentation-performance term to construct the segmentation objective function. The infrared image segmentation function constructed under the guidance of the three goals of noise removal, detail preservation and edge preservation is:
f4(v)=ω1·f1(v)+ω2·f2(v)+ω3·f3(v)
where ω_1, ω_2, ω_3 are the weight coefficients of the three objective functions, respectively;
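The weighted combination f_4(v) = ω_1·f_1(v) + ω_2·f_2(v) + ω_3·f_3(v) can be sketched as a higher-order function; the toy f_1, f_2, f_3 below are placeholders, not the patent's actual objective functions:

```python
def combined_objective(f1, f2, f3, w):
    """Build f4(v) = w1*f1(v) + w2*f2(v) + w3*f3(v) from three objective
    callables and a weight triple w = (w1, w2, w3)."""
    return lambda v: w[0] * f1(v) + w[1] * f2(v) + w[2] * f3(v)

# placeholder objectives standing in for noise removal / detail / edge terms
f4 = combined_objective(lambda v: v * v,
                        lambda v: abs(v - 1),
                        lambda v: v,
                        (0.5, 0.3, 0.2))
```

The weight triple is exactly what the first (weight-determination) layer of the double-layer model searches for before the second layer minimizes f_4.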
Step S41, f_1(v) is a single-objective noise-removal function SGNS for the noise problem; fuzzy factors are introduced into the FCM algorithm, and, on the basis of determining the spatial constraint relationship between pixels from the Euclidean distance d_ij between pixels in a neighborhood window of the reconstructed image, an inter-class dispersion measure function is introduced to address the difficulty of distinguishing similar classes with small differences; the designed f_1(v) is expressed as shown in the following formula:
Figure BDA0002921206460000163
where Kn is the number of pixels in the low-quality infrared image, c is the number of clusters, and u_ti is the membership of pixel x_i with respect to the cluster center v_t,
Figure BDA0002921206460000164
x_j is a pixel in the neighborhood window of size r × r centered on the central pixel x_i of the infrared reconstructed image, m ∈ [1, ∞) is a smoothing parameter,
Figure BDA0002921206460000165
is the Gaussian radial-basis similarity measure function between the pixel x_i of the infrared reconstructed image and the cluster center v_t,
Figure BDA0002921206460000171
Figure BDA0002921206460000172
is a new weighted fuzzy factor, representing the weighted fuzzy factor of the jth pixel in the neighborhood of pixel x_i with respect to the cluster center v_t;
Figure BDA0002921206460000173
satisfies
Figure BDA0002921206460000174
where the spatial-distance constraint ζ_dc satisfies
Figure BDA0002921206460000175
and the spatial gray-scale constraint ζ_gc satisfies
Figure BDA0002921206460000176
where
Figure BDA0002921206460000177
denotes all the pixels in the r × r neighborhood of pixel x_j;
M_i is the ratio of the variance to the squared mean,
Figure BDA0002921206460000178
ε_ij is the projection of the mean-square error between the neighborhood pixel x_j and the central pixel x_i in the kernel space, i.e.,
Figure BDA0002921206460000179
the constant 2 is used to strengthen the suppression effect of the neighborhood pixels on the central pixel; η_t is an inter-class dispersion parameter, and the cluster center v_t represents the temperature mean of the pixels of the current class,
Figure BDA00029212064600001710
is the temperature mean of all pixels in the infrared image; the function f_1(v) must satisfy:
Figure BDA00029212064600001711
The membership degree of pixel x_i with respect to the cluster center v_t is obtained by the Lagrange multiplier method:
Figure BDA00029212064600001712
Figure BDA00029212064600001713
The update formula for the cluster center v_t is:
Figure BDA00029212064600001714
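The Gaussian radial-basis similarity between a pixel and a cluster center used in f_1 can be sketched as exp(−||x − v||²/σ²); the bandwidth σ and this exact form are assumptions on our part, since the patent's kernel is given only as a formula image:

```python
import numpy as np

def gaussian_rbf_similarity(x, v, sigma=1.0):
    """Assumed Gaussian radial-basis similarity between a pixel value x and a
    cluster center v: 1 for identical values, decaying toward 0 with distance."""
    return float(np.exp(-((x - v) ** 2) / (sigma ** 2)))

s_near = gaussian_rbf_similarity(0.5, 0.5)   # identical values -> similarity 1
s_far = gaussian_rbf_similarity(0.0, 3.0)    # distant values -> similarity near 0
```

Such a kernelized distance makes the clustering less sensitive to outliers than the raw Euclidean distance, which is the usual motivation for using it in robust FCM variants.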
Step S42, f_2(v) is a single-objective detail-retention function SGDR for the detail-retention problem; considering the local spatial information of the image further guides the segmentation of image pixels and alleviates the edge-blurring problem, and a correlation coefficient m_ij measuring pixel position and pixel color is introduced; the detail-retention function f_2(v) is constructed as shown in the following formula:
Figure BDA0002921206460000181
where Kn is the number of pixels in the low-quality infrared image, c is the number of clusters, v_t is the cluster center, u_ti is the membership of pixel x_i with respect to the cluster center v_t, m ∈ [1, ∞) is a smoothing parameter, and δ_i denotes the local spatial information,
Figure BDA0002921206460000182
N_i is the set of pixels in a neighborhood window centered on the ith pixel, x_a is the ath pixel in N_i,
Figure BDA0002921206460000183
denotes the correlation between the neighborhood pixel x_i and the central pixel v_t; pixels x_i and v_t have spatial coordinates (x_im, y_in) and (v_tm, v_tn) and gray values g(x_i) and g(v_t), respectively; then
Figure BDA0002921206460000184
λ_s is the spatial-scale influence factor of the image,
Figure BDA0002921206460000185
λ_g is a gray-scale influence factor,
Figure BDA0002921206460000186
is the mean gray variance of the neighborhood pixels centered on pixel x_i,
Figure BDA0002921206460000187
is the number of pixels in the neighborhood set N_i; the function f_2(v) must satisfy:
Figure BDA0002921206460000188
The membership degree of pixel x_i with respect to the cluster center v_t is obtained by the Lagrange multiplier method:
Figure BDA0002921206460000189
Figure BDA00029212064600001810
The update formula for the cluster center v_t is:
Figure BDA00029212064600001811
Step S43, f_3(v) is a single-objective edge-preservation function SOEM for the edge-preservation problem; to obtain an accurate segmentation result, an edge-preserving function that segments by gray level is introduced into the objective function, and an amplification function A_ti is introduced to enhance edge information by amplifying the influence of the neighborhood pixel x_i on the membership of the central pixel v_t; the edge-preserving function f_3(v) is constructed as shown in the following formula:
Figure BDA0002921206460000191
where Kn is the number of pixels in the low-quality infrared image, c is the number of clusters, n denotes the gray value of a pixel, u_ti is the membership of the pixel x_i with gray value n with respect to the current cluster center v_t, m ∈ [1, ∞) is a smoothing parameter, U_n is the number of gray levels of the infrared image, and N_n is the number of pixels with gray value n,
Figure BDA0002921206460000192
for the pixels in the low-quality image containing Kn pixels, we have:
Figure BDA0002921206460000193
N_i is the set of neighborhood pixels centered on pixel x_i,
Figure BDA0002921206460000194
is the number of pixels in the set N_i,
Figure BDA0002921206460000195
g(x_i) and g(x_j) denote the gray values of pixel x_i and its neighborhood pixel x_j, respectively,
Figure BDA0002921206460000196
is the average gray-level difference between pixel x_j in the neighborhood set N_i and the central pixel x_i; the function f_3(v) must satisfy:
Figure BDA0002921206460000197
The membership degree of pixel x_i with respect to the cluster center v_t is obtained by the Lagrange multiplier method:
Figure BDA0002921206460000198
Figure BDA0002921206460000199
The update formula for the cluster center v_t is:
Figure BDA00029212064600001910
This completes the construction of the infrared image segmentation function.
In the above technical solution, step five, in which a multi-objective optimization algorithm is used to obtain the weight coefficient of each objective function in the low-quality infrared reconstructed image with Kn pixels, comprises the specific steps of: in the extracted low-quality infrared reconstructed image containing complete defect information, the three objective functions are balanced by a multi-objective optimization algorithm, and the multi-objective optimization problem is set as follows:
min F(v) = [f_1(v), f_2(v), f_3(v)]^T
s.t. v = (v_1, …, v_c)^T
where c is the number of classes and v = (v_1, …, v_c)^T denotes a set of candidate cluster centers. The multi-objective optimization problem is decomposed into several scalar subproblems using weight vectors; the components of each subproblem's weight vector reflect the importance of each objective function to the segmentation objective function.
The specific steps by which the multi-objective algorithm solves for the weight coefficient of each objective function in the low-quality infrared reconstructed image with Kn pixels are as follows:
step S51, initializing parameters of the multi-objective optimization algorithm, and specifically comprising the following steps:
Step S511, set the objective function F(v) of the multi-objective optimization algorithm, the maximum iteration number g_max, the thresholds ζ and ε, the population size Kn, and the number T of weight vectors in each neighborhood;
Step S512, generate Kn uniformly distributed weight vectors λ_1, …, λ_Kn, and for each weight vector compute its T nearest weight vectors B(i) = {i_1, …, i_T}, i = 1, …, Kn,
Figure BDA0002921206460000201
are the T weight vectors nearest to λ_i;
Step S513, sample uniformly in the feasible space of the multi-objective optimization problem to generate an initial population s_1, …, s_Kn, and let FV_i = F(s_i), i = 1, …, Kn;
Step S514, initialize
Figure BDA0002921206460000202
as the optimal value of each objective function in the image-segmentation multi-objective problem;
Step S515, decompose the problem into subproblems using a Tchebycheff-based decomposition model, where the jth subproblem is:
Figure BDA0002921206460000203
In the above formula,
Figure BDA0002921206460000204
is the weight vector of the jth subproblem,
Figure BDA0002921206460000205
controls the weight of the noise-suppression function,
Figure BDA0002921206460000206
controls the weight of the detail-retention function,
Figure BDA0002921206460000207
controls the weight of the edge-preserving function,
Figure BDA0002921206460000208
Figure BDA0002921206460000209
and
Figure BDA00029212064600002010
respectively obtaining the current optimal function values of the three functions;
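The scalarization each subproblem minimizes can be sketched in Python, assuming the standard Tchebycheff form g^te(v | λ, f*) = max_i λ_i·|f_i(v) − f_i*|; the function name and the sample numbers below are illustrative, not values from the patent:

```python
import numpy as np

def tchebycheff(f, lam, f_star):
    """Tchebycheff scalarization: max_i lam_i * |f_i(v) - f_i*|."""
    f = np.asarray(f, dtype=float)
    lam = np.asarray(lam, dtype=float)
    f_star = np.asarray(f_star, dtype=float)
    return float(np.max(lam * np.abs(f - f_star)))

# A candidate whose three objective values sit closer to the reference
# point f* scores lower, so it is preferred for this subproblem.
g_near = tchebycheff([1.1, 2.0, 0.6], [0.2, 0.5, 0.3], [1.0, 2.0, 0.5])
g_far = tchebycheff([3.0, 4.0, 2.0], [0.2, 0.5, 0.3], [1.0, 2.0, 0.5])
```

The same comparison drives the neighborhood update in step S5214: a new solution replaces a neighbor when its scalarized value is no larger.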
step S516, setting an external population EP as an empty set;
step S52, updating the multi-objective optimization algorithm; while the iteration count is less than the maximum iteration number g_max, each iteration first performs step S521 to update the individuals and then performs step S522 to adjust the weight vectors;
step S521, updating the individuals in the population, specifically including:
step S5211, reproduction: randomly select two indices k, l from B(i) and, using the differential evolution algorithm, generate from s^k and s^l a new solution e for the image-segmentation multi-objective problem;
step S5212, improvement: apply the constraint handling proposed in the image-segmentation multi-objective optimization problem to e, producing e';
step S5213, updating the reference point f*: if f_j* < f_j(e'), then f_j* = f_j(e');
step S5214, updating the neighborhood solutions: if the Tchebycheff expression gives g^te(e' | λ^j, f*) ≤ g^te(s^j | λ^j, f*) for j ∈ B(i), then s^j = e' and FV_i = F(e');
Step S522, adjusting the weight vector, specifically includes:
step S5221, calculating the distance from each individual s^i in the population to the current cluster center v, Dist(s^i, v) = ||s^i − v||, and selecting as ideal reference points the U individuals that minimize Dist;
step S5222, finding all individuals s^r that lie in the high-dimensional sphere region B(s_U, r) centered on an ideal reference point s_U with radius r, and computing the standard deviation σ(s_U) of all individuals within B(s_U, r):

σ(s_U) = sqrt( (1/R) Σ_{r=1}^{R} ||s^r − s̄||² )

wherein s̄ is the mean of all individuals distributed in B(s_U, r), R is the number of individuals in B(s_U, r), and s^r, r = 1, …, R, are the individuals distributed in B(s_U, r);
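The sphere-region statistic of step S5222 might be computed as in this sketch, assuming Euclidean distance for the ball test and the population-standard-deviation form σ = sqrt((1/R)·Σ ||s^r − s̄||²); all names are illustrative:

```python
import numpy as np

def sphere_std(population, center, radius):
    """Standard deviation of the individuals lying inside the ball
    B(center, radius); returns None when the region is empty."""
    pop = np.asarray(population, dtype=float)
    center = np.asarray(center, dtype=float)
    # keep only the individuals whose distance to the center is <= radius
    inside = pop[np.linalg.norm(pop - center, axis=1) <= radius]
    if len(inside) == 0:
        return None
    mean = inside.mean(axis=0)
    return float(np.sqrt(np.mean(np.sum((inside - mean) ** 2, axis=1))))

pop = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
sigma = sphere_std(pop, [0.0, 0.0], 1.0)   # only the first two points qualify
```

A small σ means the individuals around that reference point are tightly packed, which is why step S5223 prefers the reference points with the smallest standard deviation.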
step S5223, selecting from the U individuals the Up individuals with the smallest standard deviation σ as preference-region reference points, and performing the following update operations for each such individual s_Up and its corresponding weight vector λ_Up:
step S52231, calculating a base weight vector from the reference point f*, wherein f* collects the optimal value of each objective function in the image-segmentation multi-objective problem;
step S52232, finding the individual s_m in the population at the greatest Euclidean distance from the preference-region reference point s_Up, and finding its corresponding weight vector λ_m;
step S52233, calculating the generated weight vector:

λ_Upnew = λ_Up + step · λ_m

wherein λ_Upnew = (λ_Upnew1, λ_Upnew2, λ_Upnew3) and step is a preset step-length parameter;
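The adjustment of step S52233 is a single vector update; a minimal sketch, assuming the new weight vector is renormalized so its components sum to one (a common convention for decomposition weights that the text does not state explicitly):

```python
import numpy as np

def adjust_weight(lam_up, lam_m, step=0.1):
    """lambda_Upnew = lambda_Up + step * lambda_m, then renormalized
    (the renormalization is an assumption, not from the patent)."""
    lam_new = np.asarray(lam_up, dtype=float) + step * np.asarray(lam_m, dtype=float)
    return lam_new / lam_new.sum()

lam = adjust_weight([0.2, 0.5, 0.3], [1.0, 0.0, 0.0], step=0.1)
```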
step S52234, randomly generating a new solution s_new between the preference-region reference point s_Up and the most distant individual s_m;
step S52235, taking the newly generated individual s_new as a new cluster center: with s_new as the center, calculating the current membership degrees according to the membership and cluster-center calculation formulas corresponding to the three objective functions that were set, and then calculating the new cluster center v_new from the current membership degrees;
step S52236, replacing s_Up with the new individual s_new;
Step S523, update EP: removing all vectors dominated by F (e '), and adding e ' to the EP if F (e ') is not dominated by vectors within the EP;
step S53, terminating the iteration: if the termination condition g = g_max is satisfied, output the EP as the optimal cluster-center set for the image-segmentation multi-objective problem; otherwise increase the iteration count g to g + 1 and return to step S52.
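The external-population update of step S523 is a non-dominated archive; a minimal sketch under the usual Pareto-dominance definition for minimization (function and variable names are illustrative):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_ep(ep, f_new):
    """Remove archive members dominated by f_new; add f_new only if no
    remaining member dominates it."""
    kept = [f for f in ep if not dominates(f_new, f)]
    if not any(dominates(f, f_new) for f in kept):
        kept.append(f_new)
    return kept

ep = update_ep([(1, 2, 3), (2, 2, 2)], (1, 1, 1))   # new point dominates both
```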
In the above technical solution, the specific step of constructing the full-pixel infrared image segmentation objective function in the sixth step includes: inputting the weight coefficient obtained in the fifth step into a second layer of the double-layer segmentation model, wherein the second layer of the double-layer segmentation model is an image segmentation layer, and performing image segmentation on the full-pixel infrared image with the number of pixel points being M multiplied by N obtained through reconstruction by using the double-layer segmentation model;
in the second layer of the double-layer segmentation model, the following optimization function is set for the full pixel set of the infrared thermal image with M × N pixel points:

f4(v) = ω1·f1(v) + ω2·f2(v) + ω3·f3(v)
when solving this objective function, because the separability measure in the detail-retention function f2(v) does not contain the membership degree u_ti of pixel x_i with respect to the cluster center v_t, the membership function and the cluster center are solved by the Lagrange multiplier method. For simplicity, let M(x_i, v_t) = ||x_i − v_t||² + ||δ_i − v_t||²; the membership update formula is then:

u_ti = M(x_i, v_t)^(−1/(m−1)) / Σ_{k=1}^{c} M(x_i, v_k)^(−1/(m−1))

and the cluster-center update formula is:

v_t = Σ_{i=1}^{M×N} u_ti^m (x_i + δ_i) / ( 2 Σ_{i=1}^{M×N} u_ti^m )

This completes the construction of the full-pixel infrared image segmentation objective function.
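With the shorthand M(x_i, v_t) = ||x_i − v_t||² + ||δ_i − v_t||², the two alternating updates can be sketched for scalar (gray-value) pixels as follows; this is one illustrative reading of the updates under the standard Lagrange-multiplier derivation, not the patent's implementation, and all names are assumptions:

```python
import numpy as np

def update_memberships(X, delta, V, m=2.0):
    """u_ti from M(x_i, v_t) = ||x_i - v_t||^2 + ||delta_i - v_t||^2."""
    # M_ti[t, i] = M(x_i, v_t); a small floor avoids dividing by zero
    # when a pixel coincides exactly with a cluster center.
    M_ti = (X[None, :] - V[:, None]) ** 2 + (delta[None, :] - V[:, None]) ** 2
    M_ti = np.maximum(M_ti, 1e-12)
    w = M_ti ** (-1.0 / (m - 1.0))
    return w / w.sum(axis=0, keepdims=True)   # memberships sum to 1 per pixel

def update_centers(X, delta, U, m=2.0):
    """v_t = sum_i u_ti^m (x_i + delta_i) / (2 sum_i u_ti^m)."""
    Um = U ** m
    return (Um @ (X + delta)) / (2.0 * Um.sum(axis=1))

X = np.array([0.0, 0.2, 9.8, 10.0])   # toy 1-D "pixels"
delta = X.copy()                       # local spatial info set equal to the pixels
V = np.array([0.0, 10.0])              # two cluster centers
U = update_memberships(X, delta, V)
V_new = update_centers(X, delta, U)
```

Alternating these two updates until the centers stop moving is exactly the iteration described in step seven below.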
In the above technical solution, the step seven of implementing infrared full-pixel image segmentation on the second layer of the double-layer segmentation model specifically comprises the steps of:
step S71, initializing the iteration count t, generating an initial cluster center, and calculating the initial membership degrees;
step S72, calculating the segmentation objective function f4(v) under the current iteration count;
step S73, updating the membership degrees according to the membership update formula;
step S74, updating the cluster centers according to the cluster-center update formula;
step S75, calculating the updated objective function;
step S76, if the change of the objective function between iterations falls below the threshold, or t = T_max, ending the segmentation algorithm and assigning each pixel point to the defect region for which its membership value is largest, thereby obtaining the segmented image, namely the final segmentation result of the whole observation image for the full-pixel infrared reconstructed image.
In conclusion, the method provides a defect-detection algorithm based on multi-objective optimization segmentation. An automatic variable-interval-search segmentation method divides the infrared video to obtain the data set to be classified, which contains temperature curves with typical change characteristics. The FCM algorithm clusters this data set, and soft partitioning by the membership degrees between pixel points and cluster centers improves the reliability of the classification results. Each classified data subset contains the corresponding temperature-change characteristics. The infrared thermal image sequence is reconstructed from the principal characteristics to obtain infrared reconstructed images of the defects, reflecting the defect characteristics of the test piece. The result image obtained by target segmentation of the infrared reconstructed image containing prominent defects not only achieves noise elimination but also ensures detail retention, while edge preservation further improves the precision of the image segmentation.
Example (b):
In the present embodiment, the thermal infrared imager acquires 502 frames of images with a pixel size of 512 × 640, i.e. there are 327680 temperature points in each image, and the temperature value of each temperature point is recorded 502 times; this time-varying temperature record constitutes the transient thermal response (TTR) of the temperature point. After the effective transient thermal responses are extracted from the infrared thermal sequence, region division is carried out according to defect type, and a typical transient thermal response is extracted from each divided region. In extracting the effective transient thermal responses, the parameter Re_CL = 0.92 is set.
From the 327680 temperature points, 375 effective transient thermal responses containing complete defect information were extracted. According to the pixel points, the membership degree to each class center is softened, and 185, 43, and 147 thermal response curves are divided into the corresponding classes. A typical transient thermal response representing the defect information is extracted from each type of defect region, and the typical transient thermal responses representing the three defect regions form a matrix X1. The original two-dimensional matrix P(x, y) of size 502 × 327680 is linearly transformed using the pseudo-inverse of X1 to obtain a two-dimensional image matrix O; each row of O is reshaped to the original image size of 512 × 640, giving 3 infrared reconstructed images of size 512 × 640. The infrared defect reconstructed images and the corresponding TTR curves are shown in figures 3, 4, 5, 6, 7 and 8, wherein figures 3 and 4 are the TTR curve and corresponding infrared reconstructed image of the composite impact pit background region, figures 5 and 6 are those of the composite impact pit interior, and figures 7 and 8 are those of the composite impact pit edge, respectively.
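The reconstruction step, linearly transforming the flattened thermal video by the pseudo-inverse of the typical-TTR matrix and reshaping each row back to an image, can be sketched with NumPy; the image size below is scaled down and the data are random, purely for illustration:

```python
import numpy as np

# Illustrative sizes: d = 502 frames, K = 3 typical TTRs, and a small
# 4 x 5 image in place of the real 512 x 640.
d, K, M, N = 502, 3, 4, 5
rng = np.random.default_rng(0)
X1 = rng.standard_normal((d, K))      # matrix of typical transient thermal responses
P = rng.standard_normal((d, M * N))   # flattened thermal video, one frame per row

O = np.linalg.pinv(X1) @ P            # K x (M*N): K reconstructed images, flattened
images = O.reshape(K, M, N)           # reshape each row back to the image size
```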
From the classified TTR curves shown in figures 3, 5 and 7 it can be observed that the TTRs in the different classes differ in temperature-rise rate and temperature-fall rate; according to these differences and the highlighted color regions of the infrared reconstructed images, the type of region represented in each reconstructed image can be determined. The region types of the test piece are the background region, the interior of the composite impact pit, and the edge of the composite impact pit.
The maximum generation number of the multi-objective optimization segmentation algorithm is set to 200, and the weight vector is adjusted, based on preference, once each time an individual is updated. In the objective functions set according to the segmentation performance, the smoothing parameter m is set to 2 and the number of clusters c is set to 3. A curved PF front formed in space by the Pareto optimal set is obtained as shown in figure 2. A trade-off solution is selected from the PF front; its weight-vector components give the weight coefficient of each objective function, from which the full-pixel infrared image segmentation objective-function model is constructed and image segmentation is realized. The segmented images are shown in figures 9 and 10, wherein figure 9 is the segmentation result of the infrared reconstructed image at the edge of the composite impact pit and figure 10 is the segmentation result of the infrared reconstructed image inside the composite impact pit. The experimental results confirm that the constructed noise-suppression function SGNS f1(v), detail-retention function SGDR f2(v), and edge-preservation function SOEM f3(v) respectively suppress noise, retain details, and preserve edges, accurately separating the defect region from the background region and realizing accurate segmentation of the infrared image.
The number of apparatuses and the scale of the process described herein are intended to simplify the description of the present invention. Applications, modifications and variations of the present invention will be apparent to those skilled in the art.
While embodiments of the invention have been described above, the invention is not limited to the applications set forth in the description and the embodiments; it is fully applicable in the various fields of endeavor to which it pertains, and further modifications may readily be made by those skilled in the art. The invention is therefore not limited to the details shown and described herein, provided such modifications do not depart from the general concept defined by the appended claims and their equivalents.

Claims (7)

1. An automatic identification method for a damaged area of an aerospace composite material is characterized by comprising the following steps:
step one, after effective information is extracted from collected test piece infrared data, classifying the test piece infrared data according to defect types and extracting typical transient thermal response of each type of defects;
secondly, forming a transformation matrix by the selected typical transient thermal response to obtain an infrared reconstruction image;
step three, calculating the variation coefficient of pixels of the reconstructed infrared image with K dimensions of M multiplied by N, and sampling out the most prominent pixels by measuring the homogeneity of the neighborhood pixels and the central pixels to obtain K inferior infrared reconstructed images containing complete defect information and containing Kn pixel points;
fourthly, in the low-quality infrared reconstruction image which contains Kn pixel points and contains complete defect information and corresponds to each infrared reconstruction image obtained through processing, the segmentation performance in three aspects of noise removal, detail retention and edge maintenance is measured by utilizing multiple targets, and the weight coefficient of each segmentation performance is obtained to construct a segmentation objective function; constructing an infrared image segmentation function under the guidance of three purposes of noise removal, detail retention and edge maintenance;
step five, constructing a first layer of a double-layer segmentation model, wherein the first layer of the double-layer segmentation model is a weight coefficient determining layer, and setting a multi-objective optimization problem by adopting a multi-objective optimization algorithm to balance three set objective functions in the extracted low-quality infrared reconstruction image containing complete defect information; the method comprises the following steps of obtaining weight coefficients of objective functions for realizing each segmentation performance by using a multi-objective optimization algorithm and combining weight vectors, wherein the specific steps comprise:
s51, initializing parameters of the multi-objective optimization algorithm; acquiring Kn weight vectors which are uniformly distributed, and calculating T weight vectors which are nearest to each weight vector; uniformly sampling in a feasible space which meets a multi-objective optimization problem to generate an initial population; initializing a multi-objective optimization function; decomposing the subproblems by adopting a decomposition model based on Chebyshev; setting an external population EP as an empty set;
s52, updating individuals in the population by an evolutionary multi-objective optimization algorithm; after updating the individuals each time, taking noise elimination as preference, and adjusting the weight vector according to the preference;
step S53, selecting a trade-off solution to obtain a weight coefficient for removing noise, retaining details and keeping the edge function segmentation performance;
step six, constructing a full-pixel infrared image segmentation target function, inputting the weight coefficient obtained in the step five into a second layer of the double-layer segmentation model, wherein the second layer of the double-layer segmentation model is an image segmentation layer, and performing image segmentation on the full-pixel infrared image with the number of pixel points of M multiplied by N obtained by reconstruction by using the segmentation model;
and step seven, according to the membership degree and the clustering center updating formula which are obtained by the full-pixel infrared image segmentation target function constructed in the step six, inputting a threshold value and the maximum iteration times for stopping judgment of the algorithm, and realizing infrared full-pixel image segmentation on an image segmentation layer to obtain a segmented image of the test piece defect infrared image.
2. The method for automatically identifying the damaged area of the aerospace composite material as claimed in claim 1, wherein the specific method in the first step comprises: extracting the effective transient thermal responses of an acquired d-dimensional infrared thermal image sequence S(m, n, y), wherein m and n respectively represent the mth row and the nth column of the three-dimensional matrix, and the third dimension y represents the frame number of the infrared thermal image; and dividing the extracted effective transient thermal responses into K regions according to the defect type K, and extracting from each divided defect region the typical transient thermal response that best represents the defect characteristics of that class.
3. The method for automatically identifying the damaged area of the aerospace composite material as claimed in claim 2, wherein the method for obtaining the infrared reconstruction image in the second step comprises: obtaining a linear change matrix H1 with dimensionality d × K from the K d-dimensional typical transient thermal responses extracted in step one; converting S(m, n, y) from a three-dimensional matrix into a two-dimensional matrix, namely vectorizing each frame of image in the infrared thermal video, dereferencing and arranging each frame of image matrix according to rows to obtain a vector containing the pixel-point temperature information of one frame as a row vector of a new matrix, thereby constructing a new two-dimensional matrix P(x, y) of size a × b, with a = d and b = M × N; performing the linear transformation of P by means of the matrix H1, i.e.

O = H1† · P

wherein H1† is the K × d dimensional pseudo-inverse matrix of H1; and dereferencing the rows of the two-dimensional image matrix O to form two-dimensional images of the original image size, obtaining K infrared reconstructed images of size M × N.
4. The automatic identification method for the damaged area of the aerospace composite material as claimed in claim 1, wherein the infrared image segmentation function constructed in step four under the guidance of the three purposes of noise removal, detail preservation and edge preservation is:

f4(v) = ω1·f1(v) + ω2·f2(v) + ω3·f3(v)

wherein ω1, ω2, ω3 are respectively the weight coefficients of the three objective functions;
step S41, f1(v) is the single-target noise-suppression function SGNS for solving the noise problem; fuzzy factors are introduced into the FCM algorithm, and, on the basis of determining the spatial constraint relationship among the pixel points from the Euclidean distance d_ij between pixel points in a neighborhood window of the reconstructed image, an inter-class dispersion measurement function is introduced for the problem that similar classes with small differences are difficult to distinguish; in the designed f1(v) expression, Kn is the number of pixel points in the low-quality infrared image, c is the cluster number, u_ti is the membership degree of pixel point x_i with respect to the cluster center v_t, W_i^r is a neighborhood window of size r × r centered on pixel x_i, x_j is the jth pixel in the window of the central pixel x_i of the infrared reconstructed image, and m ∈ [1, ∞) is a smoothing parameter; K(x_i, v_t) is the Gaussian radial basis similarity measurement function of pixel x_i and cluster center v_t in the infrared reconstructed image; G_tj is a new weighted fuzzy factor representing the weighted fuzziness of the jth pixel in the neighborhood of pixel x_i with respect to the cluster center v_t, and satisfies the spatial distance constraint ζ_dc and the spatial gray-scale constraint ζ_gc computed over all pixel points in the r × r neighborhood of pixel x_j; M_i is the ratio of the variance to the mean square, ε_ij is the projection value in kernel space of the mean square error between the neighborhood pixel point x_j and the central pixel point x_i, and the constant 2 is used to enhance the suppression effect of the neighborhood pixel points on the central pixel point; η_t is the inter-class dispersion parameter, the cluster center v_t represents the temperature mean of the pixel points of the current class, and x̄ is the temperature mean of all pixel points in the infrared image; the function f1(v) satisfies the membership constraint Σ_{t=1}^{c} u_ti = 1, u_ti ∈ [0, 1]; the membership degree u_ti of pixel x_i with respect to the cluster center v_t and the update formula of the cluster center v_t are then obtained by the Lagrange multiplier method;
step S42, f2(v) is the single-target detail-retention function SGDR for solving the detail-retention problem; considering the local spatial information of the image further guides the segmentation of image pixels and alleviates the edge-blurring problem, and a correlation coefficient m_ij measuring the pixel position and the pixel color is introduced; in the constructed detail-retention function f2(v), Kn is the number of pixel points in the low-quality infrared image, c is the cluster number, v_t is the cluster center, u_ti is the membership degree of pixel point x_i with respect to the cluster center v_t, m ∈ [1, ∞) is a smoothing parameter, δ_i represents the local spatial information, N_i is the set of pixels in a neighborhood window centered on the ith pixel, and x_a is the ath pixel in N_i; m_ij represents the correlation between the neighborhood pixel x_i and the central pixel v_t, where pixels x_i and v_t have spatial coordinates (x_im, y_in) and (v_tm, v_tn) and gray values g(x_i) and g(v_t) respectively; λ_s is the spatial-scale influence factor, λ_g is the gray-scale influence factor given by the mean gray variance of the neighborhood pixels centered on pixel x_i, and |N_i| is the number of pixels in the neighborhood pixel set N_i; the function f2(v) satisfies the membership constraint Σ_{t=1}^{c} u_ti = 1, u_ti ∈ [0, 1]; the membership degree u_ti of pixel x_i with respect to the cluster center v_t and the update formula of the cluster center v_t are obtained by the Lagrange multiplier method;
step S43, f3(v) is the single-target edge-preservation function SOEM for solving the edge-preservation problem; in order to obtain an accurate segmentation result, an edge-preservation function that segments according to gray level is introduced into the objective function, and to enhance the edge information an amplification function A_ti is introduced to amplify the influence of the neighborhood pixels x_i on the membership to the central pixel v_t; in the constructed edge-preservation function f3(v), Kn is the number of pixel points in the low-quality infrared image, c is the cluster number, n represents the gray value of the pixel points, u_ti represents the membership degree of the pixel point x_i with gray value n with respect to the current cluster center v_t, m ∈ [1, ∞) is a smoothing parameter, U_n is the number of gray levels of the infrared image, and N_n is the number of pixel points with gray value n; for the pixel points in the low-quality image containing Kn pixel points, N_i is the set of neighborhood pixels centered on pixel x_i, |N_i| is the number of pixel points in the set N_i, g(x_i) and g(x_j) respectively represent the gray values of pixel point x_i and its neighborhood pixel x_j, and the amplification function is built from the average gray difference between the pixels x_j in the neighborhood pixel set N_i and the central pixel x_i; the function f3(v) satisfies the membership constraint Σ_{t=1}^{c} u_ti = 1, u_ti ∈ [0, 1]; the membership degree u_ti of pixel x_i with respect to the cluster center v_t and the update formula of the cluster center v_t are obtained by the Lagrange multiplier method, thereby completing the construction of the infrared image segmentation function.
5. The method for automatically identifying the damaged area of the aerospace composite material as claimed in claim 1, wherein the step five of using a multi-objective optimization algorithm to obtain the weight coefficient of each objective function in the low-quality infrared reconstructed image with the number of pixels Kn comprises the following specific steps: in the extracted low-quality infrared reconstructed image containing complete defect information, three objective functions are balanced by adopting a multi-objective optimization algorithm, and the multi-objective optimization problem is set as follows:

min F(v) = [f1(v), f2(v), f3(v)]^T
s.t. v = (v1, …, vc)^T

wherein c is the number of classifications and v = (v1, …, vc)^T represents a set of candidate cluster centers; the multi-objective optimization problem is decomposed into a number of scalar subproblems by means of weight vectors, and the components of each subproblem's weight vector reflect the importance of each objective function within the segmentation objective function;
the specific steps of the multi-target algorithm for solving the weight coefficient of each target function in the low-quality infrared reconstructed image with the number of pixels Kn are as follows:
step S51, initializing the parameters of the multi-objective optimization algorithm, specifically comprising:
step S511, setting the objective function F(v) of the multi-objective optimization algorithm, the maximum iteration number g_max, the thresholds ζ and ε, the population size Kn, and the number T of weight vectors in each neighborhood;
step S512, acquiring Kn uniformly distributed weight vectors λ^1, …, λ^Kn, and for each weight vector λ^i computing the index set B(i) = {i_1, …, i_T}, i = 1, …, Kn, of the T weight vectors nearest to λ^i;
step S513, uniformly sampling the feasible space of the multi-objective optimization problem to generate an initial population s^1, …, s^Kn, and letting FV_i = F(s^i), i = 1, …, Kn;
step S514, initializing the reference point f* = (f1*, f2*, f3*)^T with the current optimal value of each objective function in the image-segmentation multi-objective problem;
step S515, decomposing the subproblems by adopting the Tchebycheff-based decomposition model, wherein the jth subproblem is:

g^te(v | λ^j, f*) = max_{1≤i≤3} { λ_i^j · |f_i(v) − f_i*| }

in which λ^j = (λ_1^j, λ_2^j, λ_3^j)^T is the weight vector of the jth subproblem: λ_1^j controls the weight of the noise-suppression function, λ_2^j controls the weight of the detail-retention function, and λ_3^j controls the weight of the edge-preservation function; f1*, f2* and f3* are respectively the current optimal values of the three functions;
step S516, setting an external population EP as an empty set;
step S52, updating the multi-objective optimization algorithm; while the iteration count is less than the maximum iteration number g_max, each iteration first performs step S521 to update the individuals and then performs step S522 to adjust the weight vectors;
step S521, updating the individuals in the population, specifically including:
step S5211, reproduction: randomly select two indices k, l from B(i) and, using the differential evolution algorithm, generate from s^k and s^l a new solution e for the image-segmentation multi-objective problem;
step S5212, improvement: apply the constraint handling proposed in the image-segmentation multi-objective optimization problem to e, producing e';
step S5213, updating the reference point f*: if f_j* < f_j(e'), then f_j* = f_j(e');
step S5214, updating the neighborhood solutions: if the Tchebycheff expression gives g^te(e' | λ^j, f*) ≤ g^te(s^j | λ^j, f*) for j ∈ B(i), then s^j = e' and FV_i = F(e');
Step S522, adjusting the weight vector, specifically includes:
step S5221, calculating the distance from each individual s^i in the population to the current cluster center v, Dist(s^i, v) = ||s^i − v||, and selecting as ideal reference points the U individuals that minimize Dist;
step S5222, finding all individuals s^r that lie in the high-dimensional sphere region B(s_U, r) centered on an ideal reference point s_U with radius r, and computing the standard deviation σ(s_U) of all individuals within B(s_U, r):

σ(s_U) = sqrt( (1/R) Σ_{r=1}^{R} ||s^r − s̄||² )

wherein s̄ is the mean of all individuals distributed in B(s_U, r), R is the number of individuals in B(s_U, r), and s^r, r = 1, …, R, are the individuals distributed in B(s_U, r);
step S5223, selecting from the U individuals the Up individuals with the smallest standard deviation σ as preference-region reference points, and performing the following update operations for each such individual s_Up and its corresponding weight vector λ_Up:
step S52231, calculating a base weight vector from the reference point f*, wherein f* collects the optimal value of each objective function in the image-segmentation multi-objective problem;
step S52232, finding the individual s_m in the population at the greatest Euclidean distance from the preference-region reference point s_Up, and finding its corresponding weight vector λ_m;
Step S52233, calculating the generated weight vector:
λUpnew=λUp+step·λm
wherein λ isUpnew=(λUpnew1Upnew2Upnew3) Step is a set Step length parameter;
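The weight-vector generation rule λ_Upnew = λ_Up + step·λ_m of step S52233 amounts to a single vector update; a minimal sketch with hypothetical weight vectors and step length:

```python
import numpy as np

lam_up = np.array([0.5, 0.3, 0.2])   # weight vector of a preference-area reference point
lam_m = np.array([0.2, 0.2, 0.6])    # weight vector of the farthest individual
step = 0.1                           # set step-length parameter

# Step S52233: generated weight vector (lambda_Upnew1, lambda_Upnew2, lambda_Upnew3)
lam_upnew = lam_up + step * lam_m
```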
Step S52234, using the preference-area reference point e_Up and the farthest individual e_m, randomly generate a new solution e_new [generation formula given only as an image in the source];
Step S52235, take the newly generated individual e_new as the new clustering centre: with e_new as the centre, compute the current memberships according to the membership and clustering-centre calculation formulas corresponding to the three set objective functions, and from the current memberships compute the new clustering centre v_new;
Step S52236, replace e_Up with the new individual e_new;
Step S523, update EP: remove from EP all vectors dominated by F(e'), and add F(e') to EP if F(e') is not dominated by any vector in EP;
Step S53, terminate the iteration: if the termination condition g = g_max is satisfied, output EP, which gives the optimal clustering-centre set for the image-segmentation multi-objective problem; otherwise set g = g + 1 and return to step S52.
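The EP maintenance of step S523 is a standard Pareto-dominance filter; a minimal sketch, assuming minimization of all objectives (the objective vectors below are hypothetical):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_ep(ep, f_new):
    """Step S523: drop vectors dominated by F(e'); add F(e') if nothing left in EP dominates it."""
    ep = [f for f in ep if not dominates(f_new, f)]
    if not any(dominates(f, f_new) for f in ep):
        ep.append(f_new)
    return ep

ep = [(0.3, 0.8, 0.5), (0.6, 0.2, 0.9)]
ep = update_ep(ep, (0.2, 0.7, 0.4))   # dominates the first entry, so it replaces it
```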
6. The automatic identification method for a damaged area of an aerospace composite material according to claim 1, wherein the sixth step of constructing the full-pixel infrared image segmentation objective function specifically comprises: inputting the weight coefficients obtained in the fifth step into the second layer of the double-layer segmentation model, the second layer being the image segmentation layer, and performing image segmentation with the double-layer segmentation model on the reconstructed full-pixel infrared image having M×N pixel points;
in the second layer of the double-layer segmentation model, the following optimization function is constructed for the full-pixel infrared thermal image with M×N pixel points:

min J(U, V) = Σ_{t=1}^{K} Σ_{i=1}^{M×N} u_ti^m ( ||x_i − v_t||² + ||δ_i − v_t||² ) + f2(V), subject to Σ_{t=1}^{K} u_ti = 1,

where u_ti is the membership of pixel x_i with respect to the clustering centre v_t, δ_i is the filtered grey value associated with pixel x_i, m is the fuzzification exponent, K is the number of clusters, and f2(V) is the detail-preserving separability measure;
when solving this objective function, since the separability measure of the detail-preserving function f2(V) does not contain the membership u_ti of pixel x_i with respect to the clustering centre v_t, the membership function and the clustering centre under the Lagrange multiplier method are solved for the following function:
J(U, V) = Σ_{t=1}^{K} Σ_{i=1}^{M×N} u_ti^m ( ||x_i − v_t||² + ||δ_i − v_t||² ), subject to Σ_{t=1}^{K} u_ti = 1;
for simplicity, let M (x)i,vt)=||xi-vt||2+||δi-vt||2Then, the update formula of the membership degree is:
u_ti = 1 / Σ_{k=1}^{K} ( M(x_i, v_t) / M(x_i, v_k) )^{1/(m−1)}
meanwhile, the update formula of the clustering centre is:

v_t = Σ_{i=1}^{M×N} u_ti^m (x_i + δ_i) / ( 2 Σ_{i=1}^{M×N} u_ti^m )
This completes the construction of the full-pixel infrared image segmentation objective function.
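The membership and clustering-centre update formulas of claim 6 can be sketched as below; the pixel values x_i, filtered values δ_i and initial centres are hypothetical, and a fuzzification exponent m = 2 is assumed:

```python
import numpy as np

def update_memberships(x, delta, v, m=2.0):
    """u_ti = 1 / sum_k (M(x_i,v_t)/M(x_i,v_k))^(1/(m-1)),
    with M(x_i, v_t) = ||x_i - v_t||^2 + ||delta_i - v_t||^2."""
    M = (x[None, :] - v[:, None]) ** 2 + (delta[None, :] - v[:, None]) ** 2
    inv = M ** (-1.0 / (m - 1.0))
    return inv / inv.sum(axis=0, keepdims=True)   # columns sum to 1

def update_centers(u, x, delta, m=2.0):
    """v_t = sum_i u_ti^m (x_i + delta_i) / (2 sum_i u_ti^m)."""
    w = u ** m
    return (w @ (x + delta)) / (2.0 * w.sum(axis=1))

# Hypothetical 1-D grey-level pixels x_i and their filtered values delta_i
x = np.array([0.1, 0.15, 0.8, 0.85])
delta = np.array([0.12, 0.14, 0.82, 0.83])
v = np.array([0.2, 0.7])              # K = 2 clustering centres

u = update_memberships(x, delta, v)
v_new = update_centers(u, x, delta)
```

Pixels near grey level 0.1 receive a high membership in the first cluster, and the updated centres move toward the two intensity groups.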
7. The aerospace composite material damage region automatic identification method according to claim 1, wherein the seventh step of achieving infrared full-pixel image segmentation on the second layer of the double-layer segmentation model specifically comprises the following steps:
Step S71, initialize the iteration number t, generate the initial clustering centres and compute the initial memberships;
Step S72, compute the segmentation objective function J^(t) at the current iteration number;
Step S73, update the memberships according to the formula u_ti = 1 / Σ_{k=1}^{K} ( M(x_i, v_t) / M(x_i, v_k) )^{1/(m−1)};
Step S74, update the clustering centres according to the formula v_t = Σ_{i=1}^{M×N} u_ti^m (x_i + δ_i) / ( 2 Σ_{i=1}^{M×N} u_ti^m );
Step S75, compute the updated objective function J^(t+1);
Step S76, if |J^(t+1) − J^(t)| < ε or t = T_max, end the segmentation algorithm and assign each pixel point to the defect region for which its membership value is largest, thereby obtaining the segmented image, i.e. the final segmentation result of the full-pixel infrared reconstruction image; otherwise set t = t + 1 and return to step S73.
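The iteration of steps S71–S76 can be sketched as a single alternating-update loop; ε, T_max, the 1-D pixel values and the initial centres below are hypothetical (δ_i is taken equal to x_i for simplicity):

```python
import numpy as np

def segment(x, delta, v, m=2.0, eps=1e-6, t_max=100):
    """Steps S71-S76: alternate membership / clustering-centre updates until the
    objective J changes by less than eps or t reaches T_max, then label each
    pixel with the cluster of maximum membership."""
    J_prev = np.inf
    for t in range(t_max):
        M = (x[None, :] - v[:, None]) ** 2 + (delta[None, :] - v[:, None]) ** 2
        inv = M ** (-1.0 / (m - 1.0))
        u = inv / inv.sum(axis=0, keepdims=True)          # step S73
        w = u ** m
        v = (w @ (x + delta)) / (2.0 * w.sum(axis=1))     # step S74
        J = np.sum(w * ((x[None, :] - v[:, None]) ** 2    # step S75
                        + (delta[None, :] - v[:, None]) ** 2))
        if abs(J_prev - J) < eps:                         # step S76
            break
        J_prev = J
    return u.argmax(axis=0), v                            # max-membership labels

x = np.array([0.1, 0.15, 0.8, 0.85, 0.12])
labels, centers = segment(x, x.copy(), np.array([0.0, 1.0]))
```

The low-intensity and high-intensity pixels end up in two different regions, which is the max-membership assignment of step S76.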
CN202110118568.3A 2021-01-28 2021-01-28 Automatic identification method for damaged area of aerospace composite material Active CN112818822B (en)

Publications (2)

Publication Number Publication Date
CN112818822A CN112818822A (en) 2021-05-18
CN112818822B true CN112818822B (en) 2022-05-06

Family ID: 75859916


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant