CN112946081A - Ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion - Google Patents
Ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion
- Publication number
- CN112946081A (application number CN202110187998.0A)
- Authority
- CN
- China
- Prior art keywords
- matrix
- feature
- signal
- fusion
- dimensional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/04—Analysing solids
- G01N29/06—Visualisation of the interior, e.g. acoustic microscopy
- G01N29/0654—Imaging
- G01N29/069—Defect imaging, localisation and sizing using, e.g. time of flight diffraction [TOFD], synthetic aperture focusing technique [SAFT], Amplituden-Laufzeit-Ortskurven [ALOK] technique
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/44—Processing the detected response signal, e.g. electronic circuits specially adapted therefor
- G01N29/4454—Signal recognition, e.g. specific values or portions, signal events, signatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N29/00—Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
- G01N29/44—Processing the detected response signal, e.g. electronic circuits specially adapted therefor
- G01N29/4481—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2291/00—Indexing codes associated with group G01N29/00
- G01N2291/02—Indexing codes associated with the analysed material
- G01N2291/028—Material parameters
- G01N2291/0289—Internal structure, e.g. defects, grain size, texture
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention provides an ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion, which comprises the following steps: converting the acquired three-dimensional A-scan signal matrix of the object to be detected into a two-dimensional signal matrix; analyzing the converted two-dimensional signal matrix by principal component analysis and extracting the k most discriminative features of the signals; training a neural network with the extracted k features as input to obtain a multi-feature fusion classifier; and performing signal identification with the trained classifier to obtain a multi-feature fusion identification result matrix, which is reshaped according to the signal arrangement of the original three-dimensional A-scan signal matrix to form an image. The invention intelligently extracts the defect feature information in the acquired signal matrix with a dimensionality-reduction machine learning algorithm and performs fusion imaging with the extracted feature information, so that rich defect information is fused into the image, the signal-to-noise ratio of defect imaging can be greatly improved, and the method is applicable to C-scan imaging with any ultrasonic detection technique.
Description
Technical Field
The invention belongs to the technical field of ultrasonic nondestructive testing, and particularly relates to an ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion.
Background
With the rapid development of computer technology and digital image processing, ultrasonic nondestructive testing can present defects in an intuitive image form, providing more accurate and intuitive test results, and is now widely applied in medical diagnosis and industrial inspection. Depending on the ultrasonic detection technique and scanning mode, the main ultrasonic imaging modes are C-scan, B-scan, D-scan and S-scan imaging, all of which are built from acquired waveform signals (A-scan signals). FIG. 5 shows a filtered A-scan signal from a titanium alloy test block with two embedded inclusion defects; the two echoes in the signal represent the two defects. A C-scan image is obtained by visualizing the amplitudes of all A-scans in the scanning plane at the same time; it shows the horizontal projection of the defects in the workpiece, so the outlines and severity of the defects can be seen clearly and intuitively. FIG. 6 shows a C-scan image of the inclusion defects of the titanium alloy test block of FIG. 5.
Existing ultrasonic imaging methods mainly use the amplitude of the A-scan signal as the characteristic quantity that reveals a defect in the image. However, when the amplitude of the defect signal is close to that of the interference (noise) signal, especially for tiny defects, the C-scan image has a strong noise background and many false defects appear, which easily causes misjudgment and missed detection. Moreover, a defect changes not only the amplitude of the A-scan signal but also its waveform, phase and other characteristics, and exploiting these specific characteristics requires researchers to fully understand the interaction mechanism between ultrasound and defects.
Therefore, there is a need in the art to develop an algorithm scheme capable of intelligently extracting and fusing defect features for ultrasound imaging.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention aims to provide an ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion, which can intelligently extract defect feature information from signals and perform fusion imaging with the extracted feature information.
In order to solve the technical problems, the invention adopts the following technical scheme:
an ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion, comprising the following steps:
step S1, converting the acquired three-dimensional A-scan signal matrix of the object to be detected into a two-dimensional signal matrix;
step S2, analyzing the converted two-dimensional signal matrix by principal component analysis and extracting the k most discriminative features of the signals;
step S3, training a neural network with the extracted k features as input to obtain a multi-feature fusion classifier;
step S4, performing signal identification with the trained classifier to obtain a multi-feature fusion identification result matrix, and reshaping the identification result matrix according to the signal arrangement of the original three-dimensional A-scan signal matrix to form an image.
Further, in step S1, the signal conversion process is: the probe scans the object to be detected in a preset order to acquire a three-dimensional A-scan signal matrix S(x, y, n), which is then converted into a two-dimensional signal matrix S(m, n), where x is the number of transverse scanning points of the probe, y is the number of longitudinal scanning points of the probe, m = x × y is the total number of signals, and n is the signal sampling length.
Further, the step S2 includes:
step S21, decentralizing the signal matrix S_{m×n}: the amplitude of each element is reduced by the mean of its column, i.e. x_{ij} = s_{ij} − (1/m)·Σ_{i=1}^{m} s_{ij}, obtaining the decentralized matrix X_{m×n};
step S22, computing the covariance matrix of the decentralized matrix: C = (1/m)·XᵀX, an n × n matrix;
step S23, solving the eigenvalues and eigenvectors of C: C·e_i = λ_i·e_i, where E = [e_1, e_2, …, e_n] is the eigenvector matrix of C, e_i (i = 1, 2, …, n) is an eigenvector of C and λ_i (i = 1, 2, …, n) is the corresponding eigenvalue;
step S24, arranging λ_i (i = 1, 2, …, n) in descending order and selecting the eigenvectors e_{max1}, e_{max2}, …, e_{maxk} corresponding to the first k eigenvalues to compose the feature extraction matrix EXT = [e_{max1}, e_{max2}, …, e_{maxk}], where k < n;
step S25, extracting the k most discriminative features of all signals according to the feature extraction matrix, obtaining the feature matrix F_{m×k} of the signal matrix, F_{m×k} = S_{m×n}·EXT_{n×k}.
Further, in the step S3, the process of training the multi-feature fusion classifier includes:
Step S32, network configuration: the neural network is a three-layer fully connected network of structure k-h-1, where k is the number of input neurons, h the number of hidden-layer neurons and 1 the number of output neurons; k is determined by the number of extracted features and h by the final training result; the activation function of the neural network is the rectified linear unit ReLU(x) = max(0, x);
step S33, network training: the initial learning rate is set to 0.01 and the learning-rate decay coefficient to 0.99, and a regularized loss function MSE_reg is adopted, defined as MSE_reg = MSE + γ·Σ_j w_j², where w_j is a weight and γ is a manually set hyper-parameter; MSE is the mean square error, defined as MSE = (1/N)·Σ_{i=1}^{N} (ŷ_i − y_i)², where ŷ_i is the target value and y_i is the value predicted by the neural network from the input data.
Further, in step S32, a moving average is applied to all trainable variables in the neural network.
Compared with the prior art, the invention has the following beneficial effects: the method extracts features from the acquired signal matrix with a dimensionality-reduction machine learning algorithm; the feature data are then used as the input of a neural network to train a multi-feature fusion classifier; finally, the signals are identified with the trained classifier and imaged according to the identification result. Compared with traditional imaging, the imaging fuses rich defect information, can greatly improve the signal-to-noise ratio of defect imaging, and is applicable to C-scan imaging with any ultrasonic detection technique.
Drawings
Fig. 1 is a flowchart of an ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion in an embodiment of the present invention.
FIG. 2 is a block and defect layout diagram according to an embodiment of the present invention.
Fig. 3 is an imaging diagram of an ultrasound imaging method based on defect multi-feature intelligent extraction and fusion in an embodiment of the present invention.
Fig. 4 is a prior art maximum amplitude imaging plot.
FIG. 5 is an A-scan representation of two inclusions in a prior art titanium alloy coupon.
FIG. 6 is a C-scan representation of two inclusions in a prior art titanium alloy test block.
Detailed Description
The invention will be further described with reference to examples of embodiments shown in the drawings.
As shown in fig. 1, the embodiment discloses an ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion, which can intelligently extract defect feature information in a signal and perform fusion imaging by using the extracted feature information. The ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion comprises the following steps:
Step S1, the acquired three-dimensional A-scan signal matrix of the object to be detected is converted into a two-dimensional signal matrix.
Specifically, in step S1, the probe scans the object to be detected in a preset order to acquire the corresponding three-dimensional A-scan signal matrix S(x, y, n), which is then converted into a two-dimensional signal matrix S(m, n), where x is the number of transverse scanning points of the probe, y is the number of longitudinal scanning points of the probe, m = x × y is the total number of signals, and n is the signal sampling length.
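As an illustration, the conversion of step S1 can be sketched with numpy; the dimensions below are hypothetical placeholders, not the patent's scan parameters (which are 130 × 400 × 1000 in the embodiment):

```python
import numpy as np

# Hypothetical dimensions: x transverse points, y longitudinal points,
# n samples per A-scan signal.
x, y, n = 4, 5, 8
S3d = np.arange(x * y * n, dtype=float).reshape(x, y, n)  # stand-in for S(x, y, n)

# Flatten the scan grid so that each row is one A-scan signal,
# giving m = x * y rows of length n.
S2d = S3d.reshape(x * y, n)                               # S(m, n)
```

Because the reshape is row-major, row index i·y + j of S2d holds the A-scan recorded at grid point (i, j), which is what later allows the identification results to be reshaped back into an image.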
Step S2, the converted two-dimensional signal matrix is analyzed by principal component analysis and the k most discriminative features of the signals are extracted.
In step S2, principal component analysis (PCA) is used; PCA is a dimensionality-reduction machine learning algorithm with which the two-dimensional signal matrix can be effectively analyzed and its features extracted. The specific analysis and extraction process comprises:
step S21, decentralizing the signal matrix S_{m×n}: the amplitude of each element is reduced by the mean of its column, i.e. x_{ij} = s_{ij} − (1/m)·Σ_{i=1}^{m} s_{ij}, obtaining the decentralized matrix X_{m×n}.
Step S22, computing the covariance matrix of the decentralized matrix: C = (1/m)·XᵀX, an n × n matrix.
Step S23, solving the eigenvalues and eigenvectors of C: C·e_i = λ_i·e_i, where E = [e_1, e_2, …, e_n] is the eigenvector matrix of C, e_i (i = 1, 2, …, n) is an eigenvector of C and λ_i (i = 1, 2, …, n) is the corresponding eigenvalue.
Step S24, arranging λ_i (i = 1, 2, …, n) in descending order and selecting the eigenvectors e_{max1}, e_{max2}, …, e_{maxk} corresponding to the first k eigenvalues to compose the feature extraction matrix EXT = [e_{max1}, e_{max2}, …, e_{maxk}], where k < n.
Step S25, extracting the k most discriminative features of all signals according to the feature extraction matrix, obtaining the feature matrix F_{m×k} of the signal matrix, i.e. F_{m×k} = S_{m×n}·EXT_{n×k}.
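Steps S21–S25 can be sketched in numpy as follows; random data stands in for the signal matrix, and `eigh` is used because the covariance matrix is symmetric:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 100, 16, 3                    # m signals of length n; keep k features
S = rng.normal(size=(m, n))             # stand-in for the 2-D signal matrix S(m, n)

# Step S21: decentralize - subtract each column's mean from its elements.
X = S - S.mean(axis=0)

# Step S22: covariance matrix C (n x n) of the decentralized signals.
C = (X.T @ X) / m

# Step S23: eigenvalues and eigenvectors of the symmetric matrix C.
lam, E = np.linalg.eigh(C)

# Step S24: sort eigenvalues in descending order, keep the first k eigenvectors.
order = np.argsort(lam)[::-1]
EXT = E[:, order[:k]]                   # feature extraction matrix EXT (n x k)

# Step S25: project the signals to obtain the feature matrix F (m x k).
F = S @ EXT
```

Each row of F is the k-dimensional feature vector of one A-scan signal, which becomes one input sample for the classifier of step S3.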
Step S3, a neural network is trained with the extracted k features as input to obtain the multi-feature fusion classifier.
In step S3, the process of training the multi-feature fusion classifier includes:
Step S32, network configuration: the neural network is a three-layer fully connected network of structure k-h-1, where k is the number of input neurons, h the number of hidden-layer neurons and 1 the number of output neurons; k is determined by the number of extracted features and h by the final training result. The activation function of the neural network is the rectified linear unit ReLU(x) = max(0, x). Here, to make the classifier robust, a moving average is applied to all trainable variables in the neural network.
Step S33, network training: the initial learning rate is set to 0.01 and the learning-rate decay coefficient to 0.99, and a regularized loss function MSE_reg is adopted, defined as MSE_reg = MSE + γ·Σ_j w_j², where w_j is a weight and γ is a manually set hyper-parameter; MSE is the mean square error, defined as MSE = (1/N)·Σ_{i=1}^{N} (ŷ_i − y_i)², where ŷ_i is the target value and y_i is the value predicted by the neural network from the input data.
Step S4, signal identification is performed with the trained classifier to obtain a multi-feature fusion identification result matrix consisting of 0s and 1s, where 0 denotes a normal signal and 1 a defect signal; the identification result matrix is then reshaped according to the signal arrangement of the original three-dimensional A-scan signal matrix S(x, y, n) and imaged.
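The reshaping of step S4 can be sketched as follows; the grid size is a hypothetical placeholder, and in practice x and y come from the original S(x, y, n):

```python
import numpy as np

x, y = 4, 5
preds = np.array([0, 1] * 10)   # classifier output: 0 = normal, 1 = defect signal

# Restore the scan-grid arrangement of the original signals to form the
# binary defect image (one pixel per A-scan signal).
image = preds.reshape(x, y)
```

Because step S1 flattened the grid in row-major order, the same row-major reshape puts each classification result back at its original scan position.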
In the ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion of this embodiment, features are extracted from the acquired signal matrix with a dimensionality-reduction machine learning algorithm; the feature data are then used as the input of a neural network to train a multi-feature fusion classifier; finally, the signals are identified with the trained classifier and imaged according to the identification result. Compared with traditional imaging, the imaging of this embodiment fuses rich defect information, can greatly improve the signal-to-noise ratio of defect imaging, and is applicable to C-scan imaging with any ultrasonic detection technique.
The method is described in detail below with a specific application and test:
in step S1, data acquisition: the test piece is a stainless steel thin plate with an average surface roughness of 12 μm and dimensions of 30 mm by 5 mm. 2 × 6 (2 rows and 6 columns) micropore defects are machined on the surface of the test block; the specific layout and related parameters are shown in fig. 2. Grid scanning is performed with 2 MHz laser ultrasound over a scanning range of 6.24 mm × 19.2 mm covering all defects, with a step length of 0.048 mm; each grid point records one A-scan signal with a sampling depth of 1000, finally giving a 130 × 400 × 1000 three-dimensional signal matrix S(x, y, n). The three-dimensional signal matrix S(x, y, n) is then converted into a two-dimensional signal matrix S(m, n) according to the scan parameters.
In step S2, features of the signal are extracted using the PCA algorithm.
(1) Since the defect signal appears at around sampling point 200, sampling points 1–250 of each A-scan signal are intercepted, giving a 130 × 400 × 250 three-dimensional signal matrix S_sec.
(2) S_sec is reshaped into a 52000 × 250 two-dimensional matrix.
(3) The reshaped S_sec is input to the PCA algorithm with the number of extracted features set to 3, giving the feature matrix F_{52000×3}.
In step S3, network training: 2/3 of the feature data are randomly extracted for network training and the rest are used for testing; the network structure is 3-30-1, the number of training rounds is 6000, the initial learning rate is 0.01, and the learning-rate decay coefficient is 0.99.
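The random 2/3 train/test split can be sketched as follows (the generator seed is an arbitrary choice for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 52000                                 # total number of feature vectors
idx = rng.permutation(m)                  # random order of sample indices
n_train = m * 2 // 3                      # 2/3 of the data for training
train_idx, test_idx = idx[:n_train], idx[n_train:]
```

The permutation guarantees the two index sets are disjoint and together cover all 52000 feature vectors.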
In step S4, the trained classifier is used to perform multi-feature fusion recognition on the signal, so as to obtain a classification result matrix.
The ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion of this embodiment was applied and tested through the above process. Fig. 3 shows the visualized classification result matrix, from which it can be seen that the defects are clearly revealed without any noise. To better highlight the advantages of the invention, comparison with the maximum-amplitude imaging result shown in fig. 4 shows that fig. 4 has a strong noise background, the smaller defects are almost submerged in the background, and many false defects appear, easily causing misjudgment and missed detection.
The protection scope of the present invention is not limited to the above embodiments, and it is apparent that various modifications and variations can be made by those skilled in the art without departing from the scope and spirit of the invention. It is intended that the present invention cover such modifications and variations provided they come within the scope of the appended claims and their equivalents.
Claims (5)
1. An ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion, characterized by comprising the following steps:
step S1, converting the acquired three-dimensional A-scan signal matrix of the object to be detected into a two-dimensional signal matrix;
step S2, analyzing the converted two-dimensional signal matrix by principal component analysis and extracting the k most discriminative features of the signals;
step S3, training a neural network with the extracted k features as input to obtain a multi-feature fusion classifier;
step S4, performing signal identification with the trained classifier to obtain a multi-feature fusion identification result matrix, and reshaping the identification result matrix according to the signal arrangement of the original three-dimensional A-scan signal matrix to form an image.
2. The ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion of claim 1,
in step S1, the signal conversion process is: the probe scans the object to be detected in a preset order to acquire a three-dimensional A-scan signal matrix S(x, y, n), which is then converted into a two-dimensional signal matrix S(m, n), where x is the number of transverse scanning points of the probe, y is the number of longitudinal scanning points of the probe, m = x × y is the total number of signals, and n is the signal sampling length.
3. The ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion of claim 1,
the step S2 includes:
step S21, decentralizing the signal matrix S_{m×k}: the amplitude of each element is reduced by the mean of its column, i.e. x_{ij} = s_{ij} − (1/m)·Σ_{i=1}^{m} s_{ij}, obtaining the decentralized matrix X_{m×n};
step S22, computing the covariance matrix of the decentralized matrix: C = (1/m)·XᵀX, an n × n matrix;
step S23, solving the eigenvalues and eigenvectors of C: C·e_i = λ_i·e_i, where E = [e_1, e_2, …, e_n] is the eigenvector matrix of C, e_i (i = 1, 2, …, n) is an eigenvector of C and λ_i (i = 1, 2, …, n) is the corresponding eigenvalue;
step S24, arranging λ_i (i = 1, 2, …, n) in descending order and selecting the eigenvectors e_{max1}, e_{max2}, …, e_{maxk} corresponding to the first k eigenvalues to compose the feature extraction matrix EXT = [e_{max1}, e_{max2}, …, e_{maxk}], where k < n;
step S25, extracting the k most discriminative features of all signals according to the feature extraction matrix, obtaining the feature matrix F_{m×k} of the signal matrix, F_{m×k} = S_{m×n}·EXT_{n×k}.
4. The ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion of claim 1,
in step S3, the process of training the multi-feature fusion classifier includes:
Step S32, network configuration: the neural network is a three-layer fully connected network of structure k-h-1, where k is the number of input neurons, h the number of hidden-layer neurons and 1 the number of output neurons; k is determined by the number of extracted features and h by the final training result; the activation function of the neural network is the rectified linear unit ReLU(x) = max(0, x);
step S33, network training: the initial learning rate is set to 0.01 and the learning-rate decay coefficient to 0.99, and a regularized loss function MSE_reg is adopted, defined as MSE_reg = MSE + γ·Σ_j w_j², where w_j is a weight and γ is a manually set hyper-parameter; MSE is the mean square error, defined as MSE = (1/N)·Σ_{i=1}^{N} (ŷ_i − y_i)², where ŷ_i is the target value and y_i is the value predicted by the neural network from the input data.
5. The ultrasonic imaging method based on defect multi-feature intelligent extraction and fusion of claim 4,
in step S32, a moving average is applied to all trainable variables in the neural network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110187998.0A CN112946081B (en) | 2021-02-09 | 2021-02-09 | Ultrasonic imaging method based on intelligent extraction and fusion of defect multiple features |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110187998.0A CN112946081B (en) | 2021-02-09 | 2021-02-09 | Ultrasonic imaging method based on intelligent extraction and fusion of defect multiple features |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112946081A true CN112946081A (en) | 2021-06-11 |
CN112946081B CN112946081B (en) | 2023-08-18 |
Family
ID=76244175
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110187998.0A Active CN112946081B (en) | 2021-02-09 | 2021-02-09 | Ultrasonic imaging method based on intelligent extraction and fusion of defect multiple features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112946081B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105426889A (en) * | 2015-11-13 | 2016-03-23 | 浙江大学 | PCA mixed feature fusion based gas-liquid two-phase flow type identification method |
CN105548359A (en) * | 2016-01-13 | 2016-05-04 | 东北林业大学 | Wood hole defect ultrasonic detection and feature extraction method |
US20200345330A1 (en) * | 2017-11-24 | 2020-11-05 | Chison Medical Technologies Co., Ltd. | Method for optimizing ultrasonic imaging system parameter based on deep learning |
CN111855802A (en) * | 2020-07-28 | 2020-10-30 | 武汉大学 | Defect visualization imaging method for eliminating laser ultrasonic traveling wave |
CN111855803A (en) * | 2020-07-28 | 2020-10-30 | 武汉大学 | Laser ultrasonic high signal-to-noise ratio imaging method for manufacturing micro defects by metal additive |
CN112014478A (en) * | 2020-08-28 | 2020-12-01 | 武汉大学 | Defect echo blind extraction self-adaptive method submerged in ultrasonic grass-shaped signal |
Non-Patent Citations (10)
Title |
---|
M. Meng et al., "Ultrasonic signal classification and imaging system for composite materials via deep convolutional neural networks", Neurocomputing, vol. 257, pp. 128-135, XP085064055, DOI: 10.1016/j.neucom.2016.11.066 *
Zhou Wei; Lu Yuwen, "Solving systems of equations with convolutional neural networks", Computer & Network, no. 22, pp. 69-72 *
Wang Zhi, "Experimental analysis of rail defect signals based on BP network", Electronic Production, 31 May 2013 (2013-05-31), pp. 31-32 *
Wang Hexue, "Research and application of ultrasonic rail flaw detection technology based on BP neural network", China Master's Theses Full-text Database (Engineering Science & Technology II), 30 June 2019 (2019-06-30), pp. 39-57 *
Rong Youzhen, "Research on machine-learning-based prediction of the mechanical properties of welded joints", China Master's Theses Full-text Database (Engineering Science & Technology I), 29 February 2020 (2020-02-29), pp. 15-22 *
Lu Zhixin et al., "A voiceprint diagnosis model for GIS insulation defects based on optimized MFCC parameters", Electronic Technology & Software Engineering, 31 December 2020 (2020-12-31), pp. 224-225 *
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114047259A (en) * | 2021-10-28 | 2022-02-15 | Shenzhen Biyibi Network Technology Co., Ltd. | Time-series-based method for detecting multi-scale rail damage defects |
CN114047259B (en) * | 2021-10-28 | 2024-05-10 | Shenzhen Biyibi Network Technology Co., Ltd. | Time-series-based method for detecting multi-scale rail damage defects |
CN114414658A (en) * | 2022-01-11 | 2022-04-29 | Nanjing University | Laser ultrasonic detection method for the depth of microcracks on metal surfaces |
CN114414658B (en) * | 2022-01-11 | 2024-04-09 | Nanjing University | Laser ultrasonic detection method for the depth of microcracks on metal surfaces |
CN115166032A (en) * | 2022-05-23 | 2022-10-11 | Southeast University | Device and method for detecting cracks in fan blades |
CN115166032B (en) * | 2022-05-23 | 2024-04-19 | Southeast University | Device and method for detecting cracks in fan blades |
CN116429911A (en) * | 2023-06-13 | 2023-07-14 | Hefei Institutes of Physical Science, Chinese Academy of Sciences | Intelligent identification method based on fusion of defect pulse signals and images |
CN116429911B (en) * | 2023-06-13 | 2023-09-01 | Hefei Institutes of Physical Science, Chinese Academy of Sciences | Intelligent identification method based on fusion of defect pulse signals and images |
Also Published As
Publication number | Publication date |
---|---|
CN112946081B (en) | 2023-08-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112946081B (en) | Ultrasonic imaging method based on intelligent extraction and fusion of defect multiple features | |
Medak et al. | Automated defect detection from ultrasonic images using deep learning | |
CN112232400B (en) | Stainless steel weld ultrasonic defect detection method based on depth feature fusion | |
Masnata et al. | Neural network classification of flaws detected by ultrasonic means | |
Polikar et al. | Frequency invariant classification of ultrasonic weld inspection signals | |
JP5595281B2 (en) | Non-destructive inspection for pipes, especially during production or in the finished state | |
CN110363746B (en) | Ultrasonic nondestructive detection signal classification method based on convolutional neural network | |
Posilović et al. | Flaw detection from ultrasonic images using YOLO and SSD | |
JP2011506992A5 (en) | ||
Gao et al. | Damage characterization using CNN and SAE of broadband Lamb waves | |
CN107563364 | Fingerprint authenticity discrimination method and sweat-gland-based fingerprint identification method | |
Mei et al. | Visual geometry group-UNet: deep learning ultrasonic image reconstruction for curved parts | |
Kesharaju et al. | Ultrasonic sensor based defect detection and characterisation of ceramics | |
Posilović et al. | Deep learning-based anomaly detection from ultrasonic images | |
Xu et al. | Ultrasonic signal enhancement for coarse grain materials by machine learning analysis | |
Medak et al. | Deep learning-based defect detection from sequences of ultrasonic B-scans | |
CN115861226A (en) | Method for intelligently identifying surface defects by using deep neural network based on characteristic value gradient change | |
Kim et al. | Automated data evaluation in phased-array ultrasonic testing based on A-scan and feature training | |
CN112465924A (en) | Rapid medical image reconstruction method based on multi-feature fusion | |
CN112302061A (en) | Intelligent rapid interpretation method for integrity detection signal of low-strain foundation pile | |
Sutcliffe et al. | Automatic defect recognition of single-v welds using full matrix capture data, computer vision and multi-layer perceptron artificial neural networks | |
Ramuhalli et al. | Multichannel signal processing methods for ultrasonic nondestructive evaluation | |
CN114778689A (en) | Defect detection and identification system and method for high-density polyethylene pipeline hot-melt butt joint | |
CN113919396A (en) | Vibration signal and image characteristic machine tool cutter wear state monitoring method based on semi-supervised learning | |
Bae et al. | Classification of ultrasonic weld inspection data using principal component analysis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||