GB2610449A - Efficient high-resolution non-destructive detecting method based on convolutional neural network - Google Patents

Efficient high-resolution non-destructive detecting method based on convolutional neural network

Info

Publication number
GB2610449A
Authority
GB
United Kingdom
Prior art keywords
neural network
convolutional neural
carrying
edge
under detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB2200388.3A
Other versions
GB2610449B (en)
GB2610449A8 (en)
Inventor
Kan Yan
Fan Xin
Shan Yimeng
Xuan Shanyong
Zhang Ping
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Publication of GB2610449A publication Critical patent/GB2610449A/en
Publication of GB2610449A8 publication Critical patent/GB2610449A8/en
Application granted granted Critical
Publication of GB2610449B publication Critical patent/GB2610449B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/36Detecting the response signal, e.g. electronic circuits specially adapted therefor
    • G01N29/38Detecting the response signal, e.g. electronic circuits specially adapted therefor by time filtering, e.g. using time gates
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/04Analysing solids
    • G01N29/043Analysing solids in the interior, e.g. by shear waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/04Analysing solids
    • G01N29/06Visualisation of the interior, e.g. acoustic microscopy
    • G01N29/0654Imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/22Details, e.g. general constructional or apparatus details
    • G01N29/26Arrangements for orientation or scanning by relative movement of the head and the sensor
    • G01N29/262Arrangements for orientation or scanning by relative movement of the head and the sensor by electronic orientation or focusing, e.g. with phased arrays
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/44Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G01N29/4481Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2291/00Indexing codes associated with group G01N29/00
    • G01N2291/02Indexing codes associated with the analysed material
    • G01N2291/023Solids
    • G01N2291/0234Metals, e.g. steel
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2291/00Indexing codes associated with group G01N29/00
    • G01N2291/02Indexing codes associated with the analysed material
    • G01N2291/025Change of phase or condition
    • G01N2291/0258Structural degradation, e.g. fatigue of composites, ageing of oils
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2291/00Indexing codes associated with group G01N29/00
    • G01N2291/04Wave modes and trajectories
    • G01N2291/042Wave modes
    • G01N2291/0421Longitudinal waves
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2291/00Indexing codes associated with group G01N29/00
    • G01N2291/04Wave modes and trajectories
    • G01N2291/044Internal reflections (echoes), e.g. on walls or defects
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2291/00Indexing codes associated with group G01N29/00
    • G01N2291/10Number of transducers
    • G01N2291/106Number of transducers one or more transducer arrays
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2291/00Indexing codes associated with group G01N29/00
    • G01N2291/26Scanned objects
    • G01N2291/263Surfaces
    • G01N2291/2638Complex surfaces
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Pathology (AREA)
  • Immunology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biochemistry (AREA)
  • Analytical Chemistry (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)

Abstract

A high-resolution non-destructive detecting method includes the steps: transmitting a plane wave with a deflection angle of 0 to a workpiece under detection through an ultrasonic phased array, collecting scattered echo data of the transmitted wave, carrying out time domain filtering on the echo data by using an FIR filter to filter out random noise in signals; carrying out ultrasonic imaging based on a convolutional neural network algorithm, specifically including carrying out pre-processing according to obtained scattered echo signals, then taking pre-processed signals as an input of a convolutional neural network, and imaging the workpiece to obtain a coarse scan image; and carrying out defect edge detection based on a Sobel operator, specifically including carrying out edge extraction on a bright spot in a final imaging result by utilizing a Canny operator, to obtain position, shape and size range information of a defect.

Description

Efficient High-resolution Non-destructive Detecting Method Based on Convolutional Neural Network
Technical Field
[0001] The present invention relates to the technical field of non-destructive detection, and in particular to an efficient high-resolution non-destructive detecting method based on a convolutional neural network.
Background
[0002] It is very difficult to detect tiny defects in materials with complex geometries without damaging the properties of the materials, and non-destructive detection of metal materials in particular is an extremely important quality control technique. Taking GH4169 alloy as an example, it is widely applied to key components such as aero engine turbine discs, compressor drums and cartridge receivers. Catastrophic consequences may be caused even if only tiny defects and fatigue damage exist in these key components, so it is very important to carry out high-precision non-destructive detection on them. Ultrasonic detection is one of the most widely applied non-destructive detecting methods owing to its low cost, rapid detection and harmlessness to material performance, and is an indispensable detecting means especially in the industrial fields of aviation, ships and the nuclear industry. With increasing requirements for detecting reliability in industrial inspection, faster detecting speed, higher detecting precision and more accurate defect description are also required of ultrasonic detection, so improving these aspects of ultrasonic non-destructive detection has attracted more and more attention and has become a research hotspot.
[0003] Beijing Goldwind Smart Energy Technology Co., Ltd. proposed a method and device for detecting internal damage of a workpiece (Method and device for detecting internal damage of workpieces. Publication No.: CN107505395A). The method includes: acquiring field echo data of an ultrasonic wave that travels from the surface of a target workpiece through its interior to the bottom of the workpiece and returns from the bottom to the surface; and determining an internal damage condition of the target workpiece according to a comparison between the field echo data of the target workpiece and reference data. The method improves detecting efficiency and precision, and because no special ultrasonic flaw detector is needed, the detecting operation is simple and the detecting cost is reduced. Its problems are that reference data of the target workpiece needs to be obtained in advance, so the method is not universal, and that it can only detect whether damage exists and its approximate position; the damage cannot be accurately located.
[0004] General Electric Company proposed a scheme (Methods of non-destructive testing and ultrasonic inspection of composite materials. US Patent Publication No.: US20170199160A1), which includes the following steps: firstly locating an ultrasonic transducer with respect to a workpiece under detection, then collecting B-scan data of the workpiece under detection from at least one B-scan and C-scan data from at least one C-scan; removing random noise and coherent noise of the data according to a predetermined geometric shape of the workpiece under detection to obtain filtered data; and finally generating a V-scan image and thus determining a plurality of damage indexes of the workpiece under detection. The problems of this method are that the geometrical shape of the workpiece under detection needs to be determined in advance, and that both B-scans and C-scans need to be carried out on the workpiece under detection, so rapid detection cannot be achieved.
Summary
[0005] The present invention solves the problems of low non-destructive defect detecting speed and poor detecting precision of a workpiece under detection, so that a quality control technology in industrial production is improved. The present invention provides an efficient high-resolution non-destructive detecting method based on a convolutional neural network, and provides the following technical scheme.
[0006] An efficient high-resolution non-destructive detecting method based on a convolutional neural network includes the following steps: [0007] Step 1: transmitting a plane wave with a deflection angle of 0 to a workpiece under detection through an ultrasonic phased array, collecting scattered echo data of the transmitted plane wave, and carrying out time domain filtering on the echo data by using an FIR filter to filter out random noise in signals.
[0008] Step 2: carrying out ultrasonic imaging based on a convolutional neural network algorithm, specifically including carrying out preprocessing according to the obtained scattered echo signals, then taking the preprocessed signals as an input of a convolutional neural network, and imaging the workpiece under detection to obtain a coarse scan image of the workpiece under detection.
[0009] Step 3: carrying out defect edge detection based on a Sobel operator, specifically including carrying out edge extraction on a bright spot in a final imaging result by utilizing a Canny operator, so as to obtain position information, shape information and size range information of a defect.
[0010] Preferably, a preprocessing process in the step 2 specifically includes:
[0011] Step 2.1: establishing a rectangular coordinate system with the geometric center of the ultrasonic phased array as an origin of coordinates, determining position coordinates of the center of each array element of the ultrasonic phased array in the coordinate system, and carrying out grid division on an imaging plane of the workpiece under detection;
[0012] calculating a distance $d_{t(i,j)}$ of the transmitted plane wave to a grid center with the coordinates of $(x_{(i,j)}, y_{(i,j)}, z_{(i,j)})$:

$$d_{t(i,j)} = z_{(i,j)} \quad (1)$$

[0013] obtaining a transmitting distance matrix $D$ of the grid centers in the imaging plane corresponding to the transmitted signal by using formula (1):

$$D = \left[ d_{t(i,j)} \right]_{N_x \times N_z} \quad (2)$$
[0014] $N_x$ and $N_z$ are the numbers of divided grids in the x direction and the z direction;
[0015] Step 2.2: calculating a distance $d_{r(k,i,j)}$ between the k-th receiving array element with the center coordinates of $(x_k, y_k, z_k)$ and a grid center with the coordinates of $(x_{(i,j)}, y_{(i,j)}, z_{(i,j)})$:

$$d_{r(k,i,j)} = \sqrt{(x_k - x_{(i,j)})^2 + (y_k - y_{(i,j)})^2 + (z_k - z_{(i,j)})^2} \quad (3)$$

[0016] obtaining a receiving distance matrix $D_k$ of the grid centers in the imaging plane corresponding to the k-th receiving array element by using formula (3):

$$D_k = \left[ d_{r(k,i,j)} \right]_{N_x \times N_z} \quad (4)$$
[0017] when the workpiece under detection is an isotropic medium, the propagation speed of an ultrasonic wave in the workpiece under detection is constant and equal to c, and thus the propagation time $T_{(i,j,k)}$ of the plane wave after being transmitted, passing through each grid center in the imaging plane and then being received by the k-th receiving array element is obtained:

$$T_{(i,j,k)} = (D + D_k)/c \quad (5)$$

[0018] a linear interpolation principle is needed to obtain the ultrasonic echo signal intensity at the position of any grid center, and the basic principle of linear interpolation is as follows:

$$f(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}\,(x - x_0) \quad (6)$$

[0019] determining the amplitude of a pulse echo signal at time t: the discrete sampling time of the echo signal that is smaller than time t but closest to time t is $t_0$, with the corresponding signal amplitude $a(t_0)$; the discrete sampling time of the echo signal that is larger than time t but closest to time t is $t_0 + \Delta t$, where $\Delta t$ is the sampling time interval of the discrete echo signals, with the corresponding signal amplitude $a(t_0 + \Delta t)$; the amplitude of the pulse echo signal at time t is $a(t)$, and calculation is then carried out according to the linear interpolation principle:

$$\frac{a(t) - a(t_0)}{t - t_0} = \frac{a(t_0 + \Delta t) - a(t_0)}{\Delta t} \quad (7)$$

[0020] obtaining an ultrasonic echo signal amplitude matrix $A_k$ of the grid centers in the imaging plane corresponding to the k-th receiving array element:

$$A_k = \begin{bmatrix} a_{(k,1,1)} & \cdots & a_{(k,1,N_z)} \\ \vdots & \ddots & \vdots \\ a_{(k,N_x,1)} & \cdots & a_{(k,N_x,N_z)} \end{bmatrix} \quad (8)$$

[0021] Preferably, grid division density is set to be 1/mm², and position coordinates of each grid center in the coordinate system are determined.
[0022] Preferably, a training process of the convolutional neural network in the step 2 includes the following steps: utilizing $y \in \mathbb{R}^{N_x \times N_z}$ to represent a real image of the interior of the workpiece under detection, and utilizing $A \in \mathbb{R}^{N \times N_x \times N_z}$ to represent the signals obtained after preprocessing the signals received by the N array elements of the transducer array.
[0023] In an image reconstruction process, y is estimated from A through a certain function; utilizing $f(A; \theta)$ to represent a beamforming function of the convolutional neural network, where $\theta$ represents the parameters of the neural network, the objective of image reconstruction by using the convolutional neural network is to find the optimal parameter $\theta^*$ such that the error between an estimated image $\hat{y}$ and the real image y is minimal, and the functional relationship used is expressed as follows:

$$\theta^* = \arg\min_{\theta} L\big(y, f(A, \theta)\big) \quad (9)$$

[0024] $L\big(y, f(A, \theta)\big)$ represents a loss function of the error between the estimated image and the real image.
[0025] A structural similarity (MS-SSIM)-based function is selected as the loss function, and calculation of SSIM between the real pixel and the estimated pixel of the i-th row and j-th column is as follows:

$$SSIM\big(y_{(i,j)}, \hat{y}_{(i,j)}\big) = \frac{\big(2\mu_y \mu_{\hat{y}} + C_1\big)\big(2\sigma_{y\hat{y}} + C_2\big)}{\big(\mu_y^2 + \mu_{\hat{y}}^2 + C_1\big)\big(\sigma_y^2 + \sigma_{\hat{y}}^2 + C_2\big)} \quad (10)$$

[0026] $C_1$ and $C_2$ are scalar parameters selected based on experience for improving the computational stability of the loss, $\mu_y$ and $\mu_{\hat{y}}$ are the mean values of the neighborhood pixels of $y_{(i,j)}$ and $\hat{y}_{(i,j)}$ respectively, $\sigma_y^2$ and $\sigma_{\hat{y}}^2$ are the variances of the neighborhood pixels of $y_{(i,j)}$ and $\hat{y}_{(i,j)}$ respectively, and $\sigma_{y\hat{y}}$ represents the covariance of the neighborhood pixels of $y_{(i,j)}$ and $\hat{y}_{(i,j)}$. The value of SSIM varies from -1 to 1, and SSIM = 1 indicates full correlation between the two images, so the SSIM-based loss function is defined as:

$$L_{SSIM}(y, \hat{y}) = 1 - \frac{1}{N_x N_z}\sum_{i=1}^{N_x}\sum_{j=1}^{N_z} SSIM\big(y_{(i,j)}, \hat{y}_{(i,j)}\big) \quad (11)$$

[0027] The two images are compared in the form of the loss function, and the loss function is as follows:

$$\tilde{L}(\log y, \log \hat{y}) = \min_{w} L(\log y,\; w + \log \hat{y}) \quad (12)$$

[0028] w represents a positive weight factor for scaling the estimated image.
[0029] A differential of the SSIM value with respect to w is calculated for the real pixel and the estimated pixel of the i-th row and j-th column in the form of formula (12):

$$\frac{\partial}{\partial w} SSIM\big(\log y_{(i,j)},\, w + \log \hat{y}_{(i,j)}\big) = \frac{4\,\mu_{\log y}\,\sigma_{\log y \log \hat{y}}\,\big(\mu_{\log y}^2 - (w + \mu_{\log \hat{y}})^2\big)}{\big(\sigma_{\log y}^2 + \sigma_{\log \hat{y}}^2\big)\big(\mu_{\log y}^2 + (w + \mu_{\log \hat{y}})^2\big)^2} \quad (13)$$

[0030] $C_1$ and $C_2$ are ignored, and the optimal weight w is obtained by solving for all pixels:

$$0 = \sum_{i=1}^{N_x}\sum_{j=1}^{N_z} \frac{\partial}{\partial w} SSIM\big(\log y_{(i,j)},\, w + \log \hat{y}_{(i,j)}\big) \quad (14)$$

[0031] A structure of the convolutional neural network is defined after the loss function of the convolutional neural network is defined. The convolutional neural network includes M repeated convolution blocks, and each convolution block includes a 2D convolution layer, a batch normalization layer and a rectified linear unit activation layer.
[0032] After the structure of the convolutional neural network is defined, the convolutional neural network is trained by utilizing simulation data and real data of the workpiece under detection to obtain the optimal parameter in each convolutional block, thereby obtaining the trained convolutional neural network.
[0033] Through the pre-processed echo data of the workpiece under detection and the trained convolutional neural network, coarse scan imaging of the interior of the workpiece under detection is achieved.
[0034] Preferably, the step 3 specifically includes:
[0035] Step 3.1: carrying out Gaussian filtering on the final imaging result, and carrying out discrete sampling and normalization on a Gaussian surface, where normalization means that the sum of all elements of the convolution kernel is 1; a Gaussian filtering template K with a standard deviation of σ = 1.4 and a size of 5 × 5 is as follows:

$$K = \frac{1}{159}\begin{bmatrix} 2 & 4 & 5 & 4 & 2 \\ 4 & 9 & 12 & 9 & 4 \\ 5 & 12 & 15 & 12 & 5 \\ 4 & 9 & 12 & 9 & 4 \\ 2 & 4 & 5 & 4 & 2 \end{bmatrix} \quad (15)$$

[0036] Step 3.2: calculating the gradient magnitude and direction by utilizing the Sobel operator, where:

$$S_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} \quad (16) \qquad S_y = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix} \quad (17)$$

[0037] The gradient magnitude is calculated by:

$$G(x, y) = \sqrt{S_x^2 + S_y^2} \quad (18)$$

[0038] The gradient direction is calculated by:

$$R(x, y) = \arctan(S_y / S_x) \quad (19)$$

[0039] Step 3.3: in order to obtain an edge of single-pixel width, carrying out non-maximum suppression on the amplitude matrix of the image by the Sobel operator: firstly classifying the gradient direction R(x, y) into four angle ranges (0 to 45, 45 to 90, 90 to 135, 135 to 180) according to the principle of proximity, obtaining the two point pairs (g1, g2) and (g3, g4) closest to the gradient vector among the eight corresponding points in the eight neighborhoods of a point, and respectively comparing the gradient amplitude of the point with g1, g2, g3 and g4; under the condition that the gradient amplitude is smaller than any one of g1, g2, g3 and g4, the amplitude at the point is set to 0, otherwise the point is considered as a potential edge and its amplitude is reserved; finally, a double-threshold method is utilized for detection.
[0040] The pixels remaining after non-maximum suppression represent the actual edges in the image more accurately; edge pixels with weak gradient values are then filtered out while edge pixels with high gradient values are reserved, that is, accurate defect edge information of the workpiece under detection is obtained by selecting high and low thresholds, and the position information and the size information of an internal defect of the object under detection are then accurately calculated by utilizing the edge extraction information.
[0041] Preferably, when the gradient value of the pixel is higher than the high threshold, the pixel is marked as a strong edge pixel; when the gradient value of the edge pixel is smaller than the high threshold and larger than the low threshold, the edge pixel is marked as a weak edge pixel; and when the gradient value of the edge pixel is smaller than the low threshold, the edge pixel is suppressed.
[0042] The present invention has the following beneficial effects.
[0043] The method includes the following steps: firstly, transmitting the plane wave once by utilizing the ultrasonic phased array while all array elements of the phased array receive the echo data simultaneously; then preprocessing the echo data to obtain the target-area pixel matrices corresponding to the N array elements of the transducer array; taking the preprocessed matrices as the input signal of the trained convolutional neural network; and finally outputting the imaging result of the target area through layer-by-layer calculation. The imaging speed of the algorithm is greatly improved because the number of transmissions is far smaller than that of the full matrix capture mode and that of the coherent plane wave imaging mode, and the resolution of the final imaging result is effectively improved by exploiting the strong computing power of the neural network. At last, edge extraction is carried out on the image output by the convolutional neural network by utilizing the Sobel operator to obtain the position information and size information of an internal defect of the object under detection, thereby realizing high-accuracy representation of the defect.
Brief Description of Figures
[0044] FIG. 1 is a flow chart of an efficient high-resolution non-destructive detecting method based on a convolutional neural network; [0045] FIG. 2 is a schematic diagram of ultrasonic plane wave transmission; and [0046] FIG. 3 is a structure diagram of a convolutional neural network.
Detailed Description
[0047] The present invention will be described in detail below in combination with specific embodiments.
[0048] Embodiment 1: [0049] As shown in FIG. 1 to FIG. 3, the present invention provides an efficient high-resolution non-destructive detecting method based on a convolutional neural network. The method specifically includes the following steps.
[0050] Step 1: ultrasonic plane wave signal transmission and echo receiving [0051] A plane wave with a deflection angle of 0 is transmitted to a workpiece under detection through an ultrasonic phased array, then scattered echo data of the transmitted plane wave is collected, and time domain filtering is carried out on the echo data by using an FIR filter to filter out random noise in signals.
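For illustration only (not part of the claimed method), the following is a minimal sketch of the time-domain FIR filtering of this step, assuming a band-pass response around the 5 MHz probe center frequency; the function name, tap count, passband and sampling rate are illustrative assumptions rather than values taken from the patent.

```python
# Minimal sketch of FIR time-domain filtering of the scattered echo data.
# The passband, tap count and sampling rate are assumed example values.
import numpy as np
from scipy import signal

def filter_echoes(echoes: np.ndarray, fs: float = 50e6,
                  band=(2e6, 8e6), numtaps: int = 101) -> np.ndarray:
    """Apply a linear-phase FIR band-pass filter along the time axis.

    echoes: array of shape (n_elements, n_samples), raw scattered echo data.
    fs:     sampling frequency in Hz.
    """
    # Windowed-sinc FIR band-pass design.
    taps = signal.firwin(numtaps, band, fs=fs, pass_zero=False)
    # Forward-backward filtering so echo arrival times are not delayed
    # (a design choice; a plain causal FIR filter would also work).
    return signal.filtfilt(taps, [1.0], echoes, axis=-1)

# Example: 64 receive channels, 4000 samples each.
rng = np.random.default_rng(0)
filtered = filter_echoes(rng.standard_normal((64, 4000)))
```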
[0052] Step 2: ultrasonic imaging based on a convolutional neural network algorithm [0053] Preprocessing is carried out according to the scattered echo signals obtained in step 1, then the preprocessed signals are taken as an input of a convolutional neural network, and the workpiece under detection is imaged to obtain a coarse scan image of the workpiece under detection.
[0054] A preprocessing process of the scattered echo data includes: firstly, establishing a rectangular coordinate system with the geometric center of the ultrasonic phased array as an origin of coordinates, determining position coordinates of the center of each array element of the ultrasonic phased array in the coordinate system, and carrying out grid division on an imaging plane of the workpiece under detection. Since the workpiece under detection is coarsely scanned, grid division density is set to be 1/mm², and the position coordinates of each grid center in the coordinate system are determined. A distance $d_{t(i,j)}$ of the transmitted plane wave to a grid center with the coordinates of $(x_{(i,j)}, y_{(i,j)}, z_{(i,j)})$ is further calculated:

$$d_{t(i,j)} = z_{(i,j)} \quad (1)$$

[0055] A transmitting distance matrix D of the grid centers in the imaging plane corresponding to the transmitted signal is obtained by using formula (1):

$$D = \left[ d_{t(i,j)} \right]_{N_x \times N_z} \quad (2)$$

[0056] $N_x$ and $N_z$ are the numbers of divided grids in the x direction and the z direction.
[0057] Then a distance $d_{r(k,i,j)}$ between the k-th receiving array element with the center coordinates of $(x_k, y_k, z_k)$ and a grid center with the coordinates of $(x_{(i,j)}, y_{(i,j)}, z_{(i,j)})$ is calculated:

$$d_{r(k,i,j)} = \sqrt{(x_k - x_{(i,j)})^2 + (y_k - y_{(i,j)})^2 + (z_k - z_{(i,j)})^2} \quad (3)$$

[0058] A receiving distance matrix $D_k$ of the grid centers in the imaging plane corresponding to the k-th receiving array element is obtained by using formula (3):

$$D_k = \left[ d_{r(k,i,j)} \right]_{N_x \times N_z} \quad (4)$$

[0059] Assuming that the workpiece under detection is an isotropic medium, the propagation speed of the ultrasonic wave in the workpiece under detection is constant and equal to c, and thus the propagation time of the plane wave after being transmitted, passing through each grid center in the imaging plane and then being received by the k-th receiving array element can be obtained:

$$T_{(i,j,k)} = (D + D_k)/c \quad (5)$$

[0060] Since the collected ultrasonic echo signals are discrete signals, it is necessary to use a linear interpolation principle to obtain the ultrasonic echo signal intensity at any grid center; the basic principle of linear interpolation is as follows:

$$f(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}\,(x - x_0) \quad (6)$$

[0061] The amplitude of a pulse echo signal at time t is solved. The discrete sampling time of the echo signal that is smaller than time t but closest to time t is $t_0$, and the corresponding signal amplitude is $a(t_0)$. The discrete sampling time of the echo signal that is larger than time t but closest to time t is $t_0 + \Delta t$, where $\Delta t$ is the sampling time interval of the discrete echo signals, and the corresponding signal amplitude is $a(t_0 + \Delta t)$. The amplitude of the pulse echo signal at time t is $a(t)$, and calculation is then carried out according to the linear interpolation principle:

$$\frac{a(t) - a(t_0)}{t - t_0} = \frac{a(t_0 + \Delta t) - a(t_0)}{\Delta t} \quad (7)$$

[0062] By utilizing this relationship, an ultrasonic echo signal amplitude matrix $A_k$ of the grid centers in the imaging plane corresponding to the k-th receiving array element can be obtained:

$$A_k = \begin{bmatrix} a_{(k,1,1)} & \cdots & a_{(k,1,N_z)} \\ \vdots & \ddots & \vdots \\ a_{(k,N_x,1)} & \cdots & a_{(k,N_x,N_z)} \end{bmatrix} \quad (8)$$

[0063] And thus, preprocessing of the ultrasonic echo data is completed.
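As an illustration of the preprocessing of formulas (1) to (8), the sketch below computes a per-element time-of-flight map and linearly interpolates the echo amplitudes onto the imaging grid. It assumes a 2D x-z imaging plane (all y coordinates zero); the function name, array geometry and grid extent are assumptions, not values fixed by the description.

```python
# Sketch of the preprocessing of formulas (1)-(8): per-element time-of-flight
# maps and linearly interpolated echo amplitudes on an (Nx, Nz) grid.
import numpy as np

def preprocess(echoes, elem_x, c, fs, x_grid, z_grid):
    """Return A of shape (N, Nx, Nz): one amplitude map per receive element.

    echoes: (N, n_samples) filtered echo data, sample 0 at the transmit instant.
    elem_x: (N,) x-coordinates of the element centers (array center at x = 0).
    """
    n_elem, n_samples = echoes.shape
    t_axis = np.arange(n_samples) / fs                    # sample times
    xg, zg = np.meshgrid(x_grid, z_grid, indexing="ij")   # (Nx, Nz) pixel grids

    d_t = zg                                              # formula (1): 0-deg plane wave
    A = np.empty((n_elem, *xg.shape))
    for k in range(n_elem):
        d_r = np.sqrt((elem_x[k] - xg) ** 2 + zg ** 2)    # formula (3), y = 0 assumed
        tof = (d_t + d_r) / c                             # formula (5)
        # formulas (6)-(7): linear interpolation of the discrete echo samples
        A[k] = np.interp(tof, t_axis, echoes[k])
    return A

# Tiny synthetic usage showing the output shape (N, Nx, Nz).
rng = np.random.default_rng(1)
A = preprocess(rng.standard_normal((64, 2000)),
               elem_x=np.linspace(-15.75e-3, 15.75e-3, 64),
               c=6300.0, fs=50e6,
               x_grid=np.linspace(-10e-3, 10e-3, 21),
               z_grid=np.linspace(1e-3, 20e-3, 20))
```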
[0064] A training process of the convolutional neural network includes the following steps: $y \in \mathbb{R}^{N_x \times N_z}$ is utilized to represent a real image of the interior of the workpiece under detection.
[0065] $A \in \mathbb{R}^{N \times N_x \times N_z}$ is utilized to represent the signals obtained after preprocessing the signals received by the N array elements of the transducer array. In an image reconstruction process, y is estimated from A through a certain function. $f(A; \theta)$ is utilized to represent the beamforming function of the convolutional neural network, where $\theta$ represents the parameters of the neural network. The objective of image reconstruction by using the convolutional neural network is to find the optimal parameter $\theta^*$ such that the error between an estimated image $\hat{y}$ and the real image y is minimal, and the functional relationship is expressed as follows:

$$\theta^* = \arg\min_{\theta} L\big(y, f(A, \theta)\big) \quad (9)$$

[0066] $L\big(y, f(A, \theta)\big)$ represents a loss function of the error between the estimated image and the real image.
[0067] For the neural network, the loss function affects the training process of the network. A structural similarity (MS-SSIM)-based function is selected as the loss function. Calculation of SSIM between the real pixel and the estimated pixel of the i-th row and j-th column is as follows:

$$SSIM\big(y_{(i,j)}, \hat{y}_{(i,j)}\big) = \frac{\big(2\mu_y \mu_{\hat{y}} + C_1\big)\big(2\sigma_{y\hat{y}} + C_2\big)}{\big(\mu_y^2 + \mu_{\hat{y}}^2 + C_1\big)\big(\sigma_y^2 + \sigma_{\hat{y}}^2 + C_2\big)} \quad (10)$$

[0068] $C_1$ and $C_2$ are scalar parameters selected based on experience for improving the computational stability of the loss, $\mu_y$ and $\mu_{\hat{y}}$ are the mean values of the neighborhood pixels of $y_{(i,j)}$ and $\hat{y}_{(i,j)}$ respectively, $\sigma_y^2$ and $\sigma_{\hat{y}}^2$ are the variances of the neighborhood pixels of $y_{(i,j)}$ and $\hat{y}_{(i,j)}$ respectively, and $\sigma_{y\hat{y}}$ represents the covariance of the neighborhood pixels of $y_{(i,j)}$ and $\hat{y}_{(i,j)}$. The value of SSIM varies from -1 to 1, and SSIM = 1 indicates full correlation between the two images, so the SSIM-based loss function is defined as:

$$L_{SSIM}(y, \hat{y}) = 1 - \frac{1}{N_x N_z}\sum_{i=1}^{N_x}\sum_{j=1}^{N_z} SSIM\big(y_{(i,j)}, \hat{y}_{(i,j)}\big) \quad (11)$$

[0069] Because the estimated image obtained from the echo data and the real image have different units, a direct comparison of the two images is not meaningful. Meanwhile, a standard loss function is sensitive to normalization; therefore, a new form of loss function is proposed to compare the two images:

$$\tilde{L}(\log y, \log \hat{y}) = \min_{w} L(\log y,\; w + \log \hat{y}) \quad (12)$$

[0070] w represents a positive weight factor for scaling the estimated image.
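A minimal numerical sketch of the SSIM-based loss of formulas (10) to (12) is given below, assuming a 7 × 7 neighborhood window and illustrative C1/C2 constants (the description only says they are chosen empirically); the optimal offset w of formula (12) is found here by a coarse one-dimensional search rather than by solving formula (14).

```python
# Sketch of the SSIM-based loss (formulas (10)-(11)) and of the log-domain
# comparison of formula (12).  Window size and C1/C2 are assumed values.
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_loss(y, y_hat, win=7, C1=1e-4, C2=9e-4):
    """L_SSIM = 1 - mean(SSIM(y_ij, y_hat_ij)) over all pixels."""
    mu_y,  mu_yh  = uniform_filter(y, win), uniform_filter(y_hat, win)
    var_y  = uniform_filter(y * y, win) - mu_y ** 2
    var_yh = uniform_filter(y_hat * y_hat, win) - mu_yh ** 2
    cov    = uniform_filter(y * y_hat, win) - mu_y * mu_yh
    ssim = ((2 * mu_y * mu_yh + C1) * (2 * cov + C2)) / \
           ((mu_y ** 2 + mu_yh ** 2 + C1) * (var_y + var_yh + C2))
    return 1.0 - ssim.mean()                      # formula (11)

def log_domain_loss(y, y_hat, eps=1e-8):
    """Formula (12): minimise the loss over the scalar log-domain offset w."""
    log_y, log_yh = np.log(y + eps), np.log(y_hat + eps)
    ws = np.linspace(-5.0, 5.0, 201)              # coarse 1-D search instead of
    return min(ssim_loss(log_y, w + log_yh) for w in ws)   # solving formula (14)
```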
[0071] A differential of the SSIM value with respect to w is calculated for the real pixel and the estimated pixel of the i-th row and j-th column in the form of formula (12):

$$\frac{\partial}{\partial w} SSIM\big(\log y_{(i,j)},\, w + \log \hat{y}_{(i,j)}\big) = \frac{4\,\mu_{\log y}\,\sigma_{\log y \log \hat{y}}\,\big(\mu_{\log y}^2 - (w + \mu_{\log \hat{y}})^2\big)}{\big(\sigma_{\log y}^2 + \sigma_{\log \hat{y}}^2\big)\big(\mu_{\log y}^2 + (w + \mu_{\log \hat{y}})^2\big)^2} \quad (13)$$

[0072] $C_1$ and $C_2$ are ignored, and the optimal weight w is obtained by solving for all pixels:

$$0 = \sum_{i=1}^{N_x}\sum_{j=1}^{N_z} \frac{\partial}{\partial w} SSIM\big(\log y_{(i,j)},\, w + \log \hat{y}_{(i,j)}\big) \quad (14)$$

[0073] After the loss function of the convolutional neural network is defined, a structure of the convolutional neural network is defined. The convolutional neural network includes M repeated convolution blocks, and each convolution block includes a 2D convolution layer, a batch normalization layer and a rectified linear unit activation layer.
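A sketch of such a network is shown below for illustration; the block count M, channel width, kernel size and the final 1 × 1 output convolution are assumed choices, since the description only fixes the composition of each convolution block (2D convolution, batch normalization, ReLU).

```python
# Sketch of the described network: M repeated blocks of 2D convolution,
# batch normalization and ReLU, mapping the N per-element amplitude maps
# to a single image.  M, channel width and kernel size are assumed values.
import torch
import torch.nn as nn

class BeamformingCNN(nn.Module):
    def __init__(self, n_elements: int = 64, channels: int = 32, m_blocks: int = 8):
        super().__init__()
        blocks, in_ch = [], n_elements
        for _ in range(m_blocks):
            blocks += [
                nn.Conv2d(in_ch, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            ]
            in_ch = channels
        self.blocks = nn.Sequential(*blocks)
        self.head = nn.Conv2d(channels, 1, kernel_size=1)  # collapse to one image

    def forward(self, a: torch.Tensor) -> torch.Tensor:
        # a: (batch, N, Nx, Nz) stack of per-element amplitude maps A_k.
        return self.head(self.blocks(a))

# Example: one preprocessed data set on a 40 x 40 grid -> (1, 1, 40, 40) image.
net = BeamformingCNN()
y_hat = net(torch.randn(1, 64, 40, 40))
```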
[0074] After the structure of the convolutional neural network is defined, the convolutional neural network is trained by utilizing simulation data and real data of the workpiece under detection to obtain the optimal parameter in each convolutional block, thereby obtaining the trained convolutional neural network.
[0075] At last, through the pre-processed echo data of the workpiece under detection and the trained convolutional neural network, coarse scan imaging of the interior of the workpiece under detection is achieved.
[0076] Step 3: defect edge detection based on a Sobel operator
[0077] When the ultrasonic wave propagates in the workpiece under detection, echo signals are generated when the ultrasonic wave meets a defect of the workpiece under detection, and a bright spot is then presented at the position of the defect in the final image, so that edge extraction is carried out on the bright spot in the final imaging result by utilizing a Canny operator, so as to obtain the position information, shape information and size range information of the defect. Firstly, Gaussian filtering is carried out on the final imaging result; its main effect is to filter out part of the high-frequency noise without losing the main edge information of the image. Gaussian filtering carries out convolution on an image by using a two-dimensional Gaussian kernel of a certain size. The Gaussian kernel is a discrete approximation of a continuous Gaussian function, and is generally obtained by carrying out discrete sampling and normalization on a Gaussian surface. The normalization means that the sum of all elements of the convolution kernel is 1. A Gaussian filtering template K with a standard deviation of σ = 1.4 and a size of 5 × 5 is as follows:

$$K = \frac{1}{159}\begin{bmatrix} 2 & 4 & 5 & 4 & 2 \\ 4 & 9 & 12 & 9 & 4 \\ 5 & 12 & 15 & 12 & 5 \\ 4 & 9 & 12 & 9 & 4 \\ 2 & 4 & 5 & 4 & 2 \end{bmatrix} \quad (15)$$

[0078] Then the gradient magnitude and direction are calculated by utilizing the Sobel operator, where:

$$S_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} \quad (16) \qquad S_y = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix} \quad (17)$$

[0079] The gradient magnitude is calculated by:

$$G(x, y) = \sqrt{S_x^2 + S_y^2} \quad (18)$$

[0080] The gradient direction is calculated by:

$$R(x, y) = \arctan(S_y / S_x) \quad (19)$$

[0081] In order to obtain an edge of single-pixel width, non-maximum suppression is carried out on the amplitude matrix of the image by the Sobel operator. Firstly, the gradient direction R(x, y) is classified into four angle ranges (0 to 45, 45 to 90, 90 to 135, 135 to 180) according to the principle of proximity. Then the two point pairs (g1, g2) and (g3, g4) closest to the gradient vector among the eight points in the eight neighborhoods of a point are obtained. The gradient amplitude of the point is compared with g1, g2, g3 and g4 respectively. Under the condition that the gradient amplitude is smaller than any one of g1, g2, g3 and g4, the amplitude at the point is set to 0; otherwise, the point is considered as a potential edge, its amplitude is reserved, and finally a double-threshold method is used for detection. The pixels remaining after non-maximum suppression represent the actual edges in the image more accurately, but some edge pixels caused by noise and color changes still exist; to remove these spurious responses, edge pixels with weak gradient values should be filtered out while edge pixels with high gradient values are retained, which is realized by selecting high and low thresholds. Under the condition that the pixel gradient value is higher than the high threshold, the pixel is marked as a strong edge pixel. Under the condition that the gradient value of the edge pixel is smaller than the high threshold and greater than the low threshold, the edge pixel is marked as a weak edge pixel. Under the condition that the gradient value of the edge pixel is smaller than the low threshold, the pixel is suppressed. Therefore, accurate defect edge information of the workpiece under detection can be obtained, and then the position information and the size information of the internal defect of the object under detection can be accurately calculated by utilizing the edge extraction information.
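For illustration, the sketch below chains the operations of this step: Gaussian smoothing with kernel (15), gradients from kernels (16) and (17), a simplified non-maximum suppression over the four quantized directions, and double thresholding with hysteresis. The threshold values and the simplified neighbor comparison (single neighbors rather than the interpolated point pairs described above) are assumptions, not details fixed by the description.

```python
# Sketch of the edge-detection chain: Gaussian smoothing (15), gradient
# kernels (16)-(17), simplified non-maximum suppression, double thresholds.
import numpy as np
from scipy import ndimage

K = np.array([[2, 4, 5, 4, 2],
              [4, 9, 12, 9, 4],
              [5, 12, 15, 12, 5],
              [4, 9, 12, 9, 4],
              [2, 4, 5, 4, 2]], float) / 159.0                # formula (15)
SX = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)    # formula (16)
SY = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], float)    # formula (17)

def detect_edges(img, low=0.1, high=0.3):
    img = ndimage.convolve(img.astype(float), K)               # step 3.1
    gx = ndimage.convolve(img, SX)                             # step 3.2
    gy = ndimage.convolve(img, SY)
    mag = np.hypot(gx, gy)                                     # formula (18)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0               # formula (19)

    # Step 3.3 (simplified NMS): keep a pixel only if it is not smaller than
    # its two neighbours along the quantised gradient direction.
    q = (np.round(ang / 45.0) % 4) * 45                        # 0 / 45 / 90 / 135
    nms = np.zeros_like(mag)
    for a, (di, dj) in {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}.items():
        n1 = np.roll(mag, (di, dj), axis=(0, 1))
        n2 = np.roll(mag, (-di, -dj), axis=(0, 1))
        keep = (q == a) & (mag >= n1) & (mag >= n2)
        nms[keep] = mag[keep]

    # Double thresholding: weak pixels survive only if they touch a strong one.
    strong = nms > high * nms.max()
    weak = (nms > low * nms.max()) & ~strong
    return strong | (weak & ndimage.binary_dilation(strong))
```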
[0082] The object under detection is an aluminum alloy test block made of an isotropic material. An ultrasonic phased array with 64 array elements and a center frequency of 5 MHz is used for the detection; the spacing of the array elements of the phased array is 0.5 mm. Firstly, all array elements are simultaneously excited by an ultrasonic phased array control system and the ultrasonic plane wave with a deflection angle of 0 degrees is transmitted; a transmit-receive switch is then switched so that all array elements simultaneously receive the reflection echoes from the internal boundaries of the object under detection. Grid division is carried out according to the actual size of the object under detection. The propagation time of the plane wave after being transmitted, passing through each grid and then being received by each array element of the phased array is calculated and combined with the actual echo signals to calculate, by a linear interpolation algorithm, the pixel value of each array element echo signal mapped to the target detection area. The pixel values of the target area corresponding to each array element obtained through preprocessing are used as the input signal of the convolutional neural network, and the imaging result of the target area is obtained through layer-by-layer calculation. Finally, the imaging result is processed by an edge extraction algorithm based on the Sobel operator to obtain the edge information of the internal structure of the object under detection. The position information and the size information of the internal defect of the object under detection can be accurately calculated from the edge information, and efficient high-resolution non-destructive detection of the object under detection is achieved.
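For illustration, the snippet below instantiates the geometry of this embodiment (64 elements at 0.5 mm pitch centered on the origin, a 1 point/mm imaging grid); the grid extent and the assumed longitudinal wave speed in aluminum alloy (about 6300 m/s) are not stated in the patent and are used here only as plausible example values.

```python
# Illustrative set-up matching the embodiment parameters; grid extent and the
# aluminum wave speed are assumed example values.
import numpy as np

N_ELEM, PITCH = 64, 0.5e-3          # 64 elements, 0.5 mm pitch (from the text)
FC, C_ALU = 5e6, 6300.0             # 5 MHz center frequency; assumed wave speed

elem_x = (np.arange(N_ELEM) - (N_ELEM - 1) / 2) * PITCH   # element centers, m
x_grid = np.arange(-16e-3, 16e-3 + 1e-3, 1e-3)            # 1 mm grid spacing
z_grid = np.arange(1e-3, 40e-3 + 1e-3, 1e-3)
print(f"aperture {elem_x.max() - elem_x.min():.4f} m, "
      f"grid {x_grid.size} x {z_grid.size} pixels")
```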
[0083] According to the efficient high-resolution non-destructive detecting method for defects based on the convolutional neural network provided by the present invention, the echo data collected from a single plane wave transmission is used by the convolutional neural network to calculate and output the high-resolution imaging result, so that the efficiency of the detection algorithm is effectively improved and the detecting time is shortened; meanwhile, the position and size information of the internal defect of the object under detection is accurately obtained through the edge extraction algorithm based on the Sobel operator.
[0084] The above is only a preferred implementation of the efficient high-resolution non-destructive detecting method for defects based on the convolutional neural network. The protection scope of the efficient high-resolution non-destructive detecting method for defects based on the convolutional neural network is not limited to the above embodiments, and all technical schemes based on this concept belong to the protection scope of the present invention. It should be noted that those skilled in the art may make variations and improvements without departing from the principle of the present invention, and all such variations and improvements fall within the protection scope of the present invention.

Claims (6)

What is claimed is:
1. An efficient high-resolution non-destructive detecting method based on a convolutional neural network, characterized by comprising the following steps:
step 1: transmitting a plane wave with a deflection angle of 0 to a workpiece under detection through an ultrasonic phased array, collecting scattered echo data of the transmitted plane wave, and carrying out time domain filtering on the echo data by using an FIR filter to filter out random noise in signals;
step 2: carrying out ultrasonic imaging based on a convolutional neural network algorithm, specifically comprising carrying out preprocessing according to obtained scattered echo signals, then taking the preprocessed signals as an input of a convolutional neural network, and imaging the workpiece under detection to obtain a coarse scan image of the workpiece under detection; and
step 3: carrying out defect edge detection based on a Sobel operator, specifically comprising carrying out edge extraction on a bright spot in a final imaging result by utilizing a Canny operator, so as to obtain position information, shape information and size range information of a defect.
2. The efficient high-resolution non-destructive detecting method based on the convolutional neural network according to claim 1, characterized in that a preprocessing process in the step 2 specifically comprises:
step 2.1: establishing a rectangular coordinate system with the geometric center of the ultrasonic phased array as an origin of coordinates, determining position coordinates of the center of each array element of the ultrasonic phased array in the coordinate system, and carrying out grid division on an imaging plane of the workpiece under detection;
calculating a distance $d_{t(i,j)}$ of the transmitted plane wave to a grid center with the coordinates of $(x_{(i,j)}, y_{(i,j)}, z_{(i,j)})$:

$$d_{t(i,j)} = z_{(i,j)} \quad (1)$$

obtaining a transmitting distance matrix D of the grid centers in the imaging plane corresponding to the transmitted signal by using formula (1):

$$D = \left[ d_{t(i,j)} \right]_{N_x \times N_z} \quad (2)$$

wherein $N_x$ and $N_z$ are the numbers of divided grids in the x direction and the z direction;
step 2.2: calculating a distance $d_{r(k,i,j)}$ between a k-th receiving array element with the center coordinates of $(x_k, y_k, z_k)$ and a grid center with the coordinates of $(x_{(i,j)}, y_{(i,j)}, z_{(i,j)})$:

$$d_{r(k,i,j)} = \sqrt{(x_k - x_{(i,j)})^2 + (y_k - y_{(i,j)})^2 + (z_k - z_{(i,j)})^2} \quad (3)$$

obtaining a receiving distance matrix $D_k$ of the grid centers in the imaging plane corresponding to the k-th receiving array element by using formula (3):

$$D_k = \left[ d_{r(k,i,j)} \right]_{N_x \times N_z} \quad (4)$$

when the workpiece under detection is an isotropic medium, the propagation speed of an ultrasonic wave in the workpiece under detection is constant and equal to c, and thus obtaining propagation time $T_{(i,j,k)}$ of the plane wave after being transmitted, passing through each grid center in the imaging plane and then being received by the k-th receiving array element:

$$T_{(i,j,k)} = (D + D_k)/c \quad (5)$$

a linear interpolation principle is needed to obtain the ultrasonic echo signal intensity at the position of any grid center, and the basic principle of linear interpolation is as follows:

$$f(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}\,(x - x_0) \quad (6)$$

determining the amplitude of a pulse echo signal at time t: the discrete sampling time of the echo signal that is smaller than time t but closest to time t is $t_0$ and the corresponding signal amplitude is $a(t_0)$; the discrete sampling time of the echo signal that is larger than time t but closest to time t is $t_0 + \Delta t$, where $\Delta t$ is the sampling time interval of the discrete echo signals and the corresponding signal amplitude is $a(t_0 + \Delta t)$; the amplitude of the pulse echo signal at time t is $a(t)$, and calculation is then carried out according to the linear interpolation principle:

$$\frac{a(t) - a(t_0)}{t - t_0} = \frac{a(t_0 + \Delta t) - a(t_0)}{\Delta t} \quad (7)$$

obtaining an ultrasonic echo signal amplitude matrix $A_k$ of the grid centers in the imaging plane corresponding to the k-th receiving array element:

$$A_k = \begin{bmatrix} a_{(k,1,1)} & \cdots & a_{(k,1,N_z)} \\ \vdots & \ddots & \vdots \\ a_{(k,N_x,1)} & \cdots & a_{(k,N_x,N_z)} \end{bmatrix} \quad (8).$$
3. The efficient high-resolution non-destructive detecting method for the defects based on the convolutional neural network according to claim 2, characterized in that grid division density is set to be 1/mm², and position coordinates of each grid center in the coordinate system are determined.
4. The efficient high-resolution non-destructive detecting method for the defects based on the convolutional neural network according to claim 3, characterized in that a training process of the convolutional neural network in the step 2 comprises the following steps:
utilizing $y \in \mathbb{R}^{N_x \times N_z}$ to represent a real image of the interior of the workpiece under detection, and utilizing $A \in \mathbb{R}^{N \times N_x \times N_z}$ to represent the signals obtained after preprocessing the signals received by the N array elements of the transducer array;
in an image reconstruction process, y is estimated from A through a certain function; utilizing $f(A; \theta)$ to represent a beamforming function of the convolutional neural network, wherein $\theta$ represents the parameters of the neural network, the objective of image reconstruction by using the convolutional neural network is to find the optimal parameter $\theta^*$ such that the error between an estimated image $\hat{y}$ and the real image y is minimal, and the functional relationship used is expressed as follows:

$$\theta^* = \arg\min_{\theta} L\big(y, f(A, \theta)\big) \quad (9)$$

wherein $L\big(y, f(A, \theta)\big)$ represents a loss function of the error between the estimated image and the real image;
selecting a structural similarity (MS-SSIM)-based function as the loss function, calculation of SSIM between the real pixel and the estimated pixel of the i-th row and j-th column being as follows:

$$SSIM\big(y_{(i,j)}, \hat{y}_{(i,j)}\big) = \frac{\big(2\mu_y \mu_{\hat{y}} + C_1\big)\big(2\sigma_{y\hat{y}} + C_2\big)}{\big(\mu_y^2 + \mu_{\hat{y}}^2 + C_1\big)\big(\sigma_y^2 + \sigma_{\hat{y}}^2 + C_2\big)} \quad (10)$$

wherein $C_1$ and $C_2$ are scalar parameters selected based on experience for improving the computational stability of the loss, $\mu_y$ and $\mu_{\hat{y}}$ are the mean values of the neighborhood pixels of $y_{(i,j)}$ and $\hat{y}_{(i,j)}$ respectively, $\sigma_y^2$ and $\sigma_{\hat{y}}^2$ are the variances of the neighborhood pixels of $y_{(i,j)}$ and $\hat{y}_{(i,j)}$ respectively, and $\sigma_{y\hat{y}}$ represents the covariance of the neighborhood pixels of $y_{(i,j)}$ and $\hat{y}_{(i,j)}$; the value of SSIM varies from -1 to 1, and SSIM = 1 indicates full correlation between the two images, so that the SSIM-based loss function is defined as:

$$L_{SSIM}(y, \hat{y}) = 1 - \frac{1}{N_x N_z}\sum_{i=1}^{N_x}\sum_{j=1}^{N_z} SSIM\big(y_{(i,j)}, \hat{y}_{(i,j)}\big) \quad (11)$$

comparing the two images in the form of the loss function, the loss function being as follows:

$$\tilde{L}(\log y, \log \hat{y}) = \min_{w} L(\log y,\; w + \log \hat{y}) \quad (12)$$

wherein w represents a positive weight factor for scaling the estimated image;
a differential of the SSIM value with respect to w is calculated for the real pixel and the estimated pixel of the i-th row and j-th column in the form of formula (12):

$$\frac{\partial}{\partial w} SSIM\big(\log y_{(i,j)},\, w + \log \hat{y}_{(i,j)}\big) = \frac{4\,\mu_{\log y}\,\sigma_{\log y \log \hat{y}}\,\big(\mu_{\log y}^2 - (w + \mu_{\log \hat{y}})^2\big)}{\big(\sigma_{\log y}^2 + \sigma_{\log \hat{y}}^2\big)\big(\mu_{\log y}^2 + (w + \mu_{\log \hat{y}})^2\big)^2} \quad (13)$$

wherein $C_1$ and $C_2$ are ignored, and the optimal weight w is obtained by solving for all pixels:

$$0 = \sum_{i=1}^{N_x}\sum_{j=1}^{N_z} \frac{\partial}{\partial w} SSIM\big(\log y_{(i,j)},\, w + \log \hat{y}_{(i,j)}\big) \quad (14)$$

defining the loss function of the convolutional neural network, and then defining a structure of the convolutional neural network, wherein the convolutional neural network comprises M repeated convolution blocks, and each convolution block comprises a 2D convolution layer, a batch normalization layer and a rectified linear unit activation layer;
after the structure of the convolutional neural network is defined, training the convolutional neural network by utilizing simulation data and real data of the workpiece under detection to obtain the optimal parameter in each convolution block, thereby obtaining the trained convolutional neural network; and
through the pre-processed echo data of the workpiece under detection and the trained convolutional neural network, achieving coarse scan imaging of the interior of the workpiece under detection.
5. The efficient high-resolution non-destructive detecting method for the defects based on the convolutional neural network according to claim 4, characterized in that the step 3 specifically comprises:
step 3.1: carrying out Gaussian filtering on a final imaging result, and carrying out discrete sampling and normalization on a Gaussian surface, normalization referring to that the sum of all elements of the convolution kernel is 1, and a Gaussian filtering template K with a standard deviation of σ = 1.4 and a size of 5 × 5 being as follows:

$$K = \frac{1}{159}\begin{bmatrix} 2 & 4 & 5 & 4 & 2 \\ 4 & 9 & 12 & 9 & 4 \\ 5 & 12 & 15 & 12 & 5 \\ 4 & 9 & 12 & 9 & 4 \\ 2 & 4 & 5 & 4 & 2 \end{bmatrix} \quad (15)$$

step 3.2: calculating the gradient magnitude and direction by utilizing the Sobel operator, where for the Sobel operator:

$$S_x = \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} \quad (16) \qquad S_y = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix} \quad (17)$$

the gradient magnitude being calculated by:

$$G(x, y) = \sqrt{S_x^2 + S_y^2} \quad (18)$$

the gradient direction being calculated by:

$$R(x, y) = \arctan(S_y / S_x) \quad (19)$$

step 3.3: in order to obtain an edge of single-pixel width, carrying out non-maximum suppression on the amplitude matrix of the image by the Sobel operator, firstly classifying the gradient direction R(x, y) into four angle ranges (0 to 45, 45 to 90, 90 to 135, 135 to 180) according to the principle of proximity, obtaining the two point pairs (g1, g2) and (g3, g4) closest to the gradient vector among the eight corresponding points in the eight neighborhoods of a point, and respectively comparing the gradient amplitude of the point with g1, g2, g3 and g4; under the condition that the gradient amplitude is smaller than any one of g1, g2, g3 and g4, the amplitude at the point is set to 0, otherwise the point is considered as a potential edge and its amplitude is reserved; and finally, utilizing a double-threshold method for detection;
the pixels remaining after non-maximum suppression representing the actual edges in the image more accurately, filtering out the edge pixels with weak gradient values while reserving the edge pixels with high gradient values, namely obtaining accurate defect edge information of the workpiece under detection by selecting high and low thresholds, and then accurately calculating the position information and the size information of an internal defect of the object under detection by utilizing the edge extraction information.
6. The efficient high-resolution non-destructive detecting method for the defects based on the convolutional neural network according to claim 5, characterized in that when the gradient value of the pixel is higher than the high threshold, the pixel is marked as a strong edge pixel; when the gradient value of the edge pixel is smaller than the high threshold and larger than the low threshold, the edge pixel is marked as a weak edge pixel; and when the gradient value of the edge pixel is smaller than the low threshold, the edge pixel is suppressed.
GB2200388.3A 2021-09-06 2022-01-13 Non-destructive defect edge detecting method based on convolutional neural network Active GB2610449B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111039459.9A CN113888471B (en) 2021-09-06 2021-09-06 High-efficiency high-resolution defect nondestructive testing method based on convolutional neural network

Publications (3)

Publication Number Publication Date
GB2610449A true GB2610449A (en) 2023-03-08
GB2610449A8 GB2610449A8 (en) 2023-04-19
GB2610449B GB2610449B (en) 2023-09-20

Family

ID=79008317

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2200388.3A Active GB2610449B (en) 2021-09-06 2022-01-13 Non-destructive defect edge detecting method based on convolutional neural network

Country Status (2)

Country Link
CN (1) CN113888471B (en)
GB (1) GB2610449B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116205908A (en) * 2023-04-27 2023-06-02 阳谷新太平洋电缆有限公司 Cable coaxiality visual detection method based on convolutional neural network
CN116309510A (en) * 2023-03-29 2023-06-23 清华大学 Numerical control machining surface defect positioning method and device
CN116342589A (en) * 2023-05-23 2023-06-27 之江实验室 Cross-field scratch defect continuity detection method and system
CN116448760A (en) * 2023-03-21 2023-07-18 上海华维可控农业科技集团股份有限公司 Agricultural intelligent monitoring system and method based on machine vision
CN116692015A (en) * 2023-08-07 2023-09-05 中国空气动力研究与发展中心低速空气动力研究所 Online ice shape measuring method based on ultrasonic imaging
CN116754467A (en) * 2023-07-04 2023-09-15 深圳市耀杰橡胶制品有限公司 Evaluation method for ageing performance of natural rubber
CN116758077A (en) * 2023-08-18 2023-09-15 山东航宇游艇发展有限公司 Online detection method and system for surface flatness of surfboard
CN116776647A (en) * 2023-08-21 2023-09-19 深圳市鑫冠亚科技有限公司 Performance prediction method and system for composite nickel-copper-aluminum heat dissipation bottom plate
CN116838114A (en) * 2023-07-06 2023-10-03 同创华建集团有限公司 Steel construction and curtain intelligent monitoring system based on data analysis
CN117420209A (en) * 2023-12-18 2024-01-19 中国机械总院集团沈阳铸造研究所有限公司 Deep learning-based full-focus phased array ultrasonic rapid high-resolution imaging method
CN117748507A (en) * 2024-02-06 2024-03-22 四川大学 Distribution network harmonic access uncertainty assessment method based on Gaussian regression model
CN117805247A (en) * 2023-12-29 2024-04-02 广东融创高科检测鉴定有限公司 Ultrasonic detection method and system for concrete defects

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115811682B (en) * 2023-02-09 2023-05-12 杭州兆华电子股份有限公司 Loudspeaker distortion analysis method and device based on time domain signals

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109239206A (en) * 2018-06-20 2019-01-18 诸暨市逍遥管道科技有限公司 The supersonic detection method of defect inspection auxiliary electric fusion joint intelligence phased array
CN111060601A (en) * 2019-12-27 2020-04-24 武汉武船计量试验有限公司 Weld ultrasonic phased array detection data intelligent analysis method based on deep learning
CN111912910A (en) * 2020-08-12 2020-11-10 上海核工程研究设计院有限公司 Intelligent identification method for polyethylene pipeline hot-melt weld joint hybrid ultrasonic scanning defects
CN113777166A (en) * 2021-09-06 2021-12-10 哈尔滨工业大学 High-resolution defect nondestructive testing method based on combination of ultrasonic plane wave imaging and time reversal operator

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102573142B1 (en) * 2015-04-01 2023-08-31 베라소닉스, 인코포레이티드 Method and system for coded excitation imaging by impulse response estimation and retrospective acquisition
US10161910B2 (en) * 2016-01-11 2018-12-25 General Electric Company Methods of non-destructive testing and ultrasonic inspection of composite materials
CN107204021B (en) * 2017-04-25 2020-10-16 中国科学院深圳先进技术研究院 Ultrasonic imaging method based on Gaussian function probe response model and compressed sensing
US11957515B2 (en) * 2018-02-27 2024-04-16 Koninklijke Philips N.V. Ultrasound system with a neural network for producing images from undersampled ultrasound data
CN110146521B (en) * 2019-06-17 2020-10-09 电子科技大学 Pipeline surface corrosion defect detection method and device based on microwave nondestructive detection
CN111007151A (en) * 2019-12-30 2020-04-14 华东理工大学 Ultrasonic phased array rapid full-focusing imaging detection method based on defect pre-positioning
CN112528731B (en) * 2020-10-27 2024-04-05 西安交通大学 Plane wave beam synthesis method and system based on dual regression convolutional neural network
CN112669401B (en) * 2020-12-22 2022-08-19 中北大学 CT image reconstruction method and system based on convolutional neural network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109239206A (en) * 2018-06-20 2019-01-18 诸暨市逍遥管道科技有限公司 The supersonic detection method of defect inspection auxiliary electric fusion joint intelligence phased array
CN111060601A (en) * 2019-12-27 2020-04-24 武汉武船计量试验有限公司 Weld ultrasonic phased array detection data intelligent analysis method based on deep learning
CN111912910A (en) * 2020-08-12 2020-11-10 上海核工程研究设计院有限公司 Intelligent identification method for polyethylene pipeline hot-melt weld joint hybrid ultrasonic scanning defects
CN113777166A (en) * 2021-09-06 2021-12-10 哈尔滨工业大学 High-resolution defect nondestructive testing method based on combination of ultrasonic plane wave imaging and time reversal operator

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116448760B (en) * 2023-03-21 2023-10-20 上海华维可控农业科技集团股份有限公司 Agricultural intelligent monitoring system and method based on machine vision
CN116448760A (en) * 2023-03-21 2023-07-18 上海华维可控农业科技集团股份有限公司 Agricultural intelligent monitoring system and method based on machine vision
CN116309510A (en) * 2023-03-29 2023-06-23 清华大学 Numerical control machining surface defect positioning method and device
CN116309510B (en) * 2023-03-29 2024-03-22 清华大学 Numerical control machining surface defect positioning method and device
CN116205908A (en) * 2023-04-27 2023-06-02 阳谷新太平洋电缆有限公司 Cable coaxiality visual detection method based on convolutional neural network
CN116342589A (en) * 2023-05-23 2023-06-27 之江实验室 Cross-field scratch defect continuity detection method and system
CN116342589B (en) * 2023-05-23 2023-08-22 之江实验室 Cross-field scratch defect continuity detection method and system
CN116754467A (en) * 2023-07-04 2023-09-15 深圳市耀杰橡胶制品有限公司 Evaluation method for ageing performance of natural rubber
CN116754467B (en) * 2023-07-04 2024-03-08 深圳市耀杰橡胶制品有限公司 Evaluation method for ageing performance of natural rubber
CN116838114B (en) * 2023-07-06 2024-01-23 同创华建集团有限公司 Steel construction and curtain intelligent monitoring system based on data analysis
CN116838114A (en) * 2023-07-06 2023-10-03 同创华建集团有限公司 Steel construction and curtain intelligent monitoring system based on data analysis
CN116692015B (en) * 2023-08-07 2023-09-29 中国空气动力研究与发展中心低速空气动力研究所 Online ice shape measuring method based on ultrasonic imaging
CN116692015A (en) * 2023-08-07 2023-09-05 中国空气动力研究与发展中心低速空气动力研究所 Online ice shape measuring method based on ultrasonic imaging
CN116758077B (en) * 2023-08-18 2023-10-20 山东航宇游艇发展有限公司 Online detection method and system for surface flatness of surfboard
CN116758077A (en) * 2023-08-18 2023-09-15 山东航宇游艇发展有限公司 Online detection method and system for surface flatness of surfboard
CN116776647B (en) * 2023-08-21 2024-01-16 深圳市鑫冠亚科技有限公司 Performance prediction method and system for composite nickel-copper-aluminum heat dissipation bottom plate
CN116776647A (en) * 2023-08-21 2023-09-19 深圳市鑫冠亚科技有限公司 Performance prediction method and system for composite nickel-copper-aluminum heat dissipation bottom plate
CN117420209A (en) * 2023-12-18 2024-01-19 中国机械总院集团沈阳铸造研究所有限公司 Deep learning-based full-focus phased array ultrasonic rapid high-resolution imaging method
CN117420209B (en) * 2023-12-18 2024-05-07 中国机械总院集团沈阳铸造研究所有限公司 Deep learning-based full-focus phased array ultrasonic rapid high-resolution imaging method
CN117805247A (en) * 2023-12-29 2024-04-02 广东融创高科检测鉴定有限公司 Ultrasonic detection method and system for concrete defects
CN117748507A (en) * 2024-02-06 2024-03-22 四川大学 Distribution network harmonic access uncertainty assessment method based on Gaussian regression model
CN117748507B (en) * 2024-02-06 2024-05-03 四川大学 Distribution network harmonic access uncertainty assessment method based on Gaussian regression model

Also Published As

Publication number Publication date
GB2610449B (en) 2023-09-20
CN113888471B (en) 2022-07-12
GB2610449A8 (en) 2023-04-19
CN113888471A (en) 2022-01-04

Similar Documents

Publication Publication Date Title
GB2610449A (en) Efficient high-resolution non-destructive detecting method based on convolutional neural network
Merazi-Meksen et al. Mathematical morphology for TOFD image analysis and automatic crack detection
Doctor et al. SAFT—the evolution of a signal processing technology for ultrasonic testing
US7995829B2 (en) Method and apparatus for inspecting components
CN111122700B (en) Method for improving laser ultrasonic SAFT defect positioning speed
CN110208806B (en) Marine radar image rainfall identification method
CN112098526B (en) Near-surface defect feature extraction method for additive product based on laser ultrasonic technology
CN111855803B (en) Laser ultrasonic high signal-to-noise ratio imaging method for manufacturing micro defects by metal additive
Merazi Meksen et al. Automatic crack detection and characterization during ultrasonic inspection
US20030101820A1 (en) Acoustic microscope
US20080092661A1 (en) Methods and system for ultrasound inspection
CN111007151A (en) Ultrasonic phased array rapid full-focusing imaging detection method based on defect pre-positioning
Osman Automated evaluation of three dimensional ultrasonic datasets
Laroche et al. Fast non-stationary deconvolution of ultrasonic beamformed images for nondestructive testing
CN113777166A (en) High-resolution defect nondestructive testing method based on combination of ultrasonic plane wave imaging and time reversal operator
CN108872390A (en) A kind of supersonic guide-wave composite imaging method based on instantaneous phase
CN111047547B (en) Combined defect quantification method based on multi-view TFM
CN114487115B (en) High-resolution defect nondestructive testing method based on combination of Canny operator and ultrasonic plane wave imaging
CN115248436A (en) Imaging sonar-based fish resource assessment method
de Moura et al. Surface estimation via analysis method: A constrained inverse problem approach
CN105675731B (en) Array is the same as hair, the detection signal enhancing method with receipts ultrasonic probe
CN109459451A (en) A kind of metal inside testing of small cracks method based on ray contrast
WO2015166003A1 (en) Method for the non-destructive testing of a workpiece by means of ultrasound and device therefor
Voon et al. Gradient-based Hough transform for the detection and characterization of defects during nondestructive inspection
CN108122226B (en) Method and device for detecting glass defects

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20230302 AND 20230308