CN118090909A - Automatic generation method of C-scan image and storage medium - Google Patents


Info

Publication number
CN118090909A
CN118090909A
Authority
CN
China
Prior art keywords
scan image
matrix
deep learning
learning model
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410527506.1A
Other languages
Chinese (zh)
Inventor
李苏畅
晏江华
田旻昊
尤晓庆
王晓杰
宋雨蒙
王文斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Automobile Information Technology Tianjin Co ltd
Original Assignee
China Automobile Information Technology Tianjin Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Automobile Information Technology Tianjin Co ltd filed Critical China Automobile Information Technology Tianjin Co ltd
Priority to CN202410527506.1A
Publication of CN118090909A
Legal status: Pending

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/04Analysing solids
    • G01N29/06Visualisation of the interior, e.g. acoustic microscopy
    • G01N29/0609Display arrangements, e.g. colour displays
    • G01N29/0645Display representation or displayed parameters, e.g. A-, B- or C-Scan
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N29/00Investigating or analysing materials by the use of ultrasonic, sonic or infrasonic waves; Visualisation of the interior of objects by transmitting ultrasonic or sonic waves through the object
    • G01N29/44Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G01N29/4481Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2291/00Indexing codes associated with group G01N29/00
    • G01N2291/02Indexing codes associated with the analysed material
    • G01N2291/023Solids

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Immunology (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Acoustics & Sound (AREA)
  • Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)

Abstract

The invention relates to the technical field of computer simulation, and discloses an automatic generation method of a C-scan image and a storage medium. The method pre-trains a deep learning model; debugs the parameters of a flaw detector and scans with the probe pressed tightly against the surface of the material to be inspected; inputs the real ultrasonic data into the deep learning model to obtain a T×m×4 matrix; and generates a C-scan image at the defect position, taking the duration of the data scan in the matrix as the horizontal axis and the number of sound beams as the vertical axis. The C-scan image at the defect position is thus obtained from real ultrasonic data through a deep learning model, without setting gate parameters.

Description

Automatic generation method of C-scan image and storage medium
Technical Field
The invention relates to the technical field of deep learning, in particular to an automatic generation method of a C-scan image and a storage medium.
Background
A nondestructive inspection instrument detects bonding defects arising in the vehicle production and manufacturing process by exciting a chip to emit ultrasonic waves, for example defects in the bonding of the front and rear windshields, in the bonding of the door sealing strips, and bubbles in adhesive joints.
In the prior art, the most direct display form of ultrasonic data is the A-scan image, but an A-scan is only a curve and does not convey the result intuitively, so in the inspection process the C-scan image is the most widely applied. A C-scan shows the bonding surface from a top-down viewing angle; it is very intuitive and easy to understand, and very convenient for observing and counting the bonding condition. However, generating a C-scan image requires a corresponding gate setting, i.e., selecting the start point and end point of the gate, so that the waveform and range displayed are those at the defect position and the adhesion between the adhesive tape and the plate can be seen clearly. Otherwise, a C-scan image with sufficient contrast for easy inspection cannot be generated.
Currently there is no effective, automated solution to the C-scan gate-setting problem; it is solved manually and by experience, and even a skilled inspector needs about 2.5 hours per gate setup, wasting a great deal of time.
In view of this, the present invention has been made.
Disclosure of Invention
In order to solve the technical problems, the invention provides an automatic generation method of a C-scan image and a storage medium, which are used for obtaining the C-scan image at a defect position through a deep learning model without setting gate parameters on the basis of real ultrasonic data.
The embodiment of the invention provides a method for automatically generating a C-scan image, which comprises the following steps:
pre-training a deep learning model;
Debugging parameters of a flaw detector, and tightly attaching a probe of the flaw detector to the surface of a material to be detected for scanning;
Acquiring real ultrasonic data generated in a scanning process, wherein no gate is set for the real ultrasonic data; the real ultrasonic data has size T×m×(n+p)×q, wherein T is the duration of the data scan, m is the number of sound beams, n is the sound-wave wavelength, p is the transmission check-bit data, and q is a constant parameter related to the data transmission type;
Inputting the real ultrasonic data into the deep learning model to obtain a T×m×4 matrix, wherein 4 represents the four channels R, G, B, and alpha;
And generating a C-scan image at the defect position, taking the duration of the data scan in the matrix as the horizontal axis and the number of sound beams as the vertical axis.
An embodiment of the present invention provides a computer-readable storage medium storing a program or instructions that cause a computer to execute the steps of the method for automatically generating C-scan images according to any of the embodiments.
The embodiment of the invention has the following technical effects: the C-scan image at the defect position can be automatically generated, so that complicated gate setting work is avoided, and the execution efficiency of ultrasonic detection can be greatly improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for automatically generating a C-scan image according to an embodiment of the present invention;
FIG. 2 is a C-scan image generated in accordance with the method of the present embodiment;
FIG. 3 is a C-scan image generated by a method of manually setting a gate;
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the invention, are within the scope of the invention.
The automatic generation method of the C-scan image for the flaw detector provided by the embodiment of the invention is mainly suitable for an application scene of directly obtaining the C-scan image based on real ultrasonic data without setting gate parameters. The automatic generation method of the C-scan image provided by the embodiment of the invention can be executed by electronic equipment such as a computer, and the electronic equipment can be integrated in a flaw detector or can be independent of the flaw detector.
Example 1
Fig. 1 is a flowchart of a method for automatically generating a C-scan image according to an embodiment of the present invention. Referring to fig. 1, the method for automatically generating a C-scan image specifically includes:
S110, training a deep learning model in advance.
The present embodiment is not limited to the type of the deep learning model, and may be a neural network model or the like.
The trained deep learning model takes real ultrasonic data as input and outputs a matrix from which a C-scan image can be generated. The structure and training process of the deep learning model are described in Example 2.
The trained deep learning model may be stored in a memory of the electronic device for use.
S120, debugging parameters of the flaw detector, and tightly attaching a probe of the flaw detector to the surface of the material to be detected for scanning.
When the apparatus is used, the flaw detector is first connected to a power supply, ensuring that the supply is stable, and the probe is checked to confirm that it is correctly installed and intact. Then, according to the requirements of the detection scene, the parameters of the ultrasonic emission chip in the flaw detector are set for each scan, including wavelength, frame length, encoder resolution, stepping-axis type, stepping-axis resolution, time resolution, material sound velocity, frequency, gain, pulse width, scan type, and the like. Once any of these parameters is modified, the previous scan is no longer valid and the material to be inspected must be scanned again. Note that these parameters do not include gate parameters.
The probe is pressed tightly against the surface of the material to be inspected and moved along the area to be detected. The probe emits ultrasonic waves; when a wave encounters a defect in the material, part of the wave is reflected back. The probe receives the reflected wave and converts it into electrical signals, which are then converted into real ultrasonic data.
S130, acquiring real ultrasonic data generated in the scanning process.
No gate is set for the real ultrasonic data; in fact, this embodiment never requires the gate position to be set. Instead, the gate is learned by the deep learning model itself, and the data at the gate position are used to generate the C-scan image.
The real ultrasonic data has size T×m×(n+p)×q, where T is the duration of the data scan (a C-scan is in essence composed of many frames of data), m is the number of sound beams, n is the sound-wave wavelength, and p is the transmission check-bit data, used for alignment when transmitting checksum segments. q is a constant parameter related to the data transmission type, typically 2.
Since no gate is set, the real ultrasonic data is the full data, including the scan data at all depths for both the defect locations and the intact locations of the material.
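This data layout can be sketched as follows, using the concrete values given in Example 2 (m = 49, n = 832, p = 16, q = 2); the variable names, the int16 element type, and T = 10 are illustrative assumptions, not stated in the patent:

```python
import numpy as np

T, m, n, p, q = 10, 49, 832, 16, 2  # q = 2 per the data transmission type

# Full, ungated ultrasonic data: T x m x (n + p) x q
full = np.zeros((T, m, n + p, q), dtype=np.int16)  # int16 is an assumption

# The last p positions along the third axis are transmission check bits,
# used only for alignment of checksum segments; the payload is T x m x n x q.
payload = full[:, :, :n, :]

assert full.shape == (10, 49, 848, 2)
assert payload.shape == (10, 49, 832, 2)
```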
S140, inputting the real ultrasonic data into the deep learning model to obtain a T×m×4 matrix.
Here 4 represents the R, G, B, and alpha channels, i.e., the three primary colors and transparency, respectively; T is the duration of the data scan, and m is the number of sound beams.
The deep learning model converts the matrix of dimensions T×m×(n+p)×q into a matrix of dimensions T×m×4 through matrix dimension conversion, convolution, and pooling operations.
S150, generating a C-scan image at the defect position, taking the duration of the data scan in the matrix as the horizontal axis and the number of sound beams as the vertical axis.
The C-scan image comprises T×m pixels; the color and transparency of each pixel are determined by the data of the 4 channels at the corresponding position. The generated C-scan image is displayed on the screen of the flaw detector so that the user sees the C-scan image at the defect location.
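This rendering step can be sketched as mapping the T×m×4 matrix to an 8-bit RGBA image array. The function name and the assumption that channel values lie in [0, 1] are illustrative; the patent does not specify the value range of the channels:

```python
import numpy as np

def render_c_scan(matrix):
    """Render a T x m x 4 matrix (R, G, B, alpha channels) as an 8-bit
    RGBA image, with scan duration on the horizontal axis and sound-beam
    index on the vertical axis, as described in step S150.

    Assumes channel values in [0, 1] (an assumption, not stated in the text).
    """
    T, m, c = matrix.shape
    assert c == 4, "expected R, G, B, alpha channels"
    img = np.clip(matrix, 0.0, 1.0) * 255.0
    # Transpose so beams (m) run along the vertical axis and time (T)
    # along the horizontal axis.
    return img.transpose(1, 0, 2).astype(np.uint8)
```

For example, a scan with T = 120 frames and m = 49 beams yields a 49×120 RGBA image ready for display.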
Fig. 2 is a C-scan image generated by the method of this embodiment, and Fig. 3 is a C-scan image generated by manually setting a gate. The quality of Fig. 2 is already very close to that of Fig. 3, showing that the method provided by this embodiment achieves a good technical effect.
The embodiment has the following technical effects: the C-scan image at the defect position can be automatically generated, so that complicated gate setting work is avoided, and the execution efficiency of ultrasonic detection can be greatly improved.
Example 2
On the basis of the above embodiment, this embodiment describes the structure of the deep learning model and its training process.
Optionally, the deep learning model includes an input layer, a convolution layer, a pooling layer, and an output layer. The input layer converts the dimensions of the real ultrasonic data to match the number of input channels of the convolution layer; the convolution layer and the pooling layer perform convolution and pooling operations on the input data; the output layer converts the output matrix of the pooling layer to dimensions T×m×4.
In a specific embodiment, the matrix is reshaped into the form [T, 106, 28, 28] using the tensor.reshape API provided by the PyTorch framework; this is done to facilitate the convolution operation. The tensor.reshape function changes the dimensions of a matrix by changing the data index. The data then passes through one two-dimensional convolution layer with 106 input channels, 1 output channel, a 3×3 convolution kernel, a stride of 1, and padding of 1, followed by a max-pooling layer with a pooling kernel size of 14. Finally, the matrix is restored to size [T, 49, 4], where m = 49, again via the tensor.reshape API provided by the PyTorch framework.
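A sketch of this architecture in PyTorch follows. Note that, taken literally, a convolution with 1 output channel followed by 14×14 max pooling yields T×1×2×2 = T×4 values, which cannot be reshaped to [T, 49, 4]; the sketch therefore uses 49 output channels, a reading under which the dimensions are consistent (49 × 848 × 2 = 106 × 28 × 28 = 83104, and [49, 2, 2] reshapes to [49, 4]). The class name and the 49-channel choice are assumptions:

```python
import torch
import torch.nn as nn

class CScanNet(nn.Module):
    """Sketch of the Example-2 model: reshape -> conv -> max-pool -> reshape.

    Input:  real ultrasonic data of shape [T, 49, 848, 2]  (T x m x (n+p) x q)
    Output: C-scan matrix of shape        [T, 49, 4]        (T x m x RGBA)
    """
    def __init__(self):
        super().__init__()
        # 106 input channels per the text; 49 output channels is an
        # assumption made so the final reshape to [T, 49, 4] works out.
        self.conv = nn.Conv2d(106, 49, kernel_size=3, stride=1, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=14)

    def forward(self, x):
        T = x.shape[0]
        x = x.reshape(T, 106, 28, 28)  # 49 * 848 * 2 == 106 * 28 * 28
        x = self.conv(x)               # -> [T, 49, 28, 28]
        x = self.pool(x)               # -> [T, 49, 2, 2]
        return x.reshape(T, 49, 4)     # -> [T, 49, 4]
```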
When training the deep learning model, samples and labels are first collected: ultrasonic data samples and matrix labels of the C-scan image at the defect location. In practice, a number of coupons for adhesion testing are made by manual gluing, and ultrasonic data of these coupons are acquired with an ultrasonic acquisition device. For one piece of acquired ultrasonic data, the total data obtained is a matrix of size T×49×(832+16)×2, where 2 is a constant parameter related to the data transmission type, 49 is the number of sound beams, and 832 is the sound-wave wavelength; these are preset ultrasonic parameters, and the other numbers are default settings. The real data amount needed is T×49×832×2; the extra data is mainly used for alignment when transmitting checksum segments. An invited expert provides a C-scan image of each coupon at the defect location; for one piece of ultrasonic data, the resulting C-scan image has size T×49×4 and serves as the matrix label of the C-scan image at the defect position.
The ultrasonic data sample is then input into the deep learning model to be trained, yielding the T×49×4 matrix of the C-scan image.
Next, the loss function of the C-scan image matrix relative to the matrix label is calculated. The loss function is obtained from an L1 loss function and an L2 loss function, both computed from the differences between the channel values of the C-scan image matrix and the matrix label at the same positions. See the formula:

Loss = α·Loss_L1 + Loss_L2        (1)

Loss_L1 = (1/N) Σ_i |y_i − ŷ_i|,  Loss_L2 = (1/N) Σ_i (y_i − ŷ_i)²

where α is an adjustment parameter, N is the total number of positions in the image, i ranges over all positions in the image, y_i is the channel value of the C-scan image at position i, and ŷ_i is the channel value of the matrix label at position i.
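A minimal numeric sketch of this loss, assuming Loss_L1 and Loss_L2 are the mean absolute error and mean squared error over all channel values at matching positions; the function name and the value α = 0.5 are illustrative:

```python
import numpy as np

def c_scan_loss(pred, label, alpha=0.5):
    """Loss = alpha * Loss_L1 + Loss_L2 over the T x m x 4 channel values.

    Loss_L1 / Loss_L2 are taken as mean absolute / mean squared differences
    at the same positions; alpha is an adjustment parameter (0.5 is an
    illustrative default, not a value from the patent).
    """
    diff = pred - label
    loss_l1 = np.mean(np.abs(diff))   # L1 term: mean absolute difference
    loss_l2 = np.mean(diff ** 2)      # L2 term: mean squared difference
    return alpha * loss_l1 + loss_l2
```

For all-ones predictions against an all-zeros label, every channel difference is 1, so the loss is 0.5·1 + 1 = 1.5.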
Finally, the parameters of the deep learning model are updated to minimize the loss function: the derivative of each neuron is computed by back-propagation, and the model parameters are modified in the direction opposite to the gradient, e.g., the magnitude and direction of the change of each parameter are obtained by conventional gradient descent.
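One such update step can be sketched in PyTorch as below; the tiny linear stand-in model, the learning rate, and α = 0.5 are illustrative assumptions, not values from the patent:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in model and data; shapes follow Example 2 (T = 4, m = 49).
model = nn.Sequential(nn.Flatten(), nn.Linear(49 * 848 * 2, 49 * 4))
# Plain SGD illustrates the conventional gradient-descent update.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

sample = torch.randn(4, 49, 848, 2)   # ultrasonic data sample
label = torch.rand(4, 49, 4)          # expert-provided matrix label

pred = model(sample).reshape(4, 49, 4)
diff = pred - label
loss = 0.5 * diff.abs().mean() + (diff ** 2).mean()  # alpha = 0.5 (illustrative)

optimizer.zero_grad()
loss.backward()    # back-propagation computes each parameter's derivative
optimizer.step()   # parameters move opposite to the gradient
```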
The inventors found that, over many training runs, part of the input full ultrasonic data is data used only for checking and alignment, and that after training the values of the corresponding neurons should be zero. In this scenario, a weight of zero for the neurons corresponding to the checking-and-alignment data is a sufficient condition for minimizing the loss function. This is because the loss function used here is based on the differences between channel values at the same positions in the two images. The checking-and-alignment data are obviously irrelevant to the final generated image; if the corresponding neuron weights were not zero, these data would affect the C-scan generated by the model, and the affected C-scan certainly cannot be closer to the target C-scan than the unaffected one, because the target C-scan is not influenced by these data. The model loss would then necessarily be larger. During actual training, the inventors found that the parameters of the deep learning model do indeed contain a certain number of zeros, which corroborates the correctness of the model provided by the invention.
In this embodiment, if some of the parameters of the deep learning model are zero, the deep learning model is considered verified.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 4, electronic device 400 includes one or more processors 401 and memory 402.
The processor 401 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities and may control other components in the electronic device 400 to perform desired functions.
Memory 402 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory (cache), and the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 401 to implement the method for automatically generating C-scan images for a flaw detector of any of the embodiments of the present invention described above and/or other desired functions. Various content such as initial arguments, thresholds, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 400 may further include: an input device 403 and an output device 404, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown). The input device 403 may include, for example, a keyboard, a mouse, and the like. The output device 404 may output various information to the outside, including early warning prompt information, braking force, etc. The output device 404 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, only some of the components of the electronic device 400 that are relevant to the present invention are shown in fig. 4 for simplicity, components such as buses, input/output interfaces, etc. are omitted. In addition, electronic device 400 may include any other suitable components depending on the particular application.
In addition to the methods and apparatus described above, embodiments of the present invention may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps of the method for automatically generating C-scan images of a flaw detector provided by any of the embodiments of the present invention.
The computer program product may write program code for performing operations of embodiments of the present invention in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present invention may also be a computer-readable storage medium, on which computer program instructions are stored, which when executed by a processor, cause the processor to perform the steps of the method for automatically generating C-scan images of a flaw detector provided by any embodiment of the present invention.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the scope of the present application. As used in this specification, the terms "a," "an," and/or "the" are not intended to be limiting, but rather are to be construed as covering the singular and the plural, unless the context clearly dictates otherwise. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, or apparatus that includes the element.
It should also be noted that the positional or positional relationship indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the positional or positional relationship shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the apparatus or element in question must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Unless specifically stated or limited otherwise, the terms "mounted," "connected," and the like are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the essence of the corresponding technical solutions from the technical solutions of the embodiments of the present invention.

Claims (6)

1. The automatic generation method of the C-scan image is characterized by comprising the following steps:
pre-training a deep learning model;
Debugging parameters of a flaw detector, and tightly attaching a probe of the flaw detector to the surface of a material to be detected for scanning;
Acquiring real ultrasonic data generated in a scanning process, wherein no gate is set for the real ultrasonic data; the real ultrasonic data has size T×m×(n+p)×q, wherein T is the duration of the data scan, m is the number of sound beams, n is the sound-wave wavelength, p is the transmission check-bit data, and q is a constant parameter related to the data transmission type;
Inputting the real ultrasonic data into the deep learning model to obtain a T×m×4 matrix, wherein 4 represents the four channels R, G, B, and alpha;
And generating a C-scan image at the defect position, taking the duration of the data scan in the matrix as the horizontal axis and the number of sound beams as the vertical axis.
2. The method of claim 1, wherein the deep learning model comprises an input layer, a convolution layer, a pooling layer, and an output layer;
The input layer is used for converting real ultrasonic data into dimensionality so as to adapt to the channel number of the convolution layer;
the output layer is used for converting the output matrix of the pooling layer into dimensions of T multiplied by m multiplied by 4.
3. The method of claim 1, wherein the pre-training a deep learning model comprises:
Acquiring an ultrasonic data sample and a matrix label of a C-scan image at a defect position;
inputting an ultrasonic data sample into a deep learning model to be trained to obtain a matrix of a C-scan image;
calculating a loss function of a matrix of the C-scan image relative to the matrix label;
parameters of the deep learning model are updated to minimize the loss function.
4. A method according to claim 3, wherein calculating a loss function of the matrix of the C-scan image relative to the matrix label comprises:
the loss function is obtained according to an L1 loss function and an L2 loss function;
the L1 loss function and the L2 loss function are obtained according to the difference between the matrix of the C-scan image and the channel value of the matrix label at the same position.
5. The method of claim 4, wherein after updating parameters of the deep learning model to minimize the loss function, further comprising:
If some of the parameters of the deep learning model are zero, the deep learning model is validated.
6. A computer-readable storage medium storing a program or instructions that cause a computer to execute the steps of the C-scan image automatic generation method according to any one of claims 1 to 5.
CN202410527506.1A 2024-04-29 2024-04-29 Automatic generation method of C-scan image and storage medium Pending CN118090909A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410527506.1A CN118090909A (en) 2024-04-29 2024-04-29 Automatic generation method of C-scan image and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410527506.1A CN118090909A (en) 2024-04-29 2024-04-29 Automatic generation method of C-scan image and storage medium

Publications (1)

Publication Number Publication Date
CN118090909A true CN118090909A (en) 2024-05-28

Family

ID=91153578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410527506.1A Pending CN118090909A (en) 2024-04-29 2024-04-29 Automatic generation method of C-scan image and storage medium

Country Status (1)

Country Link
CN (1) CN118090909A (en)

Similar Documents

Publication Publication Date Title
JP4944892B2 (en) Method and apparatus for performing security inspection on liquid articles using radiation
CN108844978B (en) Novel method for detecting internal defects of honeycomb
US11474076B2 (en) Acoustic model acoustic region of influence generation
US20030058991A1 (en) Digital radioscopic testing system patent
JP7424289B2 (en) Information processing device, information processing method, information processing system, and program
CN111902716A (en) Ultrasonic inspection device, method, program, and ultrasonic inspection system
CN106198759A (en) Ultrasound probe device for detecting performance and method
CN115760837A (en) Crystal quality evaluation method and system based on deep neural network
CN116630263A (en) Weld X-ray image defect detection and identification method based on deep neural network
CN116524313A (en) Defect detection method and related device based on deep learning and multi-mode image
CN118090909A (en) Automatic generation method of C-scan image and storage medium
CN114414577B (en) Method and system for detecting plastic products based on terahertz technology
JPWO2020175693A1 (en) Ultrasonic flaw detector
JP2013020444A (en) Image processing apparatus
CN111736157B (en) PPI data-based prediction method and device for nowcasting
CN110609083A (en) Method for detecting internal defects of thin three-dimensional woven laminated plate composite material test piece based on ultrasonic phased array
CN112053375A (en) Method and equipment for predicting prediction based on improved network convolution model
CN113331789A (en) Imaging method of tumor cell growth detection system
JP2000046762A (en) Sample evaluating method and apparatus
US7848561B2 (en) Determining capability of an on-line sensor
CN115452948B (en) Intelligent detection method and system for internal defects of rectangular-section wood member
CN117783288A (en) Device and method for detecting fit degree of connecting rod bushing and bottom hole
JPS6153564A (en) Surface image detector by ultrasonic flaw detection
JPH09264882A (en) Method and equipment for determining flaw of material to be inspected
CN117783274A (en) Intelligent ultrasonic nondestructive testing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination