CN117710379B - Nondestructive testing model construction method, nondestructive testing device and medium


Info

Publication number
CN117710379B
CN117710379B (application CN202410166766.0A)
Authority
CN
China
Prior art keywords
feature map
feature
convolution
map
model
Prior art date
Legal status
Active
Application number
CN202410166766.0A
Other languages
Chinese (zh)
Other versions
CN117710379A (en)
Inventor
游小超
王灿
刘浩洲
付明磊
张文安
丁丁
Current Assignee
Hangzhou Lingxi Robot Intelligent Technology Co ltd
Zhejiang University of Technology ZJUT
Original Assignee
Hangzhou Lingxi Robot Intelligent Technology Co ltd
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Hangzhou Lingxi Robot Intelligent Technology Co ltd and Zhejiang University of Technology ZJUT
Priority to CN202410166766.0A
Publication of CN117710379A
Application granted
Publication of CN117710379B
Legal status: Active

Classifications

    • G06T 7/0004: Industrial image inspection (image analysis; inspection of images, e.g. flaw detection)
    • G06N 3/0464: Convolutional networks [CNN, ConvNet]
    • G06N 3/048: Activation functions
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/764: Recognition using classification, e.g. of video objects
    • G06V 10/806: Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V 10/82: Recognition using neural networks
    • G06T 2207/20081: Training; learning
    • G06T 2207/20221: Image fusion; image merging
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30121: CRT, LCD or plasma display
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a nondestructive testing model construction method, a nondestructive testing method, a nondestructive testing device and a medium, comprising the following steps: based on an acquired thermal image of a target detection object, a feature extraction network model is used to obtain a feature map; multi-scale feature communication processing is performed on the feature map to obtain a multi-scale feature communication map; multidimensional feature pooling processing is performed on the feature map to obtain multidimensional pooling features; based on the multi-scale feature communication map and the multidimensional pooling features, a mask prediction model is used to obtain a prediction mask; and based on the prediction mask, the multidimensional pooling features and the pre-convolved feature map, a continuous cognitive model is trained until a preset condition is met, yielding a trained nondestructive testing model. The nondestructive testing model detects defects of the target detection object with high precision and is particularly suitable for nondestructive testing of complex workpieces.

Description

Nondestructive testing model construction method, nondestructive testing device and medium
Technical Field
The present application relates to the field of non-destructive testing technologies, and in particular, to a method for constructing a non-destructive testing model, a non-destructive testing method, a non-destructive testing device, and a medium.
Background
Currently, with the continuous improvement of computer hardware and graphics card computing power and the continuous updating of deep learning theory, vision-based deep neural networks have developed rapidly. As a result, deep neural network technology has gradually begun to replace traditional image processing techniques in product surface defect detection. As an emerging research direction in the machine learning field, the goal of the deep neural network is to make machines more intelligent and ultimately realize fully autonomous artificial intelligence. This development trend complements the increase in hardware performance and provides a more efficient and accurate solution for detecting surface defects of products.
Infrared thermal wave nondestructive testing is a rapid and effective nondestructive testing technology based on heat conduction and infrared radiation theory, with outstanding advantages in the detection and characterization of material surface defects. Owing to its non-contact operation, high sensitivity and high spatial resolution, it has become an important method for detecting material defects and damage. However, when detecting complex workpieces, conventional infrared thermal wave nondestructive testing still suffers from insufficient recognition accuracy and potentially reduced detection efficiency.
Disclosure of Invention
The application aims to provide a nondestructive testing model construction method, a nondestructive testing method, a nondestructive testing device and a medium, which at least solve the problems in the related art that conventional infrared thermal wave nondestructive testing has insufficient recognition accuracy and potentially reduced detection efficiency when detecting complex workpieces.
The first aspect of the application provides a method for constructing a nondestructive testing model, which comprises the following steps:
based on the acquired thermal image of the target detection object, adopting a feature extraction network model to obtain a feature map;
Performing multi-scale feature communication processing based on the feature map to obtain a multi-scale feature communication map; carrying out multidimensional feature pooling processing based on the feature map to obtain multidimensional pooling features; based on the multi-scale feature communication graph and the multi-dimensional pooling features, a mask prediction model is adopted to obtain a prediction mask;
Based on the prediction mask, the multidimensional pooling features and the feature map after the pre-convolution, training by adopting a continuous cognitive model until a preset condition is met, and obtaining a trained nondestructive testing model.
In one embodiment, the feature map includes a first feature map, a second feature map, a third feature map, and a fourth feature map;
The feature extraction network model comprises a first embedded block, a second embedded block, a third embedded block and a fourth embedded block which are sequentially connected;
The first embedded block, the second embedded block, the third embedded block and the fourth embedded block each comprise an initial convolution block and an STB basic block, wherein the initial convolution block is formed by stacking a 1×1 convolution layer, a 3×3 convolution layer and a 1×1 convolution layer, and the STB basic block comprises a multi-head attention mechanism;
The obtaining a feature map by adopting a feature extraction network model based on the obtained thermal image map of the target detection object comprises the following steps:
according to the thermal image, extracting features through the first embedded block to obtain a first feature image;
according to the first feature map, feature extraction is carried out through the second embedded block, and a second feature map is obtained;
According to the second feature map, feature extraction is carried out through the third embedded block, and a third feature map is obtained;
and carrying out feature extraction through the fourth embedded block according to the third feature map to obtain a fourth feature map.
In one embodiment, each convolution layer in the initial convolution block is followed by a BN layer and a ReLU activation function.
In one embodiment, the performing the multi-scale feature communication processing based on the feature map to obtain a multi-scale feature communication map includes:
performing image merging processing according to the first feature map, the second feature map, the third feature map and the fourth feature map, and determining a first merged feature map, a second merged feature map, a third merged feature map and a fourth merged feature map which correspond to the first feature map, the second feature map, the third feature map and the fourth feature map;
Performing autocorrelation processing according to the first combined feature map, the second combined feature map, the third combined feature map and the fourth combined feature map, and determining a correlation map;
and carrying out multi-scale feature communication processing according to the correlation diagram to obtain the multi-scale feature communication diagram.
In one embodiment, the performing multidimensional feature pooling processing based on the feature map to obtain multidimensional pooled features includes:
performing image merging processing according to the first feature map, the second feature map, the third feature map and the fourth feature map, and determining a first merged feature map, a second merged feature map, a third merged feature map and a fourth merged feature map which correspond to the first feature map, the second feature map, the third feature map and the fourth feature map;
Respectively carrying out convolution processing according to the first merging feature map, the second merging feature map, the third merging feature map and the fourth merging feature map to obtain a first convolution feature map, a second convolution feature map, a third convolution feature map and a fourth convolution feature map which are corresponding to each other;
performing tensor stitching according to the first convolution feature map, the second convolution feature map, the third convolution feature map and the fourth convolution feature map to obtain stitching tensors;
and carrying out multidimensional feature pooling processing according to the splicing tensor to obtain the multidimensional pooling features.
In one embodiment, the mask prediction model includes one 1×1 convolutional layer with softmax and an F_upscale module, wherein the F_upscale module includes three 1×1 convolutional layers and three ×2 bilinear upsampling layers, each 1×1 convolutional layer being further followed by a ReLU activation function.
In one embodiment, the total loss function of the continuous cognitive model is:

L = L_ce + λ(L_dhw + L_dh)

where L is the total loss function; L_ce is the cross-entropy loss function, in which y_i is the preset label image and p_i is the predicted value of the continuous cognitive model; L_dhw is the multidimensional feature pooling loss function, in which P_dhw is the multidimensional pooling result and y_dhw is the position corresponding to the defect feature; L_dh is the strip pooling loss function, in which P_dh is the strip pooling result and y_dh is the position corresponding to the defect feature; d, h and w are respectively the depth, height and width of the multi-scale feature communication map; and λ is a constant parameter.
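The loss terms named here can be sketched in NumPy. The per-term formulas (a summed cross entropy and mean binary cross entropies for the two pooling terms) and the way the terms are combined are standard assumptions for illustration; the patent text does not reproduce the exact expressions, and all function names are hypothetical:

```python
import numpy as np

def cross_entropy(y, p, eps=1e-12):
    # L_ce = -sum_i y_i * log(p_i); y the preset label image, p the model prediction
    return -np.sum(y * np.log(np.clip(p, eps, 1.0)))

def pooled_bce(pooled, target, eps=1e-12):
    # binary cross entropy between a pooling result and the defect positions
    pooled = np.clip(pooled, eps, 1.0 - eps)
    return -np.mean(target * np.log(pooled) + (1 - target) * np.log(1 - pooled))

def total_loss(y, p, p_dhw, y_dhw, p_dh, y_dh, lam=0.5):
    # cross entropy plus lambda-weighted pooling losses (combination form assumed)
    return cross_entropy(y, p) + lam * (pooled_bce(p_dhw, y_dhw) + pooled_bce(p_dh, y_dh))
```

With perfect predictions every term vanishes, which is a quick sanity check on the signs of the log terms.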
A second aspect of the present application provides a nondestructive testing method, comprising:
acquiring a target thermal image of a target detection object;
and determining a target detection result of the target detection object by using the nondestructive detection model constructed by the nondestructive detection model construction method based on the target thermal image.
A third aspect of the present application provides a nondestructive testing apparatus comprising:
A laser for generating infrared laser light to irradiate the target detection object;
the infrared imager is used for receiving an infrared thermal wave signal reflected by the target detection object and converting the infrared thermal wave signal into an electric signal;
the water cooler is used for cooling heat generated by the laser;
the upper computer is used for carrying out nondestructive detection according to the electric signals;
The system comprises a memory and one or more processors, wherein executable codes are stored in the memory, and the one or more processors are used for realizing the nondestructive testing method when executing the executable codes.
A fourth aspect of the present application provides a computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the above-described non-destructive inspection method.
The nondestructive testing model construction method, the nondestructive testing device and the medium provided by the embodiment of the application have at least the following technical effects.
Obtaining a feature map by acquiring a thermal image and adopting a feature extraction network model; performing multi-scale feature communication processing based on the feature map to obtain a multi-scale feature communication map; carrying out multidimensional feature pooling processing based on the feature map to obtain multidimensional pooling features; based on the multi-scale feature communication graph and the multi-dimensional pooling features, a mask prediction model is adopted to obtain a prediction mask; based on the prediction mask, the multidimensional pooling features and the feature map after the pre-convolution, training is performed by adopting a continuous cognitive model until a preset condition is met, and a trained nondestructive testing model is obtained. The nondestructive testing model is used for detecting the defects of the target testing object with high precision, and is particularly suitable for nondestructive testing of complex workpieces.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the other features, objects, and advantages of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
FIG. 1 is a schematic flow chart of a method for constructing a nondestructive testing model according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of obtaining a feature map according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a multi-scale feature communication diagram according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of obtaining multidimensional pooling features according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of a nondestructive testing method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a nondestructive testing method according to an embodiment of the present application;
fig. 7 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the application, its application, or uses. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It is apparent that the drawings in the following description are only some examples or embodiments of the present application, and those of ordinary skill in the art may apply the present application to other similar situations according to these drawings without inventive effort. Moreover, it should be appreciated that while such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure, and should not be construed as limiting the present disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the described embodiments of the application can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," and similar referents in the context of the application are not to be construed as limiting the quantity, but rather as singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in connection with the present application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means two or more. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The character "/" generally indicates that the context-dependent object is an "or" relationship. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
The embodiment of the application provides a nondestructive testing model construction method, a nondestructive testing device and a medium.
In a first aspect, an embodiment of the present application provides a method for constructing a nondestructive testing model, and fig. 1 is a schematic flow chart of the method for constructing a nondestructive testing model, as shown in fig. 1, where the method includes the following steps:
Step S101, based on the acquired thermal image of the target detection object, a feature image is obtained by adopting a feature extraction network model.
In one embodiment, the feature map includes a first feature map, a second feature map, a third feature map, and a fourth feature map;
The feature extraction network model comprises a first embedded block, a second embedded block, a third embedded block and a fourth embedded block which are sequentially connected.
The first embedded block, the second embedded block, the third embedded block and the fourth embedded block each comprise an initial convolution block and an STB basic block, wherein the initial convolution block is formed by stacking a 1×1 convolution layer, a 3×3 convolution layer and a 1×1 convolution layer, and the STB basic block comprises a multi-head attention mechanism. Each convolution layer in the initial convolution block is followed by a BN layer and a ReLU activation function.
Specifically, the initial convolution block in the embedded block performs dimension-raising and dimension-reducing operations in the channel dimension. The number of channels of the feature map can be adjusted by the convolution layers in the initial convolution block so as to better fuse information among channels, allowing the network to learn more complex and abstract feature information. Each convolution layer can capture features of a different scale, thereby extracting more spatial detail; this hierarchical feature information improves the accuracy and robustness of the model. In addition, through the learning of the initial convolution block, the mask prediction model acquires a certain prior knowledge of spatial structure and shape, so that the position and shape of defects of the target detection object can be predicted better. Such a structural prior helps the model locate and segment the target more accurately.
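As a concrete illustration, the 1×1, 3×3, 1×1 stack with BN and ReLU can be sketched in NumPy. The function names, shapes and inference-style normalization (using the map's own statistics rather than learned running statistics) are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def conv1x1(x, w):
    # x: (C_in, H, W), w: (C_out, C_in); pointwise mixing across channels
    return np.tensordot(w, x, axes=([1], [0]))

def bn_relu(x, eps=1e-5):
    # per-channel normalization followed by ReLU
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return np.maximum((x - mean) / np.sqrt(var + eps), 0.0)

def conv3x3(x, w):
    # x: (C_in, H, W), w: (C_out, C_in, 3, 3); stride 1, zero padding 1
    c_in, h, wd = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((w.shape[0], h, wd))
    for i in range(3):
        for j in range(3):
            patch = xp[:, i:i + h, j:j + wd]  # shifted view of the padded input
            out += np.tensordot(w[:, :, i, j], patch, axes=([1], [0]))
    return out

def initial_conv_block(x, w1, w2, w3):
    # 1x1 (reduce) -> 3x3 -> 1x1 (raise), each followed by BN + ReLU
    x = bn_relu(conv1x1(x, w1))
    x = bn_relu(conv3x3(x, w2))
    x = bn_relu(conv1x1(x, w3))
    return x
```

The two 1×1 layers change only the channel count, while the 3×3 layer is the only one mixing spatial neighborhoods, which is why the block can reduce and then restore the channel dimension cheaply.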
The multi-head attention mechanism includes:

Query: the query matrix Q_i = X_i·W_i^q is obtained by multiplying the feature X_i by the query weight matrix W_i^q.

Key: the key matrix K_i = X_i·W_i^k is obtained by multiplying the feature X_i by the key weight matrix W_i^k.

Value: the value matrix V_i = X_i·W_i^v is obtained by multiplying the feature X_i by the value weight matrix W_i^v.
Then, the query matrix Q_i, the key matrix K_i and the value matrix V_i are passed into the attention mechanism, and the output of the kth attention head is calculated as:

Y_i^k = softmax(Q_i K_i^T / √d_k) · V_i

where d_k is the dimension of the key vectors. In the attention mechanism, weights are calculated from the similarity between the query matrix Q_i and the key matrix K_i, and are then multiplied by the value matrix V_i to obtain the output Y_i^k of the kth attention head. Applying the multi-head attention mechanism to the features obtained from each window thus enables information interaction and feature fusion between windows, improving the expressive power and anti-interference capability of the network.
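A minimal NumPy sketch of one such attention head follows. The scaled dot-product form with a √d_k scaling factor is the standard assumption for this kind of head; the patent text does not reproduce the exact formula, and the names here are illustrative:

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(x, wq, wk, wv):
    # Q = X W^q, K = X W^k, V = X W^v for the features of one window
    q, k, v = x @ wq, x @ wk, x @ wv
    d_k = k.shape[-1]
    weights = softmax(q @ k.T / np.sqrt(d_k))  # similarity of queries to keys
    return weights @ v                          # weighted sum of the values
```

Each row of `weights` sums to 1, so every output position is a convex combination of the value vectors.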
Fig. 2 is a schematic flow chart of obtaining a feature map according to an embodiment of the present application, as shown in fig. 2, on the basis of the flow chart shown in fig. 1, step S101 includes the following steps:
Step S201, performing feature extraction through the first embedded block according to the thermal image to obtain a first feature map.
Specifically, the thermal image is input into a first embedded block, which performs feature extraction on the input thermal image, and typically employs convolution, attention mechanisms, and the like to capture local and global features in the image.
It should be noted that the thermal image is preprocessed, including cropping, normalization, rotation, scaling and the like, to obtain a preprocessed thermal image before feature extraction is performed.
Step S202, extracting features through a second embedded block according to the first feature map to obtain a second feature map.
Specifically, the first feature map is taken as input and is transmitted into a second embedded block, the second embedded block further extracts features, and the operations of convolution, attention mechanisms and the like are adopted to acquire richer semantic information. After the second embedding block processing, a second feature map is obtained, which contains higher level feature representations.
And step S203, carrying out feature extraction through a third embedded block according to the second feature map to obtain a third feature map.
Specifically, the second feature map is input into the third embedded block, which continues to extract features and may further increase the depth of the network to improve the expression capability of the model. After processing by the third embedded block, a third feature map is obtained, which contains more abstract and semantically rich feature information.
And step S204, carrying out feature extraction through a fourth embedded block according to the third feature map to obtain a fourth feature map.
Specifically, the third feature map is taken as input and is transmitted into a fourth embedded block, and the fourth embedded block further extracts features and possibly processes and adjusts the features to adapt to specific task requirements. After the fourth embedding block processing, a fourth feature map is obtained, which contains higher-level, more global feature representations.
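The four stages of steps S201 to S204 form a simple sequential pipeline in which each embedded block consumes the previous block's output. The sketch below shows only that data flow; a 2× average pooling stands in for the real embedded block (initial convolution block plus STB basic block), so the stage internals are an assumption purely for illustration:

```python
import numpy as np

def avg_pool2(x):
    # 2x2 average pooling, halving the spatial resolution of (C, H, W)
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def extract_feature_maps(thermal, num_blocks=4):
    # stand-in for the first..fourth embedded blocks applied in sequence
    feats, x = [], thermal
    for _ in range(num_blocks):
        x = avg_pool2(x)
        feats.append(x)
    return feats  # [first, second, third, fourth] feature maps
```

Keeping all four intermediate maps, rather than only the last, is what later enables the multi-scale merging and pooling steps.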
With continued reference to fig. 1, step S102 is performed after step S101, as follows.
Step S102, carrying out multi-scale feature communication processing based on the feature map to obtain a multi-scale feature communication map; carrying out multidimensional feature pooling processing based on the feature map to obtain multidimensional pooling features; and obtaining a prediction mask by adopting a mask prediction model based on the multi-scale feature communication graph and the multi-dimensional pooling features.
Fig. 3 is a schematic flow chart of obtaining a multi-scale feature communication chart according to an embodiment of the present application, as shown in fig. 3, based on the flow chart shown in fig. 1, including the following steps:
step S301, performing image merging processing according to the first feature map, the second feature map, the third feature map and the fourth feature map, and determining a corresponding first merged feature map, second merged feature map, third merged feature map and fourth merged feature map.
Specifically, image merging processing is performed to obtain a first merged feature map, a second merged feature map, a third merged feature map and a fourth merged feature map corresponding to the original image size, wherein abstract feature representations of the thermal image map are included.
I_up(x, y) = F(⌊x/k⌋, ⌊y/k⌋)

where I_up(x, y) is the merged feature map at the original image size obtained by upsampling, (x, y) are its coordinates, F is the output feature map, k is the upsampling factor, and ⌊·⌋ denotes rounding down. In this upsampling operation, an interpolation method is used to fill in a feature map of the original image size: for each pixel location (x, y), the nearest integer coordinate (⌊x/k⌋, ⌊y/k⌋) is found, and the feature at that location is assigned to I_up(x, y). This maps the feature values of the low-resolution feature map back to the original image size.
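This floor-coordinate mapping amounts to nearest-neighbor upsampling by an integer factor k; a NumPy sketch (the helper name is illustrative):

```python
import numpy as np

def upsample_nearest(f, k):
    # I_up(x, y) = F(floor(x / k), floor(y / k)) for an integer factor k
    h, w = f.shape[-2:]
    rows = np.arange(h * k) // k   # floor(x / k) for every output row
    cols = np.arange(w * k) // k   # floor(y / k) for every output column
    return f[..., rows[:, None], cols[None, :]]
```

Each low-resolution value is simply repeated over a k×k block of the output.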
Step S302, performing autocorrelation processing according to the first merging feature map, the second merging feature map, the third merging feature map and the fourth merging feature map, and determining a correlation map.
Specifically, the autocorrelation processing is performed in the same manner for each merged feature map; taking the first merged feature map as an example, autocorrelation is computed on I_up(x, y). The feature data F_m at each row position m and the feature data F_n at each column position n of the first merged feature map are first normalized:

F̂_m = F_m / ‖F_m‖₂,  F̂_n = F_n / ‖F_n‖₂

wherein ‖·‖₂ is the two-norm, F̂_m is the normalized row feature and F̂_n is the normalized column feature.

A scalar product C(m, n) is then calculated between each pair of positions:

C(m, n) = ⟨F̂_m, F̂_n⟩

and the results are accumulated to obtain the correlation map C:

C = {C(m, n)} ∈ R
The correlation graph C is a matrix in which elements represent similarities or correlations between different locations in the feature graph.
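The normalize-then-scalar-product computation of step S302 can be sketched as follows; treating the merged feature map as one feature vector per spatial position is an assumption made for illustration:

```python
import numpy as np

def correlation_map(vecs: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    # vecs: one feature vector per spatial position, shape (P, C).
    # Normalize each vector by its two-norm, then take scalar products
    # between every pair of positions to build the correlation matrix C.
    unit = vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + eps)
    return unit @ unit.T  # C[m, n] = <F_m_hat, F_n_hat>
```

The resulting matrix is symmetric, with values near 1 for highly similar positions and near 0 for dissimilar ones.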
And step S303, carrying out multi-scale feature communication processing according to the related graph to obtain a multi-scale feature communication graph.
The multi-scale feature communication processing applies a learned, offset-weighted mapping over the correlation map, of the form:

F(C)_(m, n) = Σ w · C_(m+δ_m, n+δ_n) + b

wherein F(C) is the continuous feature mapping of the correlation map C, i.e., the multi-scale feature communication map, C_(i, n) is the element in the i-th row and n-th column of the correlation map C, w are the multi-scale communication parameters, δ_m and δ_n are the spatial offset values along the two spatial axes, and b is a constant parameter.
By carrying out weighted summation on the features of different scales, the multi-scale information can be effectively fused, and the detection and recognition capability of the model on targets of different scales can be improved.
Fig. 4 is a schematic flow chart of obtaining multidimensional pooling features according to an embodiment of the present application, as shown in fig. 4, based on the flow chart shown in fig. 1, including the following steps:
Step S401, performing image merging processing according to the first feature map, the second feature map, the third feature map and the fourth feature map, and determining the corresponding first merged feature map, second merged feature map, third merged feature map and fourth merged feature map.
The processing steps herein are identical to step S301, and will not be described here again.
And step S402, respectively carrying out convolution processing according to the first merging feature map, the second merging feature map, the third merging feature map and the fourth merging feature map to obtain a corresponding first convolution feature map, a corresponding second convolution feature map, a corresponding third convolution feature map and a corresponding fourth convolution feature map.
Specifically, the first merging feature map obtains a first convolution feature map through 1×1 convolution extraction features, the second merging feature map obtains a second convolution feature map through 1×1 convolution extraction features, the third merging feature map obtains a third convolution feature map through 1×1 convolution extraction features, and the fourth merging feature map obtains a fourth convolution feature map through 1×1 convolution extraction features.
And step S403, performing tensor splicing according to the first convolution feature map, the second convolution feature map, the third convolution feature map and the fourth convolution feature map to obtain a spliced tensor.
Specifically, the tensor stitching can be written as:

T(A, B, …)_(m, n) = [A_(m, n); B_(m, n); …]

wherein T(A, B, …) is the stitching tensor, A, B, … are the first convolution feature map, second convolution feature map, third convolution feature map and fourth convolution feature map that need stitching, and the subscripts m, n respectively denote the rows and columns of the images.
Through the tensor stitching operation, a new stitching tensor T can be obtained, which contains the information of the first convolution feature map, the second convolution feature map, the third convolution feature map and the fourth convolution feature map, stitched in a row-column manner.
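A minimal sketch of this stitching operation, under the assumption that it stacks the spatially aligned convolution feature maps along a new channel axis:

```python
import numpy as np

def stitch(*feature_maps: np.ndarray) -> np.ndarray:
    # Stack the per-scale 1x1-convolution outputs along a new last axis,
    # preserving the row/column (m, n) alignment of every map.
    return np.stack(feature_maps, axis=-1)
```

Four h×w maps stitched this way yield a single h×w×4 tensor that the pooling step below can decompose by channel.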
And step S404, carrying out multidimensional feature pooling processing according to the splicing tensor to obtain multidimensional pooling features.
Specifically, the stitching tensor is decomposed into feature maps of different channels, and a feature pooling operation is carried out on the feature map of each channel to obtain the multidimensional pooling features. The specific calculation process is as follows: let the size of the stitching tensor T be h×w×c, where h and w denote the height and width of the feature map respectively, and c denotes the number of channels. The stitching tensor T is decomposed along c into c feature maps, denoted T_1, T_2, …, T_c, where each T_i has size h×w. A feature pooling operation is carried out on each feature map T_i to obtain a pooled feature F_i; common feature pooling operations include max pooling, average pooling, and the like. The pooled feature maps F_i are then recombined along c to obtain the multidimensional pooled feature F of size h′×w′×c, where h′ and w′ are the height and width after pooling. Preferably, the present embodiment employs a three-dimensional feature pooling process.
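The decompose-pool-recombine procedure above can be sketched as follows (max pooling is chosen as the example operation, and the window size `p` is an illustrative parameter):

```python
import numpy as np

def multidim_pool(t: np.ndarray, p: int = 2) -> np.ndarray:
    # Decompose the h x w x c tensor into c channel maps, max-pool each
    # with a p x p window, and recombine along the channel dimension.
    h, w, c = t.shape
    h2, w2 = h // p, w // p
    out = np.empty((h2, w2, c), dtype=t.dtype)
    for i in range(c):
        ch = t[:h2 * p, :w2 * p, i]
        out[:, :, i] = ch.reshape(h2, p, w2, p).max(axis=(1, 3))
    return out
```

A 4×4×2 tensor pooled with p = 2 produces a 2×2×2 feature, each entry being the maximum of one 2×2 block of one channel.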
In one embodiment, the mask prediction model includes a 1×1 convolution layer with softmax and an fupscale module, where the fupscale module includes 3 1×1 convolution layers and 3 bilinear upsampling layers f×2, each 1×1 convolution layer being followed by a ReLU activation function.
And obtaining a prediction mask by adopting a mask prediction model based on the multi-scale feature communication graph and the multi-dimensional pooling features.
According to the multi-scale feature communication map and the multi-dimensional pooling features, a group of features with fewer channels is first obtained through a 1×1 convolution layer, and the softmax activation function converts the output into a probability distribution. The fupscale module is then used to fuse and refine the features. The fupscale module includes 3 1×1 convolution layers and 3 bilinear upsampling layers f×2, each 1×1 convolution layer being followed by a ReLU activation function. In fupscale, a 1×1 convolution layer is first used to increase the feature dimension and adjust the number of output channels to the desired number. The spatial resolution of the feature tensor is then doubled using a bilinear upsampling layer, where f×2 denotes upsampling each spatial dimension of the feature tensor by a factor of 2. Performing three rounds of 1×1 convolution and bilinear upsampling realizes feature fusion and refinement. Finally, the prediction mask is obtained by processing through the 1×1 convolution layer with the ReLU activation function. Because the prediction mask considers the multi-scale features and the multi-dimensional pooling features simultaneously, image information is captured better and prediction precision is improved.
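A shape-level sketch of the fupscale pipeline is given below. The 1×1 convolution is written as a per-pixel matrix product over channels; nearest-neighbour repetition stands in for the bilinear f×2 layer, and all weight matrices are placeholders:

```python
import numpy as np

def conv1x1_relu(x: np.ndarray, wmat: np.ndarray) -> np.ndarray:
    # A 1x1 convolution is a per-pixel linear map over channels;
    # each 1x1 layer in fupscale is followed by a ReLU.
    return np.maximum(x @ wmat, 0.0)

def up2(x: np.ndarray) -> np.ndarray:
    # Stand-in for the bilinear f x 2 layer: doubles each spatial
    # dimension (nearest-neighbour repetition used here for brevity).
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fupscale(x: np.ndarray, weights) -> np.ndarray:
    # Three (1x1 conv + ReLU, then x2 upsample) stages, as described.
    for wmat in weights:
        x = up2(conv1x1_relu(x, wmat))
    return x
```

Three stages multiply each spatial dimension by 8 overall, e.g. an 8×8×16 input with placeholder weights of shapes (16, 8), (8, 8) and (8, 4) yields a 64×64×4 output.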
With continued reference to fig. 1, step S103 is performed after step S102, as follows.
And step S103, training by adopting a continuous cognitive model based on the prediction mask, the multidimensional pooling features and the feature map after the pre-convolution until a preset condition is met, so as to obtain a trained nondestructive testing model.
In one embodiment, the total loss function of the continuous cognitive model is:

L_total = L_ce + L_pool + λ · L_strip

wherein L_total is the total loss function, L_ce is the cross entropy loss function, y_i is a preset label image, p_i is the predicted value of the continuous cognitive model, L_pool is the multidimensional feature pooling loss function, P_dhw is the multidimensional pooling result, y_dhw is the position corresponding to the defect feature, L_strip is the strip pooling loss function, P_dh is the strip pooling result, y_dh is the position corresponding to the defect feature, d, h and w are respectively the depth, height and width of the multi-scale feature communication map, and λ is a constant parameter.
In particular, in the field of image processing, a continuous cognitive model (Continuous Cognitive Model) is a model based on the principles of operation of the human visual system that attempts to simulate the manner in which the human eye and brain are processing images. The continuous cognitive model realizes image processing by dividing an image into a plurality of partial areas and performing feature extraction and processing in each area. These local areas may be pixels, patches or larger image areas. The model may select different region sizes and resolutions depending on the requirements of the task. As each region is processed, the continuous cognitive model will use a series of filters and feature extractors to extract the local features of the image. These features are then passed to subsequent processing layers, such as classifiers or object detectors, for higher-level image understanding tasks.
The cross entropy loss function L_ce is used to measure the difference between the predicted value p_i of the continuous cognitive model and the preset label image y_i. By minimizing this loss function, the model can predict the target class more accurately.

The multidimensional feature pooling loss function L_pool measures the difference between the multidimensional pooling result P_dhw and the defect feature position y_dhw. This loss function helps the model learn better feature representations and improves the detection of defect areas.

The strip pooling loss function L_strip measures the difference between the strip pooling result P_dh and the defect feature position y_dh. This loss function helps the model learn better feature representations and improves the detection of strip-shaped defects.

By training the continuous cognitive model and minimizing the total loss function L_total, a trained nondestructive testing model can be obtained. After training, for a target thermal image, a defect probability value can be obtained through the evaluation module in the trained nondestructive testing model.
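One plausible reading of the three-term objective is sketched below with placeholder per-term formulas (binary cross-entropy and mean squared error are assumptions made for illustration; the patent does not reproduce its exact per-term formulas here):

```python
import numpy as np

def total_loss(p, y, P_pool, y_pool, P_strip, y_strip, lam=0.5):
    # Hedged sketch: cross-entropy between predictions p and labels y,
    # plus a pooling MSE term, plus lambda times a strip-pooling MSE term.
    eps = 1e-8
    ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    l_pool = np.mean((P_pool - y_pool) ** 2)
    l_strip = np.mean((P_strip - y_strip) ** 2)
    return ce + l_pool + lam * l_strip
```

Minimizing this combined objective jointly drives the class predictions and both pooling results toward their targets.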
In summary, according to the method for constructing the nondestructive testing model provided by the embodiment of the application, a thermal image is obtained, and a characteristic image is obtained by adopting a characteristic extraction network model; performing multi-scale feature communication processing based on the feature map to obtain a multi-scale feature communication map; carrying out multidimensional feature pooling processing based on the feature map to obtain multidimensional pooling features; based on the multi-scale feature communication graph and the multi-dimensional pooling features, a mask prediction model is adopted to obtain a prediction mask; based on the prediction mask, the multidimensional pooling features and the feature map after the pre-convolution, training is performed by adopting a continuous cognitive model until a preset condition is met, and a trained nondestructive testing model is obtained. The nondestructive testing model is used for detecting the defects of the target testing object with high precision, and is particularly suitable for nondestructive testing of complex workpieces.
In a second aspect, an embodiment of the present application provides a nondestructive testing method, and fig. 5 is a schematic flow chart of the nondestructive testing method provided in the embodiment of the present application, as shown in fig. 5, where the method includes the following steps:
Step S501, acquiring a target thermal image of a target detection object.
Step S502, determining a target detection result of the target detection object by using the nondestructive detection model constructed by the nondestructive detection model construction method based on the target thermal image.
Specifically, the embodiment of the application adopts a nondestructive testing model based on the FMH-Transformer. The FMH-Transformer is a neural network model based on attention mechanisms, where FMH (Factorized Multi-Head) refers to decomposing the original multi-head attention mechanism into two steps: local attention and global attention. The multi-head attention mechanism contained in the STB basic block of each embedded block corresponds to the local attention, which is used to capture local information, while the global attention is used to capture global information. In the FMH-Transformer, each attention head is divided into a local sub-head and a global sub-head, responsible for calculating attention weights within a local range and over the whole sequence range, respectively. Such decomposition can increase computational efficiency and better capture different levels of semantic information. Besides the improvement to the attention mechanism, the FMH-Transformer also introduces other techniques such as position encoding, residual connections and layer normalization to improve model performance and training. An evaluation module in the FMH-Transformer-based nondestructive testing model judges whether defects exist in the image by analyzing the target pooling features and outputs a defect probability value as the target detection result. Based on the magnitude of the probability value, it can be determined whether a defect exists.
A target thermal image of the target detection object is acquired, and the target detection result of the target detection object is determined through the nondestructive detection model based on the target thermal image. The nondestructive detection model includes an evaluation module, which obtains target multidimensional pooling features from the target thermal image and, from these features, outputs a defect probability value as the target detection result. The target multidimensional pooling features can be converted into a defect probability value using, for example, a softmax function.
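The softmax conversion mentioned above can be sketched as follows; the two-class score vector and the choice of index 1 as the "defect" class are illustrative assumptions:

```python
import numpy as np

def defect_probability(scores: np.ndarray) -> float:
    # Softmax over a two-class score vector; index 1 is taken to be the
    # "defect" class (an assumption made for illustration).
    z = scores - scores.max()          # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return float(probs[1])
```

Equal scores yield a probability of 0.5; a strongly positive defect score pushes the probability toward 1, at which point the result can be compared against a decision threshold.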
Fig. 6 is a schematic diagram of a frame of a nondestructive testing method according to an embodiment of the present application, as shown in fig. 6, including:
Step S1: based on the acquired thermal image of the target detection object, extracting features through the first embedded block to obtain a first feature image;
step S2: according to the first feature map, feature extraction is carried out through the second embedded block, and a second feature map is obtained;
step S3: according to the second feature map, feature extraction is carried out through the third embedded block, and a third feature map is obtained;
Step S4: and carrying out feature extraction through the fourth embedded block according to the third feature map to obtain a fourth feature map.
Step S5: performing image merging processing according to the first feature map, the second feature map, the third feature map and the fourth feature map, and determining a first merged feature map, a second merged feature map, a third merged feature map and a fourth merged feature map which correspond to the first feature map, the second feature map, the third feature map and the fourth feature map;
step S6: performing autocorrelation processing according to the first combined feature map, the second combined feature map, the third combined feature map and the fourth combined feature map, and determining a correlation map;
Step S7: and carrying out multi-scale feature communication processing according to the correlation diagram to obtain the multi-scale feature communication diagram.
Step S8: respectively carrying out convolution processing according to the first merging feature map, the second merging feature map, the third merging feature map and the fourth merging feature map to obtain a first convolution feature map, a second convolution feature map, a third convolution feature map and a fourth convolution feature map which are corresponding to each other;
Step S9: performing tensor stitching according to the first convolution feature map, the second convolution feature map, the third convolution feature map and the fourth convolution feature map to obtain stitching tensors;
step S10: and carrying out multidimensional feature pooling processing according to the splicing tensor to obtain the multidimensional pooling features.
Step S11: based on the multi-scale feature communication graph and the multi-dimensional pooling features, a mask prediction model is adopted to obtain a prediction mask;
Step S12: based on the prediction mask, the multidimensional pooling features and the feature map after the pre-convolution, training by adopting a continuous cognitive model until a preset condition is met, and obtaining a trained nondestructive testing model.
Step S13: and acquiring a target thermal image of a target detection object, and determining a target detection result of the target detection object through an evaluation module in a trained nondestructive detection model based on the target thermal image.
It should be noted that the steps illustrated in the above-described flow or flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order other than that illustrated herein.
In a third aspect, an embodiment of the present application provides a nondestructive testing apparatus, including:
A laser for generating infrared laser light to irradiate the target detection object;
The infrared imager is used for receiving an infrared thermal wave signal reflected by the target detection object and converting the infrared thermal wave signal into an electric signal;
the water cooler is used for cooling heat generated by the laser;
the upper computer is used for carrying out nondestructive detection according to the electric signals;
a memory having executable code stored therein and one or more processors, which when executed, are operable to implement the steps of any of the method embodiments described above.
Specifically, the laser is used as a radiation source, can generate infrared laser radiation, has a wavelength generally between that of visible light and microwaves, has good penetrating power, and can penetrate through the target detection object and generate reflection so as to detect and analyze information such as structures, defects, temperature distribution and the like in the target detection object. The upper computer is communicated with the laser through a standard RS232 interface and controls the laser, and is firstly responsible for monitoring and recording the temperature of each module in the laser during operation, and checking and diagnosing faults of each module which possibly occur so as to ensure that the laser can maintain stable temperature and operation state during operation. Subsequently, the upper computer is switched to a user use mode, so that an operator is allowed to select different operation modes, such as an external continuous mode and the like, so as to meet different working requirements.
The water cooler is a water circulation system comprising a water pump, a radiator, a water tank and cooling pipelines. When the laser operates, the heat it generates is conducted to the water cooler: the water pump draws cooling water from the tank and delivers it through the radiator and cooling pipelines into contact with the laser, where the water absorbs heat; the warmed water is then conveyed back to the tank, releases its heat through the radiator and pipelines, gradually cools, and is circulated again to cool the equipment, ultimately ensuring the safe operation of the thermal excitation module.
The infrared thermal wave imager receives an infrared thermal wave signal reflected by a target detection object through a built-in infrared thermal sensor, converts the infrared thermal wave signal into an electric signal, records and processes the electric signal to obtain temperature changes of different positions of the target detection object, further generates a temperature distribution diagram, and sends data to an upper computer. The upper computer receives the temperature distribution diagram sent by the infrared thermal wave imager and detects whether the detected object is damaged or not through the nondestructive detection model.
Optionally, the nondestructive testing device may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and this embodiment is not repeated herein.
In a fourth aspect, in combination with the nondestructive testing method in the foregoing embodiment, an embodiment of the present application may be implemented by providing a storage medium. The storage medium has a computer program stored thereon; the computer program, when executed by a processor, implements any of the nondestructive testing methods of the above embodiments.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a non-destructive inspection method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
In an embodiment, fig. 7 is a schematic diagram of an internal structure of an electronic device according to an embodiment of the present application, as shown in fig. 7, and an electronic device, which may be a server, and an internal structure diagram of which may be shown in fig. 7, is provided. The electronic device includes a processor, a network interface, an internal memory, and a non-volatile memory connected by an internal bus, where the non-volatile memory stores an operating system, computer programs, and a database. The processor is used for providing computing and control capabilities, the network interface is used for communicating with an external terminal through a network connection, the internal memory is used for providing an environment for the operation of an operating system and a computer program, the computer program is executed by the processor to realize a nondestructive testing method, and the database is used for storing data.
It will be appreciated by those skilled in the art that the structure shown in fig. 7 is merely a block diagram of a portion of the structure associated with the present inventive arrangements and is not limiting of the electronic device to which the present inventive arrangements are applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous link (SYNCHLINK) DRAM (SLDRAM), memory bus (Rambus) direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
It should be understood by those skilled in the art that the technical features of the above-described embodiments may be combined in any manner, and for brevity, all of the possible combinations of the technical features of the above-described embodiments are not described, however, they should be considered as being within the scope of the description provided herein, as long as there is no contradiction between the combinations of the technical features.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (6)

1. A method for constructing a non-destructive testing model, the method comprising:
based on the acquired thermal image of the target detection object, adopting a feature extraction network model to obtain a feature image;
the feature map comprises a first feature map, a second feature map, a third feature map and a fourth feature map;
The feature extraction network model comprises a first embedded block, a second embedded block, a third embedded block and a fourth embedded block which are sequentially connected;
The first embedded block, the second embedded block, the third embedded block and the fourth embedded block each comprise an initial convolution block and an STB basic block, wherein the initial convolution block is formed by stacking a 1×1 convolution layer, a 3×3 convolution layer and a 1×1 convolution layer, and the STB basic block comprises a multi-head attention mechanism;
The obtaining a feature map by adopting a feature extraction network model based on the obtained thermal image map of the target detection object comprises the following steps:
according to the thermal image, extracting features through the first embedded block to obtain a first feature image;
according to the first feature map, feature extraction is carried out through the second embedded block, and a second feature map is obtained;
According to the second feature map, feature extraction is carried out through the third embedded block, and a third feature map is obtained;
According to the third feature map, feature extraction is carried out through the fourth embedded block, and a fourth feature map is obtained;
Performing multi-scale feature communication processing based on the feature map to obtain a multi-scale feature communication map; carrying out multidimensional feature pooling processing based on the feature map to obtain multidimensional pooling features; based on the multi-scale feature communication graph and the multi-dimensional pooling features, a mask prediction model is adopted to obtain a prediction mask;
The multi-scale feature communication processing is performed based on the feature map to obtain a multi-scale feature communication map, which comprises the following steps:
performing image merging processing according to the first feature map, the second feature map, the third feature map and the fourth feature map, and determining a first merged feature map, a second merged feature map, a third merged feature map and a fourth merged feature map which correspond to the first feature map, the second feature map, the third feature map and the fourth feature map;
Performing autocorrelation processing according to the first combined feature map, the second combined feature map, the third combined feature map and the fourth combined feature map, and determining a correlation map;
performing multi-scale feature communication processing according to the correlation diagram to obtain a multi-scale feature communication diagram;
the step of carrying out multidimensional feature pooling processing based on the feature map to obtain multidimensional pooled features comprises the following steps:
performing image merging processing according to the first feature map, the second feature map, the third feature map and the fourth feature map, and determining a first merged feature map, a second merged feature map, a third merged feature map and a fourth merged feature map which correspond to the first feature map, the second feature map, the third feature map and the fourth feature map;
Respectively carrying out convolution processing according to the first merging feature map, the second merging feature map, the third merging feature map and the fourth merging feature map to obtain a first convolution feature map, a second convolution feature map, a third convolution feature map and a fourth convolution feature map which are corresponding to each other;
performing tensor stitching according to the first convolution feature map, the second convolution feature map, the third convolution feature map and the fourth convolution feature map to obtain stitching tensors;
Carrying out multidimensional feature pooling processing according to the splicing tensor to obtain multidimensional pooling features;
Training by adopting a continuous cognitive model based on the prediction mask, the multidimensional pooling features and the feature map after the pre-convolution until a preset condition is met to obtain a trained nondestructive testing model;
wherein, the total loss function of the continuous cognitive model is:

L_total = L_ce + L_pool + λ · L_strip

wherein L_total is the total loss function, L_ce is the cross entropy loss function, y_i is a preset label image, p_i is the predicted value of the continuous cognitive model, L_pool is the multidimensional feature pooling loss function, P_dhw is the multidimensional pooling result, y_dhw is the position corresponding to the defect feature, L_strip is the strip pooling loss function, P_dh is the strip pooling result, y_dh is the position corresponding to the defect feature, d, h and w are respectively the depth, height and width of the multi-scale feature communication map, and λ is a constant parameter.
2. The method of claim 1, wherein each convolution layer in the initial convolution block is followed by a BN layer and a ReLU activation function.
3. The method of claim 1, wherein the mask prediction model comprises a 1×1 convolution layer with softmax and an fupscale module, wherein the fupscale module comprises 3 1×1 convolution layers and 3 bilinear upsampling layers f×2, each 1×1 convolution layer being further followed by a ReLU activation function.
4. A nondestructive testing method, comprising:
acquiring a target thermal image of a target detection object; and
determining, based on the target thermal image, a target detection result of the target detection object by using a nondestructive testing model constructed by the nondestructive testing model construction method according to any one of claims 1-3.
5. A nondestructive testing apparatus, comprising:
a laser for generating infrared laser light to irradiate a target detection object;
an infrared imager for receiving an infrared thermal wave signal reflected by the target detection object and converting the infrared thermal wave signal into an electric signal;
a water cooler for dissipating heat generated by the laser;
an upper computer for performing nondestructive testing according to the electric signal; and
a memory and one or more processors, the memory having executable code stored therein, wherein the one or more processors, when executing the executable code, implement the nondestructive testing method of claim 4.
6. A computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the nondestructive testing method of claim 4.
CN202410166766.0A 2024-02-06 2024-02-06 Nondestructive testing model construction method, nondestructive testing device and medium Active CN117710379B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410166766.0A CN117710379B (en) 2024-02-06 2024-02-06 Nondestructive testing model construction method, nondestructive testing device and medium


Publications (2)

Publication Number Publication Date
CN117710379A CN117710379A (en) 2024-03-15
CN117710379B true CN117710379B (en) 2024-05-10

Family

ID=90157458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410166766.0A Active CN117710379B (en) 2024-02-06 2024-02-06 Nondestructive testing model construction method, nondestructive testing device and medium

Country Status (1)

Country Link
CN (1) CN117710379B (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107192689A (en) * 2017-04-28 2017-09-22 浙江必达科技有限公司 A kind of original packing milk powder lossless detection method based on multiple dimensioned tera-hertz spectra
CN111325748A (en) * 2020-03-20 2020-06-23 哈尔滨工业大学 Infrared thermal image nondestructive testing method based on convolutional neural network
CN111667011A (en) * 2020-06-08 2020-09-15 平安科技(深圳)有限公司 Damage detection model training method, damage detection model training device, damage detection method, damage detection device, damage detection equipment and damage detection medium
KR20210040853A (en) * 2020-06-30 2021-04-14 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. Product defect detection method and apparatus, electronic device and storage medium
CN115063348A (en) * 2022-05-17 2022-09-16 昭通亮风台信息科技有限公司 Part surface defect detection method, device, equipment and medium
CN115393741A (en) * 2022-07-28 2022-11-25 泰瑞数创科技(北京)股份有限公司 Ground feature classification artificial intelligence identification method and system based on unmanned aerial vehicle low-altitude sampling
CN115510319A (en) * 2022-09-22 2022-12-23 山西大学 Recommendation method and system based on potential interest multi-view fusion
CN116256392A (en) * 2023-03-07 2023-06-13 国家电投集团江苏电力有限公司 Coating defect pulse infrared thermal wave nondestructive detection device for offshore wind turbine generator
CN116402821A (en) * 2023-06-08 2023-07-07 湖南大学 Aircraft skin gluing quality defect detection method based on neural network
CN116559181A (en) * 2023-07-07 2023-08-08 杭州灵西机器人智能科技有限公司 Defect detection method, system, device and medium based on luminosity stereoscopic vision
CN116630323A (en) * 2023-07-25 2023-08-22 山东建筑大学 Automatic calculation method, system, medium and equipment for corrosion depth of dense metal
CN116824352A (en) * 2023-07-20 2023-09-29 安徽大学 Water surface floater identification method based on semantic segmentation and image anomaly detection
EP4280173A1 (en) * 2022-05-17 2023-11-22 Anhui NIO Autonomous Driving Technology Co., Ltd. Multi-task object detection method, electronic device, medium, and vehicle
CN117274768A (en) * 2023-07-31 2023-12-22 鹏城实验室 Training method of target detection network, target detection method and related device
CN117409190A (en) * 2023-12-12 2024-01-16 长春理工大学 Real-time infrared image target detection method, device, equipment and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CASPPNet: a chained atrous spatial pyramid pooling network for steel defect detection; Zhouzhou Zheng et al.; Measurement Science and Technology; 2022-05-03; Vol. 33; 1-9 *
Multi-scale triple-attention network for pixelwise crack segmentation; Lei Yang et al.; Automation in Construction; 2023-04-04; Vol. 150; 1-12 *
Multi-scale chip surface defect detection algorithm with enhanced spatial perception; Guo Weifeng et al.; Instrument Technique and Sensor; 2023 (No. 8); 120-126 *


Similar Documents

Publication Publication Date Title
EP3961484B1 (en) Medical image segmentation method and device, electronic device and storage medium
CN112233117A (en) New coronary pneumonia CT detects discernment positioning system and computing equipment
CN109285105B (en) Watermark detection method, watermark detection device, computer equipment and storage medium
WO2020046960A1 (en) System and method for optimizing damage detection results
CN108447061B (en) Commodity information processing method and device, computer equipment and storage medium
EP4246458A1 (en) System for three-dimensional geometric guided student-teacher feature matching (3dg-stfm)
CN111429482A (en) Target tracking method and device, computer equipment and storage medium
CN111239684A (en) Binocular fast distance measurement method based on YoloV3 deep learning
CN111798490B (en) Video SAR vehicle target detection method
CN115496971A (en) Infrared target detection method and device, electronic equipment and storage medium
CN115187530A (en) Method, device, terminal and medium for identifying ultrasonic automatic breast full-volume image
KR20230036030A (en) Method and system for detecting anomalies in images using a plurality of machine learning programs
CN114898357A (en) Defect identification method and device, electronic equipment and computer readable storage medium
CN116704324A (en) Target detection method, system, equipment and storage medium based on underwater image
CN117576029A (en) Binocular vision-based part defect detection and evaluation method and device
CN112990107B (en) Hyperspectral remote sensing image underwater target detection method and device and computer equipment
Wei et al. Panorama-to-model registration through integration of image retrieval and semantic reprojection
CN116205918B (en) Multi-mode fusion semiconductor detection method, device and medium based on graph convolution
CN117710379B (en) Nondestructive testing model construction method, nondestructive testing device and medium
CN107742114A (en) high spectrum image feature detection method and device
CN116758419A (en) Multi-scale target detection method, device and equipment for remote sensing image
Zhang et al. Shared contents alignment across multiple granularities for robust SAR-optical image matching
CN112862002A (en) Training method of multi-scale target detection model, target detection method and device
CN116563569B (en) Hybrid twin network-based heterogeneous image key point detection method and system
CN118298109B (en) Multi-mode electronic information system view processing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant