CN117670835A - Puncture damage detection method based on neural network - Google Patents

Puncture damage detection method based on neural network

Info

Publication number
CN117670835A
Authority
CN
China
Prior art keywords
puncture
puncture damage
network model
damage
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311670734.6A
Other languages
Chinese (zh)
Other versions
CN117670835B (en)
Inventor
张宏
刘梦真
黄广炎
徐媛媛
李豪天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Chongqing Innovation Center of Beijing University of Technology
Original Assignee
Beijing Institute of Technology BIT
Chongqing Innovation Center of Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT, Chongqing Innovation Center of Beijing University of Technology filed Critical Beijing Institute of Technology BIT
Priority to CN202311670734.6A priority Critical patent/CN117670835B/en
Publication of CN117670835A publication Critical patent/CN117670835A/en
Application granted granted Critical
Publication of CN117670835B publication Critical patent/CN117670835B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/094Adversarial learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/096Transfer learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/766Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30124Fabrics; Textile; Paper
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a puncture damage detection method based on a neural network, belonging to the technical fields of machine vision and artificial intelligence. The method comprises the following steps: obtaining augmented puncture damage data; constructing a twin adversarial network model and generating segmented puncture damage image data with the trained twin adversarial network model; constructing a puncture damage data set from the segmented puncture damage image data; constructing a classification network and training it on the puncture damage data set to obtain a classification network model; converting the classification network into a regression network by transfer learning and training it to obtain a regression network model; and acquiring a real-time puncture damage image, inputting it into the regression network model, and generating a puncture damage detection result. The invention enables rapid and accurate detection of puncture damage and provides a solution for the treatment and study of puncture injuries in fields such as emergency medicine and forensic science.

Description

Puncture damage detection method based on neural network
Technical Field
The invention belongs to the fields of machine vision and artificial intelligence, and particularly relates to a puncture damage detection method based on a neural network.
Background
Stab-resistant garments play an important role in personal protection, yet they are prone to various puncture damage during wear and use. When treating penetrating injuries, it is critical to accurately determine the nature and depth of the injury and the important tissues and structures that may be affected. Existing puncture damage detection relies mainly on manual inspection: an operator can only judge the severity of the injury from the wound morphology and estimate key parameters such as the initial puncture kinetic energy.
Manual inspection takes time and is subjective, which makes its results unreliable. Alternatively, dynamic impact experiments can provide some puncture damage data to assist the judgment, but such tests are costly and their results rarely cover real-world conditions, so their reference value is limited. In summary, existing methods suffer from low accuracy, high cost, poor stability, and difficulty of large-scale application, and cannot provide real-time, accurate puncture damage detection. More advanced techniques are therefore needed to improve the accuracy and reliability of puncture damage detection.
Disclosure of Invention
The invention aims to provide a puncture damage detection method based on a neural network that solves the above problems in the prior art.
In order to achieve the above object, the present invention provides a puncture damage detection method based on a neural network, including:
acquiring puncture damage image data through a dynamic puncture experiment, and processing the puncture damage image data to obtain augmented puncture damage data;
constructing a twin adversarial network model, training the twin adversarial network model based on the augmented puncture damage data, segmenting the augmented puncture damage data with the trained twin adversarial network model, and generating segmented puncture damage image data;
constructing a puncture damage data set according to the segmented puncture damage image data;
constructing a classification network, and training the classification network through the puncture damage data set to obtain a classification network model;
modifying the classification network into a regression network through a transfer learning method, and training the model to obtain a regression network model;
and acquiring a real-time puncture damage image, inputting the real-time puncture damage image into the regression network model, and generating a puncture damage detection result.
Preferably, the process of obtaining the augmented puncture damage data includes:
obtaining key parameters at different drop heights through a dynamic puncture experiment, and acquiring damage image data through an image acquisition experiment;
performing data enhancement on the damage image data to obtain enhanced image data;
and labeling the main features of the enhanced image data based on the key parameters to obtain the augmented puncture damage data.
Preferably, the process of constructing the twin adversarial network model based on the augmented puncture damage data includes:
building two identical convolutional neural network models, adding a BatchNorm2d layer and a nonlinear ReLU layer after each two-dimensional convolutional layer in the convolutional neural network models, and generating the twin adversarial network model.
Preferably, training the twin adversarial network model based on the augmented puncture damage data includes:
taking a puncture damage image sample and an intact image sample as an input sample pair, and inputting both simultaneously into the twin adversarial network model for feature extraction to generate feature masks;
comparing the generated feature masks with the labeled masks to obtain loss values;
and taking the average of the loss values of the two convolutional neural networks as the loss value of the twin adversarial network model, adjusting the network parameters accordingly, and obtaining the trained twin adversarial network model.
Preferably, the expression for the loss value of the twin adversarial network model is:
L_avg = (1/(2N)) Σ_{i=1}^{N} [(y1 - x1)² + (y2 - x2)²]
where L_avg is the loss value of the twin adversarial network, N is the number of batches of input sample pairs, (y1 - x1)² is the Euclidean distance between the puncture damage image and its corresponding mask, and (y2 - x2)² is the Euclidean distance between the intact image and its corresponding mask.
Preferably, the process of constructing the puncture damage data set from the segmented puncture damage image data includes:
the puncture damage data set is constructed from the segmented puncture damage images and the corresponding initial puncture kinetic energy, maximum number of penetration layers, and puncture peak force;
the segmented puncture damage images are used as training data, and the initial puncture kinetic energy, the maximum number of penetration layers, and the puncture peak force are used as labels of the training data.
Preferably, the process of obtaining the classification network model includes:
constructing a classification network, and training the classification network through the puncture damage data set to obtain the classification network model;
the input of the classification network model is puncture damage image data in the puncture damage data set, and the output is the initial puncture kinetic energy, maximum number of penetration layers, and puncture peak force corresponding to that puncture damage image in the puncture damage data set.
Preferably, the process of obtaining the regression network model includes:
migrating the structure and parameters of the classification network model into a regression network to generate a transferred network model, changing the output layer of the transferred network model to regression output neurons, and training the transferred network model with the initial puncture kinetic energy, the maximum number of penetration layers, and the puncture peak force as labels to generate the regression network model.
The invention has the technical effects that:
Puncture damage samples under different gradients of puncture kinetic energy are obtained through dynamic puncture experiments, puncture damage image data of the samples are captured with image acquisition equipment, and the image data are augmented; the augmented puncture damage image data are used to train a twin adversarial network model; the trained twin adversarial network model segments and extracts the augmented puncture damage image data; a puncture damage data set is constructed; a classification network model is trained on the puncture damage data set; a regression network model is trained on the basis of the classification network model; and any puncture damage can then be detected with the twin adversarial network model and the regression network model.
The invention enables rapid and accurate detection of puncture damage and provides a solution for the treatment and study of puncture injuries in fields such as emergency medicine and forensic science.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, illustrate and explain the application and are not to be construed as limiting the application. In the drawings:
fig. 1 is a schematic flow chart of a neural network-based puncture damage detection method in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a twin adversarial network in accordance with an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a puncture damage detection device based on a neural network in an embodiment of the present invention.
Detailed Description
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer executable instructions, and that although a logical order is illustrated in the flowcharts, in some cases the steps illustrated or described may be performed in an order other than that illustrated herein.
Example 1
As shown in fig. 1, the embodiment provides a puncture damage detection method based on a neural network, which includes:
acquiring puncture damage image data through a dynamic puncture experiment, and processing the puncture damage image data to obtain augmented puncture damage data;
constructing a twin adversarial network model, training the twin adversarial network model based on the augmented puncture damage data, segmenting the augmented puncture damage data with the trained twin adversarial network model, and generating segmented puncture damage image data;
constructing a puncture damage data set according to the segmented puncture damage image data;
constructing a classification network, and training the classification network through the puncture damage data set to obtain a classification network model;
modifying the classification network into a regression network through a transfer learning method, and training the model to obtain a regression network model;
and acquiring a real-time puncture damage image, inputting the real-time puncture damage image into the regression network model, and generating a puncture damage detection result.
In this embodiment, the puncture damage detection method based on the neural network specifically includes the steps of:
and step 1, data acquisition. Puncture damage samples under different gradient puncture kinetic energy are obtained through dynamic puncture experiments, puncture damage image data of the puncture damage samples are obtained through image acquisition equipment, and amplified puncture damage image data are obtained through data rotation, shearing and other data enhancement algorithms. In the dynamic puncture experiment process, the maximum number of penetration layers, the puncture peak force and the puncture initial kinetic energy corresponding to the puncture damage sample are recorded.
Step 2: training the twin adversarial network. The twin adversarial network model is trained with the augmented puncture damage image data; its input is the augmented puncture damage image data and its output is the segmented puncture damage image data, in which the puncture damage region is marked.
Step 3: image segmentation. The trained twin adversarial network model segments and extracts the augmented puncture damage image data to obtain segmented puncture damage image data with a size of n pixels.
Step 4: data set preparation. The puncture damage data set is constructed from the segmented puncture damage images and the corresponding initial puncture kinetic energy, maximum number of penetration layers, and puncture peak force. The segmented puncture damage image data are used as training data, and the initial puncture kinetic energy, the maximum number of penetration layers, and the puncture peak force are used as labels of the training data.
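A minimal PyTorch dataset sketch for this step is shown below; the sample layout, field names, and input resolution are assumptions, since the description only specifies segmented images paired with three numeric labels.

```python
# Hypothetical dataset sketch: each sample is a segmented puncture damage image paired
# with three labels (initial kinetic energy, penetration layers, peak force).
import torch
from torch.utils.data import Dataset
from torchvision import transforms
from PIL import Image

class PunctureDamageDataset(Dataset):
    def __init__(self, samples):
        # samples: list of (image_path, kinetic_energy, layers, peak_force)
        self.samples = samples
        self.to_tensor = transforms.Compose([
            transforms.Resize((64, 64)),   # assumed input size of the downstream networks
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, energy, layers, force = self.samples[idx]
        image = self.to_tensor(Image.open(path).convert("RGB"))
        labels = torch.tensor([energy, layers, force], dtype=torch.float32)
        return image, labels
```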
Step 5: training the classification network. The classification network is trained with the puncture damage data set to obtain the classification network model. The input of the classification network model is puncture damage image data in the puncture damage data set, and the output is the initial puncture kinetic energy, maximum number of penetration layers, and puncture peak force corresponding to that puncture damage image.
Step 6: training the regression network. Starting from the obtained classification network model, the classification network is converted into a regression network using transfer learning and then retrained to obtain the regression network model. The regression network model takes arbitrary puncture damage image data as input and outputs the corresponding initial puncture kinetic energy, maximum number of penetration layers, and puncture peak force.
Step 7: detection of an arbitrary puncture injury. A puncture damage sample under arbitrary puncture kinetic energy is produced manually, and its image data are captured with an image acquisition device. The trained twin adversarial network model segments and extracts the puncture damage region from this image data to obtain a puncture damage image. The puncture damage image is input into the regression network model to obtain the initial puncture kinetic energy, the maximum number of penetration layers, and the puncture peak force, which are then overlaid on the puncture damage image and displayed on an interface.
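The end-to-end flow of step 7 can be sketched as follows; the model interfaces, the 0.5 mask threshold, the 64x64 regression input size, and the de-normalization step are assumptions made for illustration.

```python
# Hypothetical end-to-end inference sketch: segment the damage region with the trained
# twin network branch, regress the three key parameters, and undo the max-min scaling.
import torch
import torch.nn.functional as F

@torch.no_grad()
def detect_puncture_damage(raw_image, twin_net, regression_net, label_min, label_max):
    """raw_image: float tensor (C, H, W); label_min/label_max: tensors of shape (3,)."""
    twin_net.eval()
    regression_net.eval()

    mask = twin_net(raw_image.unsqueeze(0))                       # segmentation mask
    mask = F.interpolate(mask, size=raw_image.shape[-2:])         # back to image resolution
    damage_patch = raw_image.unsqueeze(0) * (mask > 0.5)          # keep only the damage region
    damage_patch = F.interpolate(damage_patch, size=(64, 64))     # assumed regression input size

    normalized = regression_net(damage_patch).squeeze(0)          # three values in [0, 1]
    energy, layers, force = normalized * (label_max - label_min) + label_min
    return {"initial_kinetic_energy": float(energy),
            "max_penetration_layers": round(float(layers)),
            "puncture_peak_force": float(force)}
```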
In a further refinement of the scheme, the detailed data acquisition process is as follows. First, the values of three key parameters, namely the initial puncture kinetic energy, the maximum number of penetration layers, and the puncture peak force, are obtained at 9 drop heights through dynamic puncture experiments, and 5 groups of puncture damage image data are collected at each of the 9 drop heights through image acquisition experiments. The 45 images are preprocessed to meet the requirements of network model training. The initial image resolution is 2349x2311. The puncture damage region is cropped to a 512x512 image, and the image data are expanded fivefold by random cropping, flipping, rotation, and other image data enhancement operations to enrich the data. In total, 225 image samples of the puncture damage region are collected, together with 225 corresponding intact image samples without puncture damage, giving a data set of 450 image samples. The image samples containing a puncture damage region are then labeled: the 9 drop heights are assigned labels 1 to 9 from low to high, and the main features of each puncture damage region are marked by drawing a circle of radius r centered at the geometric center of the region, where r = 18 + id × 2, id is the label value of the height class, and r is in pixels. In this way, features are labeled for the 10 sample classes (including the class without puncture damage), generating the masks required for training the network model. The mask is used to locate the target feature region in the image effectively.
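The circular-mask labeling described above can be sketched as follows; the radius rule r = 18 + id × 2 is taken from the description, while the mask resolution (the 512x512 crop size) and the way the damage center is supplied are assumptions.

```python
# Hypothetical mask-generation sketch for the labeling rule r = 18 + id * 2.
# Center coordinates would in practice come from the geometric center of the damage region.
import numpy as np

def make_circle_mask(center_xy, height_label_id, size=(512, 512)):
    """Binary mask with a filled circle of radius 18 + 2 * id pixels at the crop resolution."""
    radius = 18 + height_label_id * 2
    yy, xx = np.ogrid[:size[0], :size[1]]
    cx, cy = center_xy
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    return mask.astype(np.float32)

# Example: height label id 3 gives a radius of 24 pixels around the assumed damage center.
mask = make_circle_mask(center_xy=(256, 256), height_label_id=3)
```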
In a further refinement of the scheme, the structure of the twin adversarial network is as follows. The twin adversarial network consists mainly of two identical convolutional neural networks (CNNs) in parallel. Each branch is composed mainly of 11 two-dimensional convolutional layers and 3 max-pooling layers, and each two-dimensional convolutional layer is followed by a BatchNorm2d layer and a nonlinear ReLU layer. The BatchNorm2d layer normalizes the sample features of each mini-batch so that the mean of the output features is close to 0 and the variance is close to 1; this batch normalization mitigates the vanishing-gradient and exploding-gradient problems during training and improves the convergence speed and stability of the network. Compared with the sigmoid activation commonly used in deep learning, whose derivative approaches 0 at both ends and therefore easily causes vanishing gradients, the ReLU activation sets part of the neuron outputs to 0, reduces the interdependence of network parameters, and effectively mitigates overfitting. As the image passes through the CNN, its resolution is halved after each max-pooling layer. After the third pooling stage, the features pass through an attention layer consisting of two fully connected layers, i.e. a squeeze (Squeeze) and excitation (Excitation) mechanism. Through this mechanism the network learns to use global information to selectively emphasize informative features and suppress less useful ones, and the attention mechanism is computationally lightweight, adding only a limited overhead.
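A minimal PyTorch sketch of one branch following this description is given below; the Conv–BatchNorm2d–ReLU pattern, the 11 convolutional layers, the 3 max-pooling layers, and the squeeze-and-excitation attention follow the text, while the channel widths and the exact placement of layers between pooling stages are assumptions.

```python
# Hypothetical sketch of one twin-branch CNN: Conv2d + BatchNorm2d + ReLU blocks,
# three max-pooling stages, and a squeeze-and-excitation (SE) attention layer.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class SEBlock(nn.Module):
    """Squeeze-and-excitation: two fully connected layers over channel statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = x.mean(dim=(2, 3))                       # squeeze: global average pooling
        w = self.fc(w).view(x.size(0), -1, 1, 1)
        return x * w                                 # excitation: reweight channels

class TwinBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(3, 32), conv_block(32, 32), nn.MaxPool2d(2),      # pooling stage 1
            conv_block(32, 64), conv_block(64, 64), nn.MaxPool2d(2),     # pooling stage 2
            conv_block(64, 128), conv_block(128, 128), nn.MaxPool2d(2),  # pooling stage 3
            SEBlock(128),                                                # attention after stage 3
            conv_block(128, 128), conv_block(128, 64),
            conv_block(64, 32), conv_block(32, 16),
            nn.Conv2d(16, 1, kernel_size=1), nn.Sigmoid(),               # 11th conv: 1-channel mask
        )

    def forward(self, x):
        # x: (N, 3, 512, 512) -> feature mask of shape (N, 1, 64, 64)
        return self.features(x)
```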
In a further refinement of the scheme, the training process of the twin adversarial network is as follows. During training, a puncture damage image sample and an intact image sample are combined into an input sample pair and fed into the twin adversarial network simultaneously. Each CNN branch extracts features and outputs a generated feature mask at a resolution of 64x64. Each generated mask is compared with its labeled mask to obtain a loss value, the loss values of the two CNNs are averaged, and the average is returned as the loss value of the twin adversarial network to the weight-sharing CNN branches to adjust the network parameters. The loss function of the twin adversarial network is
L_avg = (1/(2N)) Σ_{i=1}^{N} [(y1 - x1)² + (y2 - x2)²]
where L_avg is the loss value of the twin adversarial network, N is the number of batches of input sample pairs, (y1 - x1)² is the Euclidean distance between the puncture damage image and its corresponding mask, and (y2 - x2)² is the Euclidean distance between the intact image and its corresponding mask. Compared with conventional CNN training, this strategy effectively addresses sample imbalance under small-sample conditions. In the data set constructed here, the positive and negative samples differ significantly: a traditional CNN model can take only one sample at a time, and alternately feeding positive and negative samples produces an uneven label distribution, causing large fluctuations of the network parameters during adjustment; with relatively few samples, the model does not have enough information to learn their features accurately, which greatly reduces its generalization ability. The twin adversarial network instead combines a positive and a negative sample into one input pair, so the model learns the features of both simultaneously and reaches suitable parameter values faster, improving learning efficiency. In addition, the loss is evaluated separately for each CNN branch, averaged, and fed back to the network as the overall loss value for parameter adjustment, which effectively stabilizes gradient descent and avoids overfitting.
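A training-loop sketch under this scheme might look as follows; the paired input, the shared weights, and the averaged Euclidean (MSE-style) mask loss follow the description, while the optimizer, learning rate, and data-loader format are assumptions.

```python
# Hypothetical training sketch for the twin adversarial network: one weight-shared
# branch processes both images of each pair, and the two mask losses are averaged.
import torch
import torch.nn.functional as F

def train_twin_network(branch, loader, epochs=100, lr=1e-3, device="cpu"):
    """loader yields (damage_img, damage_mask, intact_img, intact_mask) batches."""
    branch = branch.to(device)
    optimizer = torch.optim.Adam(branch.parameters(), lr=lr)

    for epoch in range(epochs):
        for damage_img, damage_mask, intact_img, intact_mask in loader:
            damage_img, damage_mask = damage_img.to(device), damage_mask.to(device)
            intact_img, intact_mask = intact_img.to(device), intact_mask.to(device)

            # The same (weight-shared) branch plays the role of both twin CNNs.
            loss_damage = F.mse_loss(branch(damage_img), damage_mask)   # (y1 - x1)^2 term
            loss_intact = F.mse_loss(branch(intact_img), intact_mask)   # (y2 - x2)^2 term
            loss_avg = 0.5 * (loss_damage + loss_intact)                # averaged loss L_avg

            optimizer.zero_grad()
            loss_avg.backward()
            optimizer.step()
        # Training could stop once the average loss drops below 0.001, as in Example 2.
```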
In a further refinement of the scheme, the structure of the classification network is as follows. The network consists mainly of 2 two-dimensional convolutional layers, 2 max-pooling layers, and 2 fully connected layers. The convolutional and max-pooling layers in the first stages further extract features and reduce the dimensionality of the input segmented image, and the fully connected layers apply a linear transformation to the extracted features, finally reducing the output to a one-dimensional vector of per-class "scores". Each sample is classified according to these scores; the final fully connected layer combines the extracted features and maps them to the class probability space, enabling the neural network to perform the classification task.
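The following sketch illustrates such a classification head; the layer counts follow the description, while the channel widths, the input size, and the class count (9 drop-height classes plus the undamaged class) are assumptions consistent with the data described above.

```python
# Hypothetical classification network: 2 conv layers, 2 max-pooling layers,
# 2 fully connected layers, ending in per-class scores. Widths are assumptions.
import torch.nn as nn

class PunctureClassifier(nn.Module):
    def __init__(self, num_classes=10):          # 9 drop-height classes + undamaged
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(inplace=True),  # assumes 64x64 inputs
            nn.Linear(128, num_classes),                          # per-class "scores"
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```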
In a further refinement of the scheme, the structure of the regression network is as follows. The adjusted structure and parameters of the classification network (except the last layer) are transferred into the regression network, which ensures that image feature extraction in the regression network is accurate and efficient. The last layer (output layer) of the classification network is then replaced with regression output neurons that produce continuous predictions, and the network is trained with the three key parameters (initial puncture kinetic energy, maximum number of penetration layers, and puncture peak force) as labels. Because the numerical ranges of the impact kinetic energy, the peak force, and the number of penetration layers differ greatly, the three labels are max-min normalized so that their values lie between 0 and 1; the regression model then quantifies the three key parameter labels.
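A transfer-learning sketch for this step is given below; the three-output head and the max-min label scaling follow the description, while the class and attribute names reuse the hypothetical classifier sketch above and are assumptions.

```python
# Hypothetical transfer-learning sketch: reuse the trained classifier's feature layers
# and replace its output layer with 3 regression neurons (energy, layers, peak force).
import copy
import torch
import torch.nn as nn

def build_regression_net(trained_classifier: nn.Module) -> nn.Module:
    regressor = copy.deepcopy(trained_classifier)               # keep transferred weights
    last = regressor.classifier[-1]                             # final fully connected layer
    regressor.classifier[-1] = nn.Linear(last.in_features, 3)   # 3 continuous outputs
    return regressor

def minmax_normalize(labels: torch.Tensor):
    """Scale each of the 3 label columns to [0, 1]; keep min/max to undo the scaling."""
    lo, hi = labels.min(dim=0).values, labels.max(dim=0).values
    return (labels - lo) / (hi - lo), lo, hi
```

The stored minimum and maximum values can then be used to map the regression outputs back to physical units at inference time.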
In a second aspect, embodiments of the present application provide a puncture damage detection device based on a neural network, where the device includes:
the data acquisition module is used for acquiring different puncture damage images;
the image segmentation module is used for segmenting a puncture damage region in the puncture damage image by using the twin adversarial network;
the key parameter quantification module is used for quantifying the three labels, namely the initial puncture kinetic energy, the maximum number of penetration layers, and the puncture peak force, from the features of the puncture damage region by using the regression network;
the display module is used for displaying the segmentation results of the different puncture damage images and the quantified key parameters on a display screen of the device in real time;
the processor module is used for running the algorithms deployed in the puncture damage quantification method, such as the twin adversarial network and the regression network;
and the storage module is used for storing, in the device, results such as the puncture damage image segmentation results of the twin adversarial network and the key parameter quantification results.
In a third aspect, embodiments of the present application provide a computer device including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the method or module described above when the processor executes the computer program.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above method.
Example 2
This embodiment provides a puncture damage detection method based on a neural network, including:
step 1, data acquisition. Firstly, values of three key parameters, namely initial puncture kinetic energy, maximum penetration layer number and puncture peak force, at 9 falling heights are acquired through a dynamic puncture experiment. And 5 groups of puncture damage image data under 9 falling heights are obtained through an image acquisition experiment. The 45 images are preprocessed to adapt to the requirements of network model training. The initial image resolution acquired was 2349x2311. The puncture damage region image is cut to obtain an image with 512x512 resolution, and the image data is amplified by 5 times in a mode of random cutting, overturning, rotating and other image data enhancement to enrich the data. Finally, 225 image samples of the puncture injury region were collected. Meanwhile, 225 intact image samples without puncture damage are also collected correspondingly. A total of 450 image sample data sets were obtained. The image samples in the dataset containing the puncture lesion area are then labeled. The 9 kinds of falling heights are set to 9 kinds of labels according to 1-9 from low to high. And marking the main characteristics of the whole puncture damage region by taking the geometric center of the puncture damage region in the image sample as the circle center and drawing a circle with the radius r. The calculation formula of r is r=18+idx2, wherein id is a label value of each type of height, and the unit of r is a pixel. In this way, 10 types of sample data (including samples without puncture damage) are marked with features, and masks required for training the network model are generated.
Step 2: training the twin adversarial network for puncture damage image segmentation. The twin adversarial network segments the acquired puncture damage images and extracts the features of the puncture damage region; its structure is shown schematically in fig. 2. During training, a puncture damage image sample and an intact image sample are combined into an input sample pair and fed into the twin adversarial network simultaneously. Each CNN branch extracts features and outputs a generated feature mask at a resolution of 64x64. Each generated mask is compared with its labeled mask to obtain a loss value, the two loss values are averaged, and the average is returned as the loss value of the twin adversarial network to the weight-sharing CNN branches to adjust the network parameters. Training continues until the loss L_avg is no higher than 0.001. Because the proposed twin adversarial network performs very well on small image segmentation data sets, it extracts image information as accurately as possible and creates better conditions for the subsequent classification and regression networks.
Step 3: data set preparation. The puncture damage data set is constructed from the segmented puncture damage images and the corresponding initial puncture kinetic energy, maximum number of penetration layers, and puncture peak force. The segmented puncture damage image data are used as training data, and the initial puncture kinetic energy, the maximum number of penetration layers, and the puncture peak force are used as labels of the training data.
Step 4: training the classification network with the segmented puncture damage images and the key parameters of the puncture process. Given a puncture damage image as input, the classification network outputs the corresponding initial puncture kinetic energy, maximum number of penetration layers, and puncture peak force present in the puncture damage data set.
Step 5: setting up the regression network from the classification network by transfer learning. The adjusted structure and parameters of the classification network (except the last layer) are transferred into the regression network, which ensures that image feature extraction in the regression network is accurate and efficient. The last layer (output layer) of the classification network is replaced with regression output neurons that produce continuous prediction values, and the network is trained with the three key parameters (initial puncture kinetic energy, maximum number of penetration layers, and puncture peak force) as labels. Because the numerical ranges of the impact kinetic energy, the peak force, and the number of penetration layers differ greatly, the three labels are max-min normalized to the range 0 to 1 before the regression model quantifies them. The prediction results of the regression network for the three key parameter labels are shown in Table 1; the predicted values are very close to the normalized values.
Step 6: displaying the puncture damage quantification results. After the puncture damage quantification methods based on the twin adversarial network and the regression network are integrated, each quantification result is displayed on an interface, realizing real-time quantification of puncture damage images.
TABLE 1
The puncture damage detection method and device based on a neural network provided by the embodiments of the application were verified in a practical application scenario:
Five testers with different body types were selected to perform random knife-stab tests on the aramid fiber reinforced polymer composite (AFRP) used to train the model in this patent. The performance of the puncture damage prediction model was evaluated from the puncture damage images and the numbers of penetration layers obtained in the tests: the model predicted the number of penetration layers, and since the actual number of penetration layers for the 5 participants could be obtained directly, the predicted results were compared with the actual results. The comparison results are shown in Table 2. Within the range of the acquired data, the prediction accuracy of the puncture damage prediction model reaches 88.57%.
TABLE 2
As shown in fig. 3, the device used mainly comprises the following modules: the data acquisition module is used for acquiring different puncture damage images;
the image segmentation module is used for segmenting a puncture damage region in the puncture damage image by using the twin adversarial network;
the key parameter quantification module is used for quantifying the three labels, namely the initial puncture kinetic energy, the maximum number of penetration layers, and the puncture peak force, from the features of the puncture damage region by using the regression network;
the display module is used for displaying the segmentation results of the different puncture damage images and the quantified key parameters on a display screen of the device in real time;
the processor module is used for running the algorithms deployed in the puncture damage quantification method, such as the twin adversarial network and the regression network;
and the storage module is used for storing, in the device, results such as the puncture damage image segmentation results of the twin adversarial network and the key parameter quantification results.
The foregoing is merely a preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (8)

1. A puncture damage detection method based on a neural network, characterized by comprising the following steps:
acquiring puncture damage image data through a dynamic puncture experiment, and processing the puncture damage image data to obtain augmented puncture damage data;
constructing a twin adversarial network model, training the twin adversarial network model based on the augmented puncture damage data, segmenting the augmented puncture damage data with the trained twin adversarial network model, and generating segmented puncture damage image data;
constructing a puncture damage data set according to the segmented puncture damage image data;
constructing a classification network, and training the classification network through the puncture damage data set to obtain a classification network model;
modifying the classification network into a regression network through a transfer learning method and performing model training to obtain a regression network model;
and acquiring a real-time puncture damage image, inputting the real-time puncture damage image into the regression network model, and generating a puncture damage detection result.
2. The neural network-based puncture damage detection method of claim 1, wherein the process of obtaining the augmented puncture damage data comprises:
obtaining key parameters at different drop heights through a dynamic puncture experiment, and acquiring damage image data through an image acquisition experiment;
performing data enhancement on the damage image data to obtain enhanced image data;
and labeling the main features of the enhanced image data based on the key parameters to obtain the augmented puncture damage data.
3. The neural network-based puncture damage detection method of claim 1, wherein the process of constructing the twin adversarial network model based on the augmented puncture damage data comprises:
building two identical convolutional neural network models, adding a BatchNorm2d layer and a nonlinear ReLU layer after each two-dimensional convolutional layer in the convolutional neural network models, and generating the twin adversarial network model.
4. The neural network-based puncture damage detection method of claim 1, wherein training the twin adversarial network model based on the augmented puncture damage data comprises:
taking a puncture damage image sample and an intact image sample as an input sample pair, and inputting both simultaneously into the twin adversarial network model for feature extraction to generate feature masks;
comparing the generated feature masks with the labeled masks to obtain loss values;
and taking the average of the loss values of the two convolutional neural networks as the loss value of the twin adversarial network model, adjusting the network parameters accordingly, and obtaining the trained twin adversarial network model.
5. The neural network-based puncture damage detection method of claim 1, wherein the expression for the loss value of the twin adversarial network model is:
L_avg = (1/(2N)) Σ_{i=1}^{N} [(y1 - x1)² + (y2 - x2)²]
where L_avg is the loss value of the twin adversarial network, N is the number of batches of input sample pairs, (y1 - x1)² is the Euclidean distance between the puncture damage image and its corresponding mask, and (y2 - x2)² is the Euclidean distance between the intact image and its corresponding mask.
6. The neural network-based puncture damage detection method of claim 1, wherein constructing a puncture damage data set from the segmented puncture damage image data comprises:
the puncture damage data set is constructed from the segmented puncture damage images and the corresponding initial puncture kinetic energy, maximum number of penetration layers, and puncture peak force;
the segmented puncture damage images are used as training data, and the initial puncture kinetic energy, the maximum number of penetration layers, and the puncture peak force are used as labels of the training data.
7. The neural network-based puncture damage detection method of claim 1, wherein the process of obtaining the classification network model comprises:
constructing a classification network, and training the classification network through the puncture damage data set to obtain the classification network model;
the input of the classification network model is puncture damage image data in the puncture damage data set, and the output is the initial puncture kinetic energy, maximum number of penetration layers, and puncture peak force corresponding to that puncture damage image in the puncture damage data set.
8. The neural network-based puncture damage detection method of claim 1, wherein the process of obtaining a regression network model comprises:
migrating the structure and parameters of the classification network model into a regression network to generate a transferred network model, changing the output layer of the transferred network model to regression output neurons, and training the transferred network model with the initial puncture kinetic energy, the maximum number of penetration layers, and the puncture peak force as labels to generate the regression network model.
CN202311670734.6A 2023-12-07 2023-12-07 Puncture damage detection method based on neural network Active CN117670835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311670734.6A CN117670835B (en) 2023-12-07 2023-12-07 Puncture damage detection method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311670734.6A CN117670835B (en) 2023-12-07 2023-12-07 Puncture damage detection method based on neural network

Publications (2)

Publication Number Publication Date
CN117670835A true CN117670835A (en) 2024-03-08
CN117670835B CN117670835B (en) 2024-07-05

Family

ID=90070998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311670734.6A Active CN117670835B (en) 2023-12-07 2023-12-07 Puncture damage detection method based on neural network

Country Status (1)

Country Link
CN (1) CN117670835B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117994346A (en) * 2024-04-03 2024-05-07 华中科技大学同济医学院附属协和医院 Digital twinning-based puncture instrument detection method, system and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150407A (en) * 2019-10-30 2020-12-29 重庆大学 Deep learning detection method and system for inclusion defect of aerospace composite material of small sample
CN113160200A (en) * 2021-04-30 2021-07-23 聚时科技(上海)有限公司 Industrial image defect detection method and system based on multitask twin network
WO2022198866A1 (en) * 2021-03-22 2022-09-29 腾讯云计算(北京)有限责任公司 Image processing method and apparatus, and computer device and medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112150407A (en) * 2019-10-30 2020-12-29 重庆大学 Deep learning detection method and system for inclusion defect of aerospace composite material of small sample
WO2022198866A1 (en) * 2021-03-22 2022-09-29 腾讯云计算(北京)有限责任公司 Image processing method and apparatus, and computer device and medium
CN113160200A (en) * 2021-04-30 2021-07-23 聚时科技(上海)有限公司 Industrial image defect detection method and system based on multitask twin network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RISHENG LIU et al.: "Twin Adversarial Contrastive Learning for Underwater Image Enhancement and Beyond", IEEE Transactions on Image Processing, vol. 31, 18 July 2022 (2022-07-18), pages 4922-4936, XP011915201, DOI: 10.1109/TIP.2022.3190209 *
LIN Yanbin: "Ultra-short-term solar energy forecasting based on heterogeneous data fusion", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 2020, 15 July 2020 (2020-07-15), pages 041-1 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117994346A (en) * 2024-04-03 2024-05-07 华中科技大学同济医学院附属协和医院 Digital twinning-based puncture instrument detection method, system and storage medium

Also Published As

Publication number Publication date
CN117670835B (en) 2024-07-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant