CN112381108A - Bullet trace similarity recognition method and system based on graph convolution neural network deep learning - Google Patents
- Publication number: CN112381108A (application CN202010345147.XA / CN202010345147A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
- G06F18/2415: Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio
- G06N3/045: Neural networks; combinations of networks
- G06N3/08: Neural networks; learning methods
- G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners
- G06V10/464: Salient features using a plurality of salient features, e.g. bag-of-words (BoW) representations
Abstract
The invention discloses a bullet trace similarity recognition method and system based on graph convolutional neural network deep learning, belonging to the technical field of criminal investigation. Through the two steps of trace feature extraction and mapping, and graph convolutional neural network training and similarity recognition, the features of the bullet trace are extracted and the model is trained and used for recognition; training with the graph convolutional neural network improves recognition accuracy.
Description
Technical Field
The invention belongs to the technical field of criminal investigation, and particularly relates to a bullet trace similarity recognition method and system based on graph convolutional neural network deep learning.
Background
The rifling trace of a gun and bullet is a concave striated trace (linear trace) formed on the surface of the bullet jacket by the squeezing, shearing and scraping action of the inner surface of the barrel as the bullet is forced through it. Because the pressure exerted on the jacket by the grooves and lands of the rifling differs, and the two edges of each land shear and scrape the jacket, the part of the jacket in direct contact with the land surface is compressed and deformed into a depression, clearly distinct from the part contacting the groove surface. China implements a strict firearm control policy, and service firearms are registered and filed; the number of such firearms is huge (tens of thousands in an ordinary prefecture-level city).
Compared with the traditional approach of observing under a microscope and manually comparing morphological features, the image recognition and three-dimensional scanning technologies that have emerged in recent years provide new solutions for the quantitative examination of rifling linear trace damage.
In recent years, image processing and three-dimensional scanning technologies have been widely applied to linear trace examination; however, adverse factors such as the random expression of field trace features, complex algorithm structures and large file sizes greatly reduce their practical applicability and severely limit their practical value.
Disclosure of Invention
The method realizes feature extraction of the bullet traces and training and recognition of the model through two steps: a trace feature extraction and mapping step, and a graph convolutional neural network training and similarity recognition step; training with the graph convolutional neural network improves recognition accuracy.
To achieve this purpose, the invention is realized by the following technical scheme: the bullet trace similarity recognition method and system based on graph convolutional neural network deep learning is used for criminal investigation, bullet trace detection and other scenarios requiring trace comparison.
Preferably, the trace feature extraction and mapping step includes the following steps:
S1: perform a single transverse scan of the section trace of the cable cutter to be examined with a single-point laser trace detection device to obtain a one-dimensional discretized sequence f(n), where n = 0, 1, ..., N and N is the number of sampling points;
S2: perform an m-layer multi-scale wavelet transform on f(n) to obtain the wavelet components at different time scales:
where a_m is the approximation data of the m-th layer, d_i is the detail data of the i-th layer, and the scale is S = 2^m;
S3: let the parameterized contour curve of d_i be β: D → R², where D is the parameter domain and R is the set of real numbers; define ‖·‖ as the Euclidean 2-norm on R², and define a continuous mapping q: D → R². The shape of β is defined using the square-root velocity function q(t) = β'(t)/√(‖β'(t)‖). For each q there is a curve β defined by its square-root velocity function, recovered through β(t) = β(0) + ∫₀ᵗ q(s)‖q(s)‖ ds. The β curve is scaled to unit length to achieve scale invariance. At this point, the contour curve β is represented as a point x_i on the unit hypersphere of the pre-shape space, waiting to be mapped into the embedding layer of the convolutional neural network;
S4: repeat steps S1 to S3 for all M cable-cutter section detection traces to obtain the unit-hypersphere points of the mapped contour curves corresponding to each of the M traces, forming a sample set X = {x1, x2, ..., xM}.
Preferably, the graph convolutional neural network training comprises 1) establishing a training set and 2) tuning parameters and building the graph convolutional neural network model. The specific method of step 2) is as follows. Let G = (V, E), where V is the set of nodes and E is the set of edges, i.e. E ⊆ V × V. The training model consists of two parts: 1) a GCN component responsible for sampling all node information within the K-order neighborhood; 2) an autoencoder (AE) component that extracts hidden features from the activation matrix A learned by the GCN component and, combined with Laplacian eigenmaps (LE), preserves the node cluster structure. Centered on each node v_i of the training model, the GCN component uses a graph convolutional neural network to sample the structure and feature information of all nodes within K steps, i.e. to encode the K-order neighborhood information, and, trained with the node labels, generates the activation matrix A used as the input of the autoencoder component. Through supervised learning based on node labels, the GCN simultaneously encodes the local structure and feature information of the network, omitting secondary structure information outside the K-order neighborhood that has little influence on the generated low-dimensional node vectors. The activation matrix A learned by the GCN is used as the input of the autoencoder, which further extracts feature information from A by unsupervised learning and, combined with Laplacian eigenmaps, maps the original network to a lower-dimensional space.
Preferably, the similarity recognition comprises the following steps:
S1: form triplets: randomly select a sample x_p1 from the training sample set X, then randomly select a sample x_p2 formed by the same tool as x_p1 and a sample x_o formed by a different tool, forming a triplet T = (x_p1, x_p2, x_o); f(x_i) is the embedding of x_i, and the dimension of the embedding layer is controlled by the size of the last layer of the network branch;
S2: triplet selection and data enhancement: evaluate the four strategies of full contour, contour rearrangement, contour segmentation and patch, and select the most suitable one through actual testing to implicitly define the relevant features and the features to be suppressed by the convolutional neural network, avoiding the situation where, due to weight sharing in the convolutional neural network, all samples are suppressed or samples are distinguished only by local features;
S3: build a graph convolutional neural network based on triplet loss, formed by connecting three parallel convolutional neural network branches to a triplet loss layer;
The distances between all samples are used, and the Softmax layer and the root-mean-square criterion are used to realize Δ+ satisfying Δ+ < Δ1+ and Δ+ < Δ* = min(Δ1-, Δ2-), so as to simplify the training sample selection process; the loss is defined as:
The L2 norm is used to evaluate the distance between marks in the embedding layer, and the loss function is used to minimize the local difference between matched marks, completing the similarity calculation.
Preferably, for the full contour in S2, random vertical cropping is adopted to increase sample variability during training, and center cropping is used for similarity calculation.
Preferably, for contour rearrangement in S2, the negative and positive samples are randomly rearranged by the same factor, the rearrangement is applied simultaneously across the whole triplet, and center cropping without rearrangement is used for the identity calculation.
Preferably, for contour segmentation and random contour cropping in S2, the positive and negative samples are analyzed independently, and contour segmentation is used to pre-train the lower layers of the full-contour triplet network to complete the identity calculation.
Preferably, for the patch strategy in S2, random blocks are cut out of the input contour, similar to contour segments; the positive and negative samples are ensured not to overlap, and horizontal inversion of the samples is performed at random.
Preferably, step S3 of building the graph convolutional neural network based on triplet loss includes structural optimization of the convolutional neural network and establishment of a ranking standard: batch normalization is performed after each convolutional layer to reduce the network's dependence on input normalization and initialization; the convolution size, number of feature maps and pooling-layer size are evaluated through empirical experiments to prevent overfitting; average pooling and the ReLU activation function are introduced to accelerate training and reduce the effect of vanishing gradients; optimization is performed by stochastic gradient descent. Finally, similarity recognition is performed with the trained trace-feature convolutional neural network model, a similarity-matching ranking standard is constructed with the mean average precision and the receiver operating characteristic curve, and the classification and recognition results are comprehensively evaluated.
The invention has the beneficial effects that:
the method realizes the characteristic extraction of the bullet traces and the training and recognition of the model by the trace characteristic extraction mapping step and the two steps of the graph convolution neural network training and the similarity recognition, and can improve the recognition accuracy and the model training speed by adopting the graph convolution neural network for training.
Drawings
FIG. 1 is a schematic diagram of multi-scale registration of trace signals;
FIG. 2 is a schematic diagram of trace similarity matching deep learning model training;
FIG. 3 is a graph of the ReLU activation function.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the accompanying drawings and examples, which are not intended to limit the present invention.
As shown in figs. 1-2, in embodiment 1, the bullet trace similarity recognition method and system based on graph convolutional neural network deep learning is used for criminal investigation, bullet trace detection and other scenarios requiring trace comparison; the method comprises a trace feature extraction and mapping step and a graph convolutional neural network training and similarity recognition step.
The trace feature extraction and mapping step comprises the following steps:
S1: perform a single transverse scan of the section trace of the cable cutter to be examined with a single-point laser trace detection device to obtain a one-dimensional discretized sequence f(n), where n = 0, 1, ..., N and N is the number of sampling points;
S2: perform an m-layer multi-scale wavelet transform on f(n) to obtain the wavelet components at different time scales:
where a_m is the approximation data of the m-th layer, d_i is the detail data of the i-th layer, and the scale is S = 2^m;
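The m-layer decomposition above can be sketched numerically; the Haar wavelet and the function name are our assumptions, since the patent does not name a wavelet family:

```python
import numpy as np

def haar_multiscale(f, m):
    # m-layer Haar wavelet decomposition of a 1-D trace sequence f(n).
    # Returns (a_m, [d_1, ..., d_m]): the m-th layer approximation data and
    # the detail data of each layer; the scale at layer m is S = 2**m.
    # Assumes the number of sampling points is divisible by 2**m.
    a = np.asarray(f, dtype=float)
    details = []
    for _ in range(m):
        even, odd = a[0::2], a[1::2]
        details.append((even - odd) / np.sqrt(2.0))  # detail d_i at this scale
        a = (even + odd) / np.sqrt(2.0)              # coarser approximation
    return a, details

# Example: N = 8 sampling points, m = 2 layers
f = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a2, (d1, d2) = haar_multiscale(f, 2)
```

Because each step is an orthonormal transform, the energy of f(n) is preserved across the approximation and detail components, which is a quick sanity check for any implementation.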
S3: let diHas a parameterized profile curve of betaWhere D is the domain of determination of the parameterization,for the real number set, define | | · | | asEuclidean 2 norm of (a), defining a continuous mappingDefining the shape of beta using a square root velocity functionWherein:
for each oneThere is a beta curve which can be defined by the square root velocity function of q, this curve passing throughIs obtained byThe beta curve is scaled to unit length to achieve scale invariance. To this end, the profile curve β is represented in the pre-shaped spaceUnit of (1) hyper-sphere point xiWait for mapping into convolutional neural networkAn embedding layer of (a);
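A minimal numerical sketch of this square-root velocity representation (the discretization and names are ours, not the patent's implementation):

```python
import numpy as np

def srvf(beta, t):
    # Square-root velocity function q(t) = beta'(t) / sqrt(||beta'(t)||)
    # of a discretized contour beta (n x 2 array) parameterized by t in [0, 1].
    # beta is first rescaled to unit length, so q becomes a point on the
    # unit hypersphere of the pre-shape space: integral of ||q||^2 dt = 1.
    beta = np.asarray(beta, dtype=float)
    length = np.linalg.norm(np.diff(beta, axis=0), axis=1).sum()
    beta = beta / length                        # scale invariance
    dot = np.gradient(beta, t, axis=0)          # numerical beta'(t)
    speed = np.linalg.norm(dot, axis=1)
    return dot / np.sqrt(np.maximum(speed, 1e-12))[:, None]

t = np.linspace(0.0, 1.0, 1001)
circle = np.c_[np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)]
q = srvf(circle, t)
```

Since ‖q(t)‖² = ‖β'(t)‖, the integral of ‖q‖² over D equals the curve length, so after unit-length scaling the integral is 1 and x_i indeed lies on the unit hypersphere.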
S4: repeat steps S1 to S3 for all M cable-cutter section detection traces to obtain the unit-hypersphere points of the mapped contour curves corresponding to each of the M traces, forming a sample set X = {x1, x2, ..., xM}.
The graph convolutional neural network training comprises 1) establishing a training set and 2) tuning parameters and building the graph convolutional neural network model. The specific method of step 2) is as follows. Let G = (V, E), where V is the set of nodes and E is the set of edges, i.e. E ⊆ V × V. The training model consists of two parts: 1) a GCN component responsible for sampling all node information within the K-order neighborhood; 2) an autoencoder (AE) component that extracts hidden features from the activation matrix A learned by the GCN component and, combined with Laplacian eigenmaps (LE), preserves the node cluster structure. Centered on each node v_i of the training model, the GCN component uses a graph convolutional neural network to sample the structure and feature information of all nodes within K steps, i.e. to encode the K-order neighborhood information, and, trained with the node labels, generates the activation matrix A used as the input of the autoencoder component. Through supervised learning based on node labels, the GCN simultaneously encodes the local structure and feature information of the network, omitting secondary structure information outside the K-order neighborhood that has little influence on the generated low-dimensional node vectors. The activation matrix A learned by the GCN is used as the input of the autoencoder, which further extracts feature information from A by unsupervised learning and, combined with Laplacian eigenmaps, maps the original network to a lower-dimensional space.
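The K-order neighborhood encoding can be illustrated with one common GCN propagation rule (the renormalized form; the patent does not spell out its exact layer equation, so this is an assumption):

```python
import numpy as np

def gcn_forward(adj, X, weights):
    # Minimal GCN forward pass. Each layer propagates over 1-hop
    # neighbours, so stacking K layers encodes the K-order neighbourhood
    # of every node; the final activations play the role of the matrix A
    # fed to the autoencoder component described above.
    A_hat = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    P = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 (A+I) D^-1/2
    H = X
    for W in weights:
        H = np.maximum(P @ H @ W, 0.0)          # ReLU activation
    return H

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)     # a 4-node chain graph
X = rng.standard_normal((4, 3))                 # node feature vectors
weights = [rng.standard_normal((3, 8)), rng.standard_normal((8, 2))]  # K = 2
A_act = gcn_forward(adj, X, weights)
```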
The two components are linearly combined and trained jointly with the training set using the stacking method from ensemble learning; the low-dimensional vector representation of each node obtained by the whole model retains both the node's feature information and the structure. The GCN component and the AE component are linearly combined by stacking, and two hyperparameters α and β control the loss functions of the two components,
wherein, the loss function of the node sampling component is as follows:
The loss function of the self-encoder component AE is:
where β is the weight of the loss function of the autoencoder (AE) component.
Finally, the loss function of the training model is defined as:
where y_i is the true label of the node, ŷ_i is the predicted label of the GCN, A is the activation matrix, K is the neighborhood order of node v_i, Â is the reconstructed activation matrix, H^(l) is the hidden-layer representation of the l-th layer of the autoencoder AE, and L is the number of hidden layers of the AE.
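As a hedged sketch of how the two hyperparameters combine the component losses (the per-term forms below are illustrative assumptions, not the patent's exact formulas):

```python
import numpy as np

def total_loss(y, y_pred, A, A_rec, H, lap, alpha, beta):
    # Stacked objective sketch: supervised GCN term, a Laplacian-eigenmap
    # term (weight alpha) that preserves cluster structure, and an AE
    # reconstruction term (weight beta). Only the linear combination by
    # the two hyperparameters is taken from the text above.
    eps = 1e-12
    l_gcn = float(-np.mean(y * np.log(y_pred + eps)
                           + (1.0 - y) * np.log(1.0 - y_pred + eps)))
    l_le = float(np.trace(H.T @ lap @ H))       # trace(H^T L H) >= 0
    l_ae = float(np.mean((A - A_rec) ** 2))     # reconstruction error
    return l_gcn + alpha * l_le + beta * l_ae

# Toy graph Laplacian L = D - Adj for 3 nodes in a chain
adjm = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
lap = np.diag(adjm.sum(axis=1)) - adjm
y = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.8])
A = np.eye(3)
A_rec = 0.9 * np.eye(3)
H = np.array([[0.1], [0.2], [0.1]])
loss = total_loss(y, y_pred, A, A_rec, H, lap, alpha=0.5, beta=0.5)
```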
The model optimization part uses the TensorFlow framework with graphics card (GPU) acceleration, and the AdamOptimizer provided by TensorFlow is used to update the model parameters; it improves traditional gradient descent by using momentum (a moving average of the gradients) and dynamically adjusts the step size, so that the model can be trained quickly and effectively. The model parameters are updated on only one mini-batch at a time, further reducing memory usage during model training.
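A plain-NumPy sketch of one Adam update of the kind TensorFlow's AdamOptimizer performs (the hyperparameter values are the usual defaults, assumed here):

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update: momentum (a moving average of gradients) plus an
    # adaptive per-parameter step size; purely an illustrative sketch.
    m = b1 * m + (1.0 - b1) * grad
    v = b2 * v + (1.0 - b2) * grad ** 2
    m_hat = m / (1.0 - b1 ** t)                 # bias correction
    v_hat = v / (1.0 - b2 ** t)
    return param - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Minimize f(x) = x^2, one (mini-batch) gradient 2x per step
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 501):
    x, m, v = adam_step(x, 2.0 * x, m, v, t)
```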
The similarity recognition comprises the following steps:
S1: form triplets: randomly select a sample x_p1 from the training sample set X, then randomly select a sample x_p2 formed by the same tool as x_p1 and a sample x_o formed by a different tool, forming a triplet T = (x_p1, x_p2, x_o);
f(x_i) is the embedding of x_i, and the dimension of the embedding layer is controlled by the size of the last layer of the network branch;
S2: triplet selection and data enhancement: evaluate the four strategies of full contour, contour rearrangement, contour segmentation and patch, and select the most suitable one through actual testing to implicitly define the relevant features and the features to be suppressed by the convolutional neural network, avoiding the situation where, due to weight sharing in the convolutional neural network, all samples are suppressed or samples are distinguished only by local features;
S3: build a graph convolutional neural network based on triplet loss, formed by connecting three parallel convolutional neural network branches to a triplet loss layer;
The distances between all samples are used, and the Softmax layer and the root-mean-square criterion are used to realize Δ+ satisfying Δ+ < Δ1+ and Δ+ < Δ* = min(Δ1-, Δ2-), so as to simplify the training sample selection process; the loss is defined as:
The L2 norm is used to evaluate the distance between marks in the embedding layer, and the loss function is used to minimize the local difference between matched marks, completing the similarity calculation.
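The triplet objective described above can be sketched as follows; the margin value is an assumption, as the patent does not fix one:

```python
import numpy as np

def triplet_loss(f_p1, f_p2, f_o, margin=0.2):
    # Pull the same-tool pair (x_p1, x_p2) together in the embedding and
    # push the different-tool sample x_o away by at least `margin` in
    # squared L2 distance.
    d_pos = float(np.sum((f_p1 - f_p2) ** 2))
    d_neg = float(np.sum((f_p1 - f_o) ** 2))
    return max(0.0, d_pos - d_neg + margin)

anchor = np.array([0.0, 0.0])
positive = np.array([0.1, 0.0])   # same tool, nearby embedding
negative = np.array([1.0, 1.0])   # different tool, distant embedding
easy = triplet_loss(anchor, positive, negative)   # well separated: loss 0
hard = triplet_loss(anchor, negative, positive)   # violated: positive loss
```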
For the full contour in S2, random vertical cropping is adopted to increase sample variability during training, and center cropping is used for similarity calculation. For contour rearrangement in S2, the negative and positive samples are randomly rearranged by the same factor, the rearrangement is applied simultaneously across the whole triplet, and center cropping without rearrangement is used for the identity calculation. For contour segmentation and random contour cropping in S2, the positive and negative samples are analyzed independently, and contour segmentation is used to pre-train the lower layers of the full-contour triplet network to complete the identity calculation. For the patch strategy in S2, random blocks are cut out of the input contour, similar to contour segments; the positive and negative samples are ensured not to overlap, and horizontal inversion of the samples is performed at random.
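The cropping strategies can be sketched on a 1-D trace profile (the function names and sizes are ours):

```python
import numpy as np

def random_vertical_crop(profile, out_len, rng):
    # Full-contour strategy: random crop during training.
    start = int(rng.integers(0, len(profile) - out_len + 1))
    return profile[start:start + out_len]

def center_crop(profile, out_len):
    # Deterministic center crop used at similarity-calculation time.
    start = (len(profile) - out_len) // 2
    return profile[start:start + out_len]

def random_flip(profile, rng):
    # Patch-strategy detail: horizontal inversion applied at random.
    return profile[::-1] if rng.random() < 0.5 else profile

rng = np.random.default_rng(42)
profile = np.arange(100, dtype=float)
train_view = random_vertical_crop(profile, 64, rng)
eval_view = center_crop(profile, 64)
flipped = random_flip(profile, rng)
```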
In S3, the graph convolutional neural network based on triplet loss is built, and the structural optimization and ranking standard of the convolutional neural network are established: batch normalization is performed after each convolutional layer to reduce the network's dependence on input normalization and initialization; the convolution size, number of feature maps and pooling-layer size are evaluated through empirical experiments to prevent overfitting; average pooling and the ReLU activation function are introduced to accelerate training and reduce the effect of vanishing gradients; optimization is performed by stochastic gradient descent. Finally, similarity recognition is performed with the trained trace-feature convolutional neural network model, a similarity-matching ranking standard is constructed with the mean average precision and the receiver operating characteristic curve, and the classification and recognition results are comprehensively evaluated.
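The batch normalization step can be sketched as follows (a minimal training-time version; the learned running statistics used at inference are omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Batch normalization over a mini-batch (rows = samples): standardizes
    # each feature, reducing dependence on input normalization and weight
    # initialization; gamma and beta are the learned scale and shift.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(1)
x = 3.0 + 10.0 * rng.standard_normal((256, 4))   # badly scaled activations
y = batch_norm(x, gamma=1.0, beta=0.0)
```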
When used for back-propagation, the ReLU activation function avoids vanishing gradients. ReLU sets the outputs of some neurons to 0, which makes the network sparse, reduces the interdependence of parameters and alleviates the overfitting problem, and its derivative is simpler than those of the sigmoid and tanh activation functions. With sigmoid-type functions, computing the activation is expensive (exponential operations) and back-propagating the error gradient involves division, so the amount of computation is relatively large; adopting the ReLU activation function greatly reduces the computation of the whole process. As shown in fig. 3, the ReLU activation function is f(x) = max(0, x).
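The computational argument above can be made concrete (a sketch; the values follow from the standard definitions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)            # f(x) = max(0, x); zeros give sparsity

def drelu(x):
    return (x > 0).astype(float)         # gradient is 0 or 1: no division

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))      # exponential: costlier to evaluate

def dsigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)                 # never exceeds 0.25, so gradients
                                         # shrink when layers are stacked

x = np.linspace(-5.0, 5.0, 11)
```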
finally, it should be noted that: the above examples are only used to illustrate the technical solution of the present invention and not to limit it; although the present disclosure has been described in detail with reference to preferred embodiments, those of ordinary skill in the art will understand that: the specific embodiments of the present disclosure may be modified or equivalents may be substituted for elements thereof; without departing from the spirit of the present disclosure, it is intended to cover all such modifications and variations as fall within the true spirit and scope of the invention.
Claims (9)
1. A bullet trace similarity recognition method and system based on graph convolutional neural network deep learning, characterized in that: the method and system are used for criminal investigation, bullet trace detection and other scenarios requiring trace comparison, and comprise a trace feature extraction and mapping step and a graph convolutional neural network training and similarity recognition step.
2. The bullet trace similarity recognition method and system based on graph convolutional neural network deep learning according to claim 1, characterized in that the trace feature extraction and mapping step comprises the following steps:
S1: perform a single transverse scan of the section trace of the cable cutter to be examined with a single-point laser trace detection device to obtain a one-dimensional discretized sequence f(n), where n = 0, 1, ..., N and N is the number of sampling points;
S2: perform an m-layer multi-scale wavelet transform on f(n) to obtain the wavelet components at different time scales:
where a_m is the approximation data of the m-th layer, d_i is the detail data of the i-th layer, and the scale is S = 2^m;
S3: let the parameterized contour curve of d_i be β: D → R², where D is the parameter domain and R is the set of real numbers; define ‖·‖ as the Euclidean 2-norm on R², and define a continuous mapping q: D → R². The shape of β is defined using the square-root velocity function q(t) = β'(t)/√(‖β'(t)‖). For each q there is a curve β defined by its square-root velocity function, recovered through β(t) = β(0) + ∫₀ᵗ q(s)‖q(s)‖ ds. The β curve is scaled to unit length to achieve scale invariance. At this point, the contour curve β is represented as a point x_i on the unit hypersphere of the pre-shape space, waiting to be mapped into the embedding layer of the convolutional neural network;
S4: repeat steps S1 to S3 for all M cable-cutter section detection traces to obtain the unit-hypersphere points of the mapped contour curves corresponding to each of the M traces, forming a sample set X = {x1, x2, ..., xM}.
3. The bullet trace similarity recognition method and system based on graph convolutional neural network deep learning according to claim 1 or 2, characterized in that: the graph convolutional neural network training comprises 1) establishing a training set and 2) tuning parameters and building the graph convolutional neural network model. The specific method of step 2) is as follows. Let G = (V, E), where V is the set of nodes and E is the set of edges, i.e. E ⊆ V × V. The training model consists of two parts: 1) a GCN component responsible for sampling all node information within the K-order neighborhood; 2) an autoencoder (AE) component that extracts hidden features from the activation matrix A learned by the GCN component and, combined with Laplacian eigenmaps (LE), preserves the node cluster structure. Centered on each node v_i of the training model, the GCN component uses a graph convolutional neural network to sample the structure and feature information of all nodes within K steps, i.e. to encode the K-order neighborhood information, and, trained with the node labels, generates the activation matrix A used as the input of the autoencoder component. Through supervised learning based on node labels, the GCN simultaneously encodes the local structure and feature information of the network, omitting secondary structure information outside the K-order neighborhood that has little influence on the generated low-dimensional node vectors. The activation matrix A learned by the GCN is used as the input of the autoencoder, which further extracts feature information from A by unsupervised learning and, combined with Laplacian eigenmaps, maps the original network to a lower-dimensional space.
4. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning as claimed in claim 1 or 2, wherein: the similarity identification comprises the following steps: S1: forming a triplet: randomly select a sample x_p1 from the training sample set X, then randomly select a sample x_p2 formed by the same tool as x_p1 and a sample x_o formed by a different tool, thus forming a triplet T = {x_p1, x_p2, x_o}; f(x_i) is the embedding of x_i, and the dimension of the embedding layer is controlled by the size of the last layer of the network branch;
s2: triplet selection and data augmentation: the four strategies of full contour, contour rearrangement, contour segmentation, and patches are evaluated separately, and the most suitable strategy is selected through actual testing to implicitly define the relevant features, and the features to be suppressed, by the convolutional neural network, avoiding situations where, due to weight sharing in the convolutional neural network, all samples are suppressed or samples are distinguished only by local features;
s3: constructing a triplet-loss-based graph convolution neural network, formed by connecting three parallel convolutional neural network branches to a triplet loss layer;
the distances between all samples are used, and the Softmax layer together with the root-mean-square criterion is used so that Δ+ satisfies being less than Δ1+ and less than Δ* = min(Δ1−, Δ2−), simplifying the training sample selection process; the loss is defined over these distances: the L2 norm is used to evaluate the distance between marks in the embedding layer, and the loss function is used to minimize the local difference between matched marks, completing the similarity calculation.
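The triplet loss over T = {x_p1, x_p2, x_o}, with distances measured by the L2 norm in the embedding layer, can be sketched in its standard hinge form. The margin value and the toy embeddings below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def triplet_loss(f_a, f_p, f_n, margin=0.2):
    """Hinge triplet loss: pull the matched pair (anchor, positive)
    together and push the non-matching sample away by at least `margin`,
    with distances measured by the L2 norm in the embedding layer."""
    d_pos = np.linalg.norm(f_a - f_p)   # distance between same-tool marks
    d_neg = np.linalg.norm(f_a - f_n)   # distance to a different-tool mark
    return max(0.0, d_pos - d_neg + margin)

# toy embeddings f(x) for a triplet T = {x_p1, x_p2, x_o}
f_p1 = np.array([1.0, 0.0])
f_p2 = np.array([0.9, 0.1])    # same tool: close to f_p1
f_o  = np.array([-1.0, 0.5])   # different tool: far away
loss = triplet_loss(f_p1, f_p2, f_o)
assert loss == 0.0             # a well-separated triplet incurs no loss
```

Minimizing this loss over many triplets is what drives same-tool marks together and different-tool marks apart in the embedding space.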
5. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning according to claim 4, wherein: the full-contour strategy in S2 adopts random vertical cropping during training to increase sample variability, while center cropping is used for the similarity calculation.
6. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning according to claim 4, wherein: in the contour-rearrangement strategy of S2, the negative and positive samples are randomly rearranged by the same factor, applied simultaneously across the whole triplet, and center cropping without rearrangement is used for the similarity calculation.
7. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning according to claim 4, wherein: in the contour-segmentation strategy of S2, contours are randomly sheared and the positive and negative samples are analyzed independently; the contour segments are used to pre-train the lower layers of the full-contour triplet network to complete the similarity calculation.
8. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning according to claim 4, wherein: in the patch strategy of S2, random blocks are cropped from the input contour, similar to the contour segments, ensuring that the positive and negative samples do not overlap; horizontal flipping of the samples is applied randomly.
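The strategies of claims 5 through 8 amount to different 1-D cropping and flipping schemes on the contour signal. A hedged sketch of two of them, with crop widths and helper names chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_crop(profile: np.ndarray, width: int) -> np.ndarray:
    """Full-contour strategy (claim 5): random crop during training."""
    start = rng.integers(0, len(profile) - width + 1)
    return profile[start:start + width]

def center_crop(profile: np.ndarray, width: int) -> np.ndarray:
    """Center crop, used at similarity-calculation time."""
    start = (len(profile) - width) // 2
    return profile[start:start + width]

def random_patch_pair(profile: np.ndarray, width: int):
    """Patch strategy (claim 8): two random non-overlapping blocks,
    with a random horizontal flip applied to one of them."""
    a = rng.integers(0, len(profile) - 2 * width + 1)
    b = rng.integers(a + width, len(profile) - width + 1)  # no overlap
    p1, p2 = profile[a:a + width], profile[b:b + width]
    if rng.random() < 0.5:
        p1 = p1[::-1]   # random horizontal flip
    return p1, p2

profile = rng.normal(size=256)
assert len(random_crop(profile, 64)) == 64
assert len(center_crop(profile, 64)) == 64
p1, p2 = random_patch_pair(profile, 32)
assert len(p1) == 32 and len(p2) == 32
```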
9. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning as claimed in any one of claims 5, 6, 7 and 8, wherein: S3 constructs a triplet-loss-based graph convolution neural network; the structure of the convolutional neural network is optimized and a ranking criterion is established as follows: batch normalization is applied after each convolutional layer to reduce the network's dependence on input normalization and initialization; the convolution size, number of feature maps, and pooling layer size are evaluated through empirical experiments to prevent overfitting; average pooling and the ReLU activation function are introduced to accelerate training and reduce the impact of vanishing gradients; optimization is performed by stochastic gradient descent; finally, similarity recognition is performed with the trained trace-feature convolutional neural network model, and a similarity-matching ranking criterion is established using the mean average precision and the receiver operating characteristic curve to comprehensively evaluate the classification and recognition results.
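The training choices in this claim (batch normalization after each convolutional layer, ReLU activation, stochastic gradient descent) can each be illustrated in isolation. A minimal numpy sketch of a batch-norm-then-ReLU forward pass and one SGD update; all shapes, the learning rate, and the toy gradient are illustrative assumptions:

```python
import numpy as np

def batchnorm_relu(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Batch normalization over the batch axis, followed by ReLU.
    Normalizing per feature reduces dependence on how the inputs were
    scaled and how the weights were initialized (claim 9)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return np.maximum(0.0, x_hat)

def sgd_step(w: np.ndarray, grad: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """One stochastic-gradient-descent parameter update."""
    return w - lr * grad

rng = np.random.default_rng(3)
batch = rng.normal(loc=5.0, scale=3.0, size=(16, 8))  # un-normalized activations
out = batchnorm_relu(batch)
assert out.shape == (16, 8) and (out >= 0).all()

w = rng.normal(size=(8,))
w_new = sgd_step(w, grad=2.0 * w)   # gradient of ||w||^2 shrinks w
assert np.linalg.norm(w_new) < np.linalg.norm(w)
```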
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010345147.XA CN112381108A (en) | 2020-04-27 | 2020-04-27 | Bullet trace similarity recognition method and system based on graph convolution neural network deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112381108A true CN112381108A (en) | 2021-02-19 |
Family
ID=74586308
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010345147.XA Pending CN112381108A (en) | 2020-04-27 | 2020-04-27 | Bullet trace similarity recognition method and system based on graph convolution neural network deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112381108A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2009124396A (en) * | 2009-06-23 | 2010-12-27 | Закрытое акционерное общество "Лазерные диагностические инструменты-Русприбор" (RU) | METHOD FOR AUTOMATIC RECOGNITION OF FIRES OF FIRE-SHOT WEAPON ON THE SIDE SURFACE IMAGE OF THE PULSE (OR CASES) |
CN111639664A (en) * | 2020-04-07 | 2020-09-08 | 昆明理工大学 | Line trace batch comparison system based on multi-strategy mode |
Non-Patent Citations (5)
Title |
---|
NAN PAN et al.: "A Study of the Shearing Section Trace Matching Technology Based on Elastic Shape Metric and Deep Learning", Sensors and Materials * |
O. GIUDICE et al.: "Siamese Ballistics Neural Network", 2019 IEEE International Conference on Image Processing (ICIP) * |
PAN Nan et al.: "Research on a fast tracing algorithm for wavelet-domain features of nonlinear line traces", Journal of Electronic Measurement and Instrumentation * |
WANG Jie et al.: "Semi-supervised network representation learning model based on graph convolutional network and auto-encoder", Pattern Recognition and Artificial Intelligence * |
CHENG Lin: "Research on tool-mark image recognition based on wavelet transform", Journal of Chuzhou University * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113744238A (en) * | 2021-09-01 | 2021-12-03 | 南京工业大学 | Method for establishing bullet trace database |
CN113744238B (en) * | 2021-09-01 | 2023-08-01 | 南京工业大学 | Method for establishing bullet trace database |
CN113806547A (en) * | 2021-10-15 | 2021-12-17 | 南京大学 | Deep learning multi-label text classification method based on graph model |
CN113806547B (en) * | 2021-10-15 | 2023-08-11 | 南京大学 | Deep learning multi-label text classification method based on graph model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210049423A1 (en) | Efficient image classification method based on structured pruning | |
Lu et al. | Object detection based on SSD-ResNet | |
CN108921019B (en) | Gait recognition method based on GEI and TripletLoss-DenseNet | |
CN109002755B (en) | Age estimation model construction method and estimation method based on face image | |
CN108875933B (en) | Over-limit learning machine classification method and system for unsupervised sparse parameter learning | |
CN110097060B (en) | Open set identification method for trunk image | |
CN112528928B (en) | Commodity identification method based on self-attention depth network | |
CN105913081B (en) | SAR image classification method based on improved PCAnet | |
CN110188827B (en) | Scene recognition method based on convolutional neural network and recursive automatic encoder model | |
CN109871749B (en) | Pedestrian re-identification method and device based on deep hash and computer system | |
CN111259917B (en) | Image feature extraction method based on local neighbor component analysis | |
CN109871379B (en) | Online Hash nearest neighbor query method based on data block learning | |
CN111273288B (en) | Radar unknown target identification method based on long-term and short-term memory network | |
CN112381108A (en) | Bullet trace similarity recognition method and system based on graph convolution neural network deep learning | |
CN113011243A (en) | Facial expression analysis method based on capsule network | |
CN110991554B (en) | Improved PCA (principal component analysis) -based deep network image classification method | |
Mamatkulovich | Lightweight residual layers based convolutional neural networks for traffic sign recognition | |
CN115131558A (en) | Semantic segmentation method under less-sample environment | |
CN111310820A (en) | Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration | |
CN108388918B (en) | Data feature selection method with structure retention characteristics | |
CN108496174B (en) | Method and system for face recognition | |
Husain et al. | Face recognition method based on residual convolution neural network | |
CN113297964A (en) | Video target recognition model and method based on deep migration learning | |
CN111401434A (en) | Image classification method based on unsupervised feature learning | |
CN115393631A (en) | Hyperspectral image classification method based on Bayesian layer graph convolution neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210219 |