CN112381108A - Bullet trace similarity recognition method and system based on graph convolution neural network deep learning - Google Patents

Bullet trace similarity recognition method and system based on graph convolution neural network deep learning

Info

Publication number
CN112381108A
CN112381108A
Authority
CN
China
Prior art keywords
neural network
convolution neural
graph convolution
trace
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010345147.XA
Other languages
Chinese (zh)
Inventor
潘楠
沈鑫
钱俊兵
黎兰豪崎
赵成俊
夏丰领
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN202010345147.XA priority Critical patent/CN112381108A/en
Publication of CN112381108A publication Critical patent/CN112381108A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V10/464 Salient features, e.g. scale invariant feature transforms [SIFT] using a plurality of salient features, e.g. bag-of-words [BoW] representations

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a bullet trace similarity recognition method and system based on graph convolution neural network deep learning, belonging to the technical field of criminal investigation. Through two steps, a trace feature extraction and mapping step and a graph convolution neural network training and similarity recognition step, the features of the bullet trace are extracted and the model is trained for recognition; training with a graph convolution neural network improves recognition accuracy.

Description

Bullet trace similarity recognition method and system based on graph convolution neural network deep learning
Technical Field
The invention belongs to the technical field of criminal investigation, and particularly relates to a bullet trace similarity recognition method and system based on graph convolution neural network deep learning.
Background
The rifling mark on a fired bullet is a concave strip-shaped (linear) trace formed on the surface of the bullet jacket by the squeezing, shearing and scraping action of the inner surface of the barrel as the bullet is forced through the bore. Because the extrusion forces exerted on the jacket by the grooves (negative rifling) and lands (positive rifling) differ, and because the two edge sides of each land shear and scrape the jacket, the parts of the jacket in direct contact with the land surfaces are compressed and deformed into depressions, making them clearly distinguishable from the parts corresponding to the groove surfaces. China enforces a strict firearm control policy under which official firearms are registered and filed; the number of such firearms is huge (tens of thousands in an ordinary prefecture-level city).
Compared with the traditional approach of observation under a microscope and manual comparison of morphological features, the image recognition and three-dimensional scanning technologies that have emerged in recent years offer new solutions for the quantitative inspection of rifling linear traces.
In recent years, image processing and three-dimensional scanning have been widely applied to linear trace inspection; however, adverse factors such as the randomness of field trace feature expression, complex algorithm structures and large file sizes greatly restrict their practical application potential and value.
Disclosure of Invention
The method realizes feature extraction of bullet traces and training and recognition of the model through two steps: a trace feature extraction and mapping step, and a graph convolution neural network training and similarity recognition step. Training with a graph convolution neural network improves recognition accuracy.
In order to achieve this purpose, the invention is realized by the following technical scheme: the bullet trace similarity recognition method and system based on graph convolution neural network deep learning is used for criminal investigation, bullet trace detection and other scenes requiring trace comparison.
Preferably, the trace feature extraction and mapping step includes the following steps:
S1: perform a single transverse scan of the cable-cutter shear cross-section trace to be inspected using a single-point laser trace detection device, obtaining a one-dimensional discretized sequence f(n), n = 0, 1, ..., N, where N is the number of sampling points;
S2: perform an m-layer multi-scale wavelet transform on f(n) to obtain wavelet components at different time scales:

f(n) = a_m(n) + Σ_{i=1}^{m} d_i(n)

where a_m is the approximation data of the m-th layer, d_i is the detail data of the i-th layer, and the scale S = 2^m;
S3: let diHas a parameterized profile curve of beta
Figure BDA0002469801130000022
Where D is the domain of determination of the parameterization,
Figure BDA0002469801130000023
for the real number set, define | | · | | as
Figure BDA0002469801130000024
Euclidean 2 norm of (a), defining a continuous mapping
Figure BDA0002469801130000025
Defining the shape of beta using a square root velocity function
Figure BDA0002469801130000026
Wherein:
Figure BDA0002469801130000027
for each one
Figure BDA0002469801130000028
There is a beta curve which can be defined by the square root velocity function of q, this curve passing through
Figure BDA0002469801130000029
Is obtained by
Figure BDA00024698011300000210
The beta curve is scaled to unit length to achieve scale invariance. To this end, the profile curve β is represented in the pre-shaped space
Figure BDA00024698011300000211
Unit of (1) hyper-sphere point xiWaiting for mapping into an embedded layer of the convolutional neural network;
s4: repeating the steps S11 and S12 on all M cable head breaking clamp section detection traces to obtain the hypersphere points of the mapping unit of the contour curve respectively corresponding to the M cable head breaking clamp section detection traces, thereby forming a sample set X ═ { X ═ X1,x2,...xN}。
Preferably, the graph convolution neural network training includes: 1) establishing a training set; 2) tuning parameters and establishing the graph convolution neural network model. The specific method of 2) is as follows: denote the graph by G = (V, E), where V = {v_1, v_2, ..., v_N} represents the set of nodes and E represents the set of edges, i.e. E ⊆ V × V. The training model consists of two parts: 1) a GCN component responsible for sampling all node information within the K-order neighborhood; 2) an autoencoder (AE) component used to extract hidden features from the activation value matrix A learned by the GCN component and, combined with Laplacian eigenmaps (LE), to preserve the node cluster structure. Taking each node v_i ∈ V of the training model as the center, the GCN component uses the graph convolution neural network to sample the structure and feature information of all nodes within K steps, i.e. it encodes the K-order neighborhood information and, trained together with the labels of the nodes, generates the activation value matrix A used as the input of the autoencoder component. Through supervised learning based on node labels, the GCN can simultaneously encode the local structure and feature information of the network, omitting secondary structural information outside the K-order neighborhood that has little influence on the generated low-dimensional node vectors. The activation value matrix A learned by the GCN is used as the input of the autoencoder, which further extracts feature information from A in an unsupervised learning manner and, combined with Laplacian eigenmaps, maps the original network to a lower-dimensional space.
Preferably, the similarity recognition comprises the following steps:
S1: form triples: randomly select a sample x_p1 from the training sample set X, then randomly select a sample x_p2 formed by the same tool as x_p1 and a sample x_o formed by a different tool, thereby forming a triple T = {x_p1, x_p2, x_o}; f(x_i) denotes the embedding of x_i, and the dimension of the embedding layer is controlled by the size of the last layer of the network branch;
S2: triple selection and data enhancement: the four strategies of full contour, contour rearrangement, contour segmentation and patches are evaluated separately, and the most appropriate strategy is selected through actual testing to implicitly define the relevant features and the features to be suppressed by the convolutional neural network, avoiding situations in which weight sharing in the convolutional neural network causes all samples to be suppressed or samples to be distinguished only by local features;
S3: construct a graph convolution neural network based on triplet loss, formed by connecting three parallel convolutional neural networks to a triplet loss layer;
the distances between all samples are used, and Δ is achieved using the Softmax layer and the root mean square standard+While satisfying less than Δ1 +And is less than Delta*=min(Δ1 -2 -) To simplify the training sample selection process, the loss is defined as:
Figure BDA0002469801130000031
mixing L with2The norm is used for evaluating the distance between marks in the embedded layer, and the loss function is utilized to minimize the local difference value between the matched marks, so that the similarity calculation is completed.
Preferably, the full-contour strategy in S2 uses random vertical cropping to increase sample variability during training, while center cropping is used for the similarity calculation.
Preferably, in the contour rearrangement of S2, the negative and positive samples are randomly permuted by the same factor, the permutation is applied simultaneously across the whole triple, and center cropping without rearrangement is used for the identity calculation.
Preferably, in the contour segmentation and random contour cropping of S2, the positive and negative samples are analyzed independently, and contour segments are used to pre-train the lower layers of the full-contour triple network to complete the identity calculation.
Preferably, in the patch strategy of S2, random blocks are cropped from the input contour, similarly to the contour segments, ensuring that the positive and negative samples do not overlap; horizontal flipping of the samples is performed randomly.
Preferably, in S3 the graph convolution neural network based on triplet loss is constructed, the structure of the convolutional neural network is optimized and a ranking standard is established: batch normalization is performed after each convolution layer to reduce the network's dependence on input normalization and initialization; the convolution size, the number of feature maps and the pooling layer size are evaluated through empirical experiments to prevent overfitting; average pooling and the ReLU activation function are introduced to accelerate training and reduce the influence of vanishing gradients; optimization is performed by stochastic gradient descent; finally, similarity recognition is performed with the trained trace feature convolutional neural network model, a similarity matching ranking standard is constructed from the mean average precision and the receiver operating characteristic curve, and the classification and recognition results are comprehensively evaluated.
The invention has the beneficial effects that:
the method realizes the characteristic extraction of the bullet traces and the training and recognition of the model by the trace characteristic extraction mapping step and the two steps of the graph convolution neural network training and the similarity recognition, and can improve the recognition accuracy and the model training speed by adopting the graph convolution neural network for training.
Drawings
FIG. 1 is a schematic diagram of multi-scale registration of trace signals;
FIG. 2 is a schematic diagram of trace similarity matching deep learning model training;
FIG. 3 is a graph of the ReLU activation function.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the invention is described in further detail below with reference to the accompanying drawings and embodiments, which are not intended to limit the invention.
As shown in FIGS. 1-2, in embodiment 1 the bullet trace similarity recognition method and system based on graph convolution neural network deep learning is used for criminal investigation, bullet trace detection and other scenes requiring trace comparison; the method includes a trace feature extraction and mapping step and a graph convolution neural network training and similarity recognition step.
The trace feature extraction and mapping step comprises the following steps:
S1: perform a single transverse scan of the cable-cutter shear cross-section trace to be inspected using a single-point laser trace detection device, obtaining a one-dimensional discretized sequence f(n), n = 0, 1, ..., N, where N is the number of sampling points;
S2: perform an m-layer multi-scale wavelet transform on f(n) to obtain wavelet components at different time scales:

f(n) = a_m(n) + Σ_{i=1}^{m} d_i(n)

where a_m is the approximation data of the m-th layer, d_i is the detail data of the i-th layer, and the scale S = 2^m;
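As an illustrative sketch of step S2, the following Python fragment decomposes a scanned profile into an approximation a_m and details d_1, ..., d_m such that f = a_m + Σ d_i; the PyWavelets library, the 'db4' basis and m = 4 are assumptions for illustration only, since the disclosure does not name a library or wavelet family.

# Sketch of step S2: m-layer multi-scale wavelet decomposition of the scanned
# profile f(n). PyWavelets and 'db4' are illustrative assumptions.
import numpy as np
import pywt

def multiscale_decompose(f, wavelet="db4", m=4):
    """Split f into an approximation a_m and details d_1..d_m, each
    reconstructed to the original length so that f = a_m + sum_i d_i."""
    coeffs = pywt.wavedec(f, wavelet, level=m)      # [a_m, d_m, ..., d_1]
    n = len(f)
    components = []
    for k in range(len(coeffs)):
        # Keep one band, zero the rest, and reconstruct that band alone;
        # waverec is linear, so the per-band reconstructions sum to f.
        kept = [c if i == k else np.zeros_like(c) for i, c in enumerate(coeffs)]
        components.append(pywt.waverec(kept, wavelet)[:n])
    a_m, details = components[0], components[1:]    # details are d_m ... d_1
    return a_m, details[::-1]                       # return as d_1 ... d_m

f = np.random.rand(1024)                            # stand-in for a laser scan line
a_m, d = multiscale_decompose(f)
print(np.allclose(f, a_m + sum(d), atol=1e-8))      # True: f = a_m + sum(d_i)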
S3: let diHas a parameterized profile curve of beta
Figure BDA0002469801130000052
Where D is the domain of determination of the parameterization,
Figure BDA0002469801130000053
for the real number set, define | | · | | as
Figure BDA0002469801130000054
Euclidean 2 norm of (a), defining a continuous mapping
Figure BDA0002469801130000055
Defining the shape of beta using a square root velocity function
Figure BDA0002469801130000056
Wherein:
Figure BDA0002469801130000057
for each one
Figure BDA0002469801130000058
There is a beta curve which can be defined by the square root velocity function of q, this curve passing through
Figure BDA0002469801130000059
Is obtained by
Figure BDA00024698011300000510
The beta curve is scaled to unit length to achieve scale invariance. To this end, the profile curve β is represented in the pre-shaped space
Figure BDA00024698011300000511
Unit of (1) hyper-sphere point xiWait for mapping into convolutional neural networkAn embedding layer of (a);
s4: repeating the steps S11 and S12 on all M cable head breaking clamp section detection traces to obtain the hypersphere points of the mapping unit of the contour curve respectively corresponding to the M cable head breaking clamp section detection traces, thereby forming a sample set X ═ { X ═ X1,x2,...xN}。
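As an illustrative sketch of the mapping in steps S3-S4, the following Python fragment computes the square-root velocity function of a sampled profile curve and rescales it onto the unit hypersphere of the pre-shape space; the uniform parameter grid, the finite-difference derivative and the planar (n = 2) curve are assumptions for illustration.

# Sketch of step S3: map a sampled curve beta (N x 2, uniform grid on D = [0,1])
# to its unit-norm square-root velocity function q (a pre-shape sphere point).
import numpy as np

def srvf(beta):
    N = beta.shape[0]
    t = np.linspace(0.0, 1.0, N)
    beta_dot = np.gradient(beta, t, axis=0)              # velocity by finite differences
    speed = np.linalg.norm(beta_dot, axis=1)             # ||beta_dot(t)||
    q = beta_dot / np.sqrt(np.maximum(speed, 1e-12))[:, None]  # q = F(beta_dot)
    norm_q = np.sqrt(np.trapz((q ** 2).sum(axis=1), t))  # ||q|| = sqrt(curve length)
    return q / norm_q                                    # rescale: ||q|| = 1

theta = np.linspace(0.0, np.pi, 200)
beta = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # test curve: a half circle
q = srvf(beta)
t = np.linspace(0.0, 1.0, 200)
print(np.trapz((q ** 2).sum(axis=1), t))                 # ~1.0: unit pre-shape norm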
The graph convolution neural network training includes: 1) establishing a training set; 2) tuning parameters and establishing the graph convolution neural network model. The specific method of 2) is as follows: denote the graph by G = (V, E), where V = {v_1, v_2, ..., v_N} represents the set of nodes and E represents the set of edges, i.e. E ⊆ V × V. The training model consists of two parts: 1) a GCN component responsible for sampling all node information within the K-order neighborhood; 2) an autoencoder (AE) component used to extract hidden features from the activation value matrix A learned by the GCN component and, combined with Laplacian eigenmaps (LE), to preserve the node cluster structure. Taking each node v_i ∈ V of the training model as the center, the GCN component uses the graph convolution neural network to sample the structure and feature information of all nodes within K steps, i.e. it encodes the K-order neighborhood information and, trained together with the labels of the nodes, generates the activation value matrix A used as the input of the autoencoder component. Through supervised learning based on node labels, the GCN can simultaneously encode the local structure and feature information of the network, omitting secondary structural information outside the K-order neighborhood that has little influence on the generated low-dimensional node vectors. The activation value matrix A learned by the GCN is used as the input of the autoencoder, which further extracts feature information from A in an unsupervised learning manner and, combined with Laplacian eigenmaps, maps the original network to a lower-dimensional space.
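The following Python fragment is a minimal sketch of the GCN component's neighborhood encoding: one propagation layer aggregates 1-hop neighbors, so stacking K layers encodes the K-order neighborhood. The renormalized propagation rule H' = ReLU(D^(-1/2)(A+I)D^(-1/2)HW) is the standard Kipf-Welling form and is an assumption for illustration, since the disclosure does not spell out the propagation rule.

# Minimal GCN propagation sketch (assumed Kipf-Welling form, illustration only).
# Stacking K = 2 layers aggregates each node's 2-order neighborhood.
import numpy as np

def gcn_layer(adj, H, W):
    A_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W)  # ReLU

rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) > 0.6).astype(float)
adj = np.maximum(adj, adj.T)                           # symmetric: undirected graph
H0 = rng.standard_normal((6, 8))                       # node features, e.g. SRVF points
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 4))
A_act = gcn_layer(adj, gcn_layer(adj, H0, W1), W2)     # activation value matrix A
print(A_act.shape)                                     # (6, 4)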
The two components are linearly combined with the training set using the stacking method (Stacking) from ensemble learning, so that the low-dimensional vector representation of the nodes obtained by the whole model retains both the feature information of the nodes and the structure. The GCN component and the AE component are linearly combined by means of stacking, and the loss functions of the two components are controlled by two hyper-parameters α and β,
wherein the loss function of the node sampling component is the supervised cross-entropy over the labeled nodes:

ℒ_GCN = −Σ_i y_i ln ŷ_i

and α is the weight of the node sampling component loss function. The loss function of the autoencoder component AE is the reconstruction error:

ℒ_AE = ‖Â − A‖²

and β is the weight of the autoencoder component loss function. Finally, the loss function of the training model is defined as:

ℒ = α · ℒ_GCN + β · ℒ_AE
where y_i is the true label of a node, ŷ_i is the predicted label of the GCN, A is the activation value matrix, K is the neighborhood order of node v_i, Â is the reconstructed activation value matrix, H^(l) is the hidden-layer representation of the l-th layer of the autoencoder AE, and L is the number of hidden layers of the AE.
The model optimization part uses the TensorFlow framework with graphics card (GPU) acceleration, and the AdamOptimizer provided by TensorFlow is used to update the model parameters; it improves traditional gradient descent with momentum (i.e. moving averages) and dynamically adjusts the hyper-parameters, allowing the model to be trained quickly and effectively. The model parameters are updated on only one batch at a time, further reducing memory occupation during model training.
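As an illustrative sketch of the stacked objective and the TensorFlow optimization just described, the following fragment performs one Adam update on the combined loss ℒ = α·ℒ_GCN + β·ℒ_AE; the model interface returning the GCN predictions, the AE reconstruction and the activation matrix, and the concrete loss forms, are assumptions consistent with the reconstruction above rather than the disclosure's exact code.

# One optimization step on the stacked loss (TensorFlow 2 sketch; the model
# interface (y_hat, A_recon, A_act) and the loss forms are assumptions).
import tensorflow as tf

alpha, beta = 1.0, 0.5                        # illustrative hyper-parameter weights
opt = tf.keras.optimizers.Adam(1e-3)          # Adam: momentum + adaptive step sizes

def train_step(model, inputs, labels):
    with tf.GradientTape() as tape:
        y_hat, A_recon, A_act = model(inputs)           # GCN labels, AE outputs
        loss_gcn = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(labels, y_hat))
        loss_ae = tf.reduce_mean(tf.square(A_act - A_recon))   # ||A - A_hat||^2
        loss = alpha * loss_gcn + beta * loss_ae
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))  # one mini-batch
    return loss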
The similarity recognition comprises the following steps:
S1: form triples: randomly select a sample x_p1 from the training sample set X, then randomly select a sample x_p2 formed by the same tool as x_p1 and a sample x_o formed by a different tool, thereby forming a triple T = {x_p1, x_p2, x_o};
f(x_i) denotes the embedding of x_i, and the dimension of the embedding layer is controlled by the size of the last layer of the network branch;
S2: triple selection and data enhancement: the four strategies of full contour, contour rearrangement, contour segmentation and patches are evaluated separately, and the most appropriate strategy is selected through actual testing to implicitly define the relevant features and the features to be suppressed by the convolutional neural network, avoiding situations in which weight sharing in the convolutional neural network causes all samples to be suppressed or samples to be distinguished only by local features;
S3: construct a graph convolution neural network based on triplet loss, formed by connecting three parallel convolutional neural networks to a triplet loss layer;
the distances between all samples are used, and Δ is achieved using the Softmax layer and the root mean square standard+While satisfying less than Δ1 +And is less than Delta*=min(Δ1 -2 -) To simplify the training sample selection process, the loss is defined as:
Figure BDA0002469801130000071
mixing L with2The norm is used for evaluating the distance between marks in the embedded layer, and the loss function is utilized to minimize the local difference value between the matched marks, so that the similarity calculation is completed.
The full-contour strategy in S2 uses random vertical cropping to increase sample variability during training, while center cropping is used for the similarity calculation. In the contour rearrangement of S2, the negative and positive samples are randomly permuted by the same factor, the permutation is applied simultaneously across the whole triple, and center cropping without rearrangement is used for the identity calculation. In the contour segmentation and random contour cropping of S2, the positive and negative samples are analyzed independently, and contour segments are used to pre-train the lower layers of the full-contour triple network to complete the identity calculation. In the patch strategy of S2, random blocks are cropped from the input contour, similarly to the contour segments, ensuring that the positive and negative samples do not overlap; horizontal flipping of the samples is performed randomly.
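As an illustration of these strategies, the following Python fragment applies random cropping with optional horizontal flipping to produce training views and deterministic center cropping to produce evaluation views; the one-dimensional contour representation and the 0.8 crop fraction are assumptions for illustration.

# Data-enhancement sketch: random crop + random horizontal flip for training,
# center crop for evaluation (1-D contour and crop fraction are assumptions).
import numpy as np

rng = np.random.default_rng(2)

def random_crop(contour, frac=0.8):
    n = int(len(contour) * frac)
    start = rng.integers(0, len(contour) - n + 1)
    return contour[start:start + n]                  # training-time variability

def center_crop(contour, frac=0.8):
    n = int(len(contour) * frac)
    start = (len(contour) - n) // 2
    return contour[start:start + n]                  # deterministic, for evaluation

def maybe_flip(contour, p=0.5):
    return contour[::-1] if rng.random() < p else contour  # horizontal flip

profile = np.sin(np.linspace(0.0, 20.0, 500))        # stand-in for a trace profile
train_view = maybe_flip(random_crop(profile))
eval_view = center_crop(profile)
print(train_view.shape, eval_view.shape)             # (400,) (400,)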
In S3, the graph convolution neural network based on triplet loss is constructed, the structure of the convolutional neural network is optimized and a ranking standard is established: batch normalization is performed after each convolution layer to reduce the network's dependence on input normalization and initialization; the convolution size, the number of feature maps and the pooling layer size are evaluated through empirical experiments to prevent overfitting; average pooling and the ReLU activation function are introduced to accelerate training and reduce the influence of vanishing gradients; optimization is performed by stochastic gradient descent; finally, similarity recognition is performed with the trained trace feature convolutional neural network model, a similarity matching ranking standard is constructed from the mean average precision and the receiver operating characteristic curve, and the classification and recognition results are comprehensively evaluated.
When the ReLU activation function is used for backpropagation, vanishing gradients can be avoided; the ReLU activation function sets the output of some neurons to 0, which makes the network sparse, reduces the interdependence of parameters and alleviates the overfitting problem, and its derivative is simple compared with the sigmoid and tanh activation functions. With sigmoid-type functions, computing the activation is expensive (exponential operations) and the derivative used when backpropagating the error gradient involves division, so the amount of computation is relatively large; adopting the ReLU activation function greatly reduces the computation of the whole process. As shown in FIG. 3, the ReLU activation function is:

f(x) = max(0, x)
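A short numeric illustration of this point follows: ReLU needs no exponential in the forward pass and its gradient is a 0/1 mask, while sigmoid requires an exponential and a division.

# ReLU (FIG. 3) versus sigmoid: forward values and gradients.
import numpy as np

def relu(x):      return np.maximum(0.0, x)
def relu_grad(x): return (x > 0).astype(float)        # cheap 0/1 mask
def sigmoid(x):   return 1.0 / (1.0 + np.exp(-x))     # exponential + division

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))                                        # [0.  0.  0.  1.5]
print(relu_grad(x))                                   # [0. 0. 0. 1.]
print(sigmoid(x) * (1.0 - sigmoid(x)))                # sigmoid's costlier gradient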
finally, it should be noted that: the above examples are only used to illustrate the technical solution of the present invention and not to limit it; although the present disclosure has been described in detail with reference to preferred embodiments, those of ordinary skill in the art will understand that: the specific embodiments of the present disclosure may be modified or equivalents may be substituted for elements thereof; without departing from the spirit of the present disclosure, it is intended to cover all such modifications and variations as fall within the true spirit and scope of the invention.

Claims (9)

1. A bullet trace similarity recognition method and system based on graph convolution neural network deep learning, characterized in that: the bullet trace similarity recognition method and system based on graph convolution neural network deep learning is used for criminal investigation, bullet trace detection and other scenes requiring trace comparison, and comprises a trace feature extraction and mapping step and a graph convolution neural network training and similarity recognition step.
2. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning according to claim 1, characterized in that the trace feature extraction and mapping step comprises the following steps:
S1: perform a single transverse scan of the cable-cutter shear cross-section trace to be inspected using a single-point laser trace detection device, obtaining a one-dimensional discretized sequence f(n), n = 0, 1, ..., N, where N is the number of sampling points;
S2: perform an m-layer multi-scale wavelet transform on f(n) to obtain wavelet components at different time scales:

f(n) = a_m(n) + Σ_{i=1}^{m} d_i(n)

where a_m is the approximation data of the m-th layer, d_i is the detail data of the i-th layer, and the scale S = 2^m;
S3: let diIs a parameterized profile curve of
Figure FDA0002469801120000012
Where D is the domain of determination of the parameterization,
Figure FDA0002469801120000013
for the real number set, define | | · | | as
Figure FDA0002469801120000014
Euclidean 2 norm of (a), defining a continuous mapping
Figure FDA0002469801120000015
Defining the shape of beta using a square root velocity function
Figure FDA0002469801120000016
Wherein:
Figure FDA0002469801120000017
for each one
Figure FDA0002469801120000018
There is a beta curve which can be defined by the square root velocity function of q, this curve passing through
Figure FDA0002469801120000019
Is obtained by
Figure FDA00024698011200000110
Scaling beta curves to unit length to achieve scale invarianceAnd (6) denaturation. To this end, the profile curve β is represented in the pre-shaped space
Figure FDA00024698011200000111
Unit of (1) hyper-sphere point xiWaiting for mapping into an embedded layer of the convolutional neural network;
s4: repeating the steps S11 and S12 on all M cable head-breaking clamp section detection traces to obtain the hypersphere points of the mapping unit of the corresponding profile curve, thereby forming a sample set X ═ { X ═1,x2,...xN}。
3. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning according to claim 1 or 2, characterized in that: the graph convolution neural network training includes: 1) establishing a training set; 2) tuning parameters and establishing the graph convolution neural network model. The specific method of 2) is as follows: denote the graph by G = (V, E), where V = {v_1, v_2, ..., v_N} represents the set of nodes and E represents the set of edges, i.e. E ⊆ V × V. The training model consists of two parts: 1) a GCN component responsible for sampling all node information within the K-order neighborhood; 2) an autoencoder (AE) component used to extract hidden features from the activation value matrix A learned by the GCN component and, combined with Laplacian eigenmaps (LE), to preserve the node cluster structure. Taking each node v_i ∈ V of the training model as the center, the GCN component uses the graph convolution neural network to sample the structure and feature information of all nodes within K steps, i.e. it encodes the K-order neighborhood information and, trained together with the labels of the nodes, generates the activation value matrix A used as the input of the autoencoder component. Through supervised learning based on node labels, the GCN can simultaneously encode the local structure and feature information of the network, omitting secondary structural information outside the K-order neighborhood that has little influence on the generated low-dimensional node vectors. The activation value matrix A learned by the GCN is used as the input of the autoencoder, which further extracts feature information from A in an unsupervised learning manner and, combined with Laplacian eigenmaps, maps the original network to a lower-dimensional space.
4. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning according to claim 1 or 2, characterized in that: the similarity recognition comprises the following steps:
S1: form triples: randomly select a sample x_p1 from the training sample set X, then randomly select a sample x_p2 formed by the same tool as x_p1 and a sample x_o formed by a different tool, thereby forming a triple T = {x_p1, x_p2, x_o}; f(x_i) denotes the embedding of x_i, and the dimension of the embedding layer is controlled by the size of the last layer of the network branch;
S2: triple selection and data enhancement: the four strategies of full contour, contour rearrangement, contour segmentation and patches are evaluated separately, and the most appropriate strategy is selected through actual testing to implicitly define the relevant features and the features to be suppressed by the convolutional neural network, avoiding situations in which weight sharing in the convolutional neural network causes all samples to be suppressed or samples to be distinguished only by local features;
S3: construct a graph convolution neural network based on triplet loss, formed by connecting three parallel convolutional neural networks to a triplet loss layer;
the distances between all samples are used; the Softmax layer and the mean-square criterion are applied so that the positive-pair distance Δ⁺ = ‖f(x_p1) − f(x_p2)‖ remains smaller than Δ* = min(Δ₁⁻, Δ₂⁻), where Δ₁⁻ = ‖f(x_p1) − f(x_o)‖ and Δ₂⁻ = ‖f(x_p2) − f(x_o)‖; to simplify the training sample selection process, the loss is defined over the softmax-normalized distances (δ⁺, δ⁻) = softmax(Δ⁺, Δ*) as ℒ(T) = ‖(δ⁺, δ⁻ − 1)‖₂²;
the L₂ norm is used to evaluate the distance between marks in the embedding layer, and the loss function is used to minimize the local difference between matched marks, thereby completing the similarity calculation.
5. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning according to claim 4, characterized in that: the full-contour strategy in S2 uses random vertical cropping to increase sample variability during training, while center cropping is used for the similarity calculation.
6. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning according to claim 4, characterized in that: in the contour rearrangement of S2, the negative and positive samples are randomly permuted by the same factor, the permutation is applied simultaneously across the whole triple, and center cropping without rearrangement is used for the identity calculation.
7. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning according to claim 4, characterized in that: in the contour segmentation and random contour cropping of S2, the positive and negative samples are analyzed independently, and contour segments are used to pre-train the lower layers of the full-contour triple network to complete the identity calculation.
8. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning according to claim 4, characterized in that: in the patch strategy of S2, random blocks are cropped from the input contour, similarly to the contour segments, ensuring that the positive and negative samples do not overlap; horizontal flipping of the samples is performed randomly.
9. The bullet trace similarity recognition method and system based on graph convolution neural network deep learning according to any one of claims 5, 6, 7 and 8, characterized in that: in S3, the graph convolution neural network based on triplet loss is constructed, the structure of the convolutional neural network is optimized and a ranking standard is established: batch normalization is performed after each convolution layer to reduce the network's dependence on input normalization and initialization; the convolution size, the number of feature maps and the pooling layer size are evaluated through empirical experiments to prevent overfitting; average pooling and the ReLU activation function are introduced to accelerate training and reduce the influence of vanishing gradients; optimization is performed by stochastic gradient descent; finally, similarity recognition is performed with the trained trace feature convolutional neural network model, a similarity matching ranking standard is constructed from the mean average precision and the receiver operating characteristic curve, and the classification and recognition results are comprehensively evaluated.
CN202010345147.XA 2020-04-27 2020-04-27 Bullet trace similarity recognition method and system based on graph convolution neural network deep learning Pending CN112381108A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010345147.XA CN112381108A (en) 2020-04-27 2020-04-27 Bullet trace similarity recognition method and system based on graph convolution neural network deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010345147.XA CN112381108A (en) 2020-04-27 2020-04-27 Bullet trace similarity recognition method and system based on graph convolution neural network deep learning

Publications (1)

Publication Number Publication Date
CN112381108A true CN112381108A (en) 2021-02-19

Family

ID=74586308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010345147.XA Pending CN112381108A (en) 2020-04-27 2020-04-27 Bullet trace similarity recognition method and system based on graph convolution neural network deep learning

Country Status (1)

Country Link
CN (1) CN112381108A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744238A (en) * 2021-09-01 2021-12-03 南京工业大学 Method for establishing bullet trace database
CN113806547A (en) * 2021-10-15 2021-12-17 南京大学 Deep learning multi-label text classification method based on graph model

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2009124396A * 2009-06-23 2010-12-27 Закрытое акционерное общество "Лазерные диагностические инструменты-Русприбор" (RU) Method for automatic recognition of firearm traces on the side-surface image of a bullet (or cartridge case)
CN111639664A (en) * 2020-04-07 2020-09-08 昆明理工大学 Line trace batch comparison system based on multi-strategy mode

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2009124396A * 2009-06-23 2010-12-27 Закрытое акционерное общество "Лазерные диагностические инструменты-Русприбор" (RU) Method for automatic recognition of firearm traces on the side-surface image of a bullet (or cartridge case)
CN111639664A (en) * 2020-04-07 2020-09-08 昆明理工大学 Line trace batch comparison system based on multi-strategy mode

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
NAN PAN et al.: "A Study of the Shearing Section Trace Matching Technology Based on Elastic Shape Metric and Deep Learning", Sensors and Materials *
O. GIUDICE et al.: "Siamese Ballistics Neural Network", 2019 IEEE International Conference on Image Processing (ICIP) *
PAN NAN et al.: "Research on a fast tracing algorithm for wavelet-domain features of nonlinear line traces", Journal of Electronic Measurement and Instrumentation *
WANG JIE et al.: "Semi-supervised network representation learning model based on graph convolutional network and autoencoder", Pattern Recognition and Artificial Intelligence *
CHENG LIN: "Research on tool mark image recognition based on wavelet transform", Journal of Chuzhou University *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744238A (en) * 2021-09-01 2021-12-03 南京工业大学 Method for establishing bullet trace database
CN113744238B (en) * 2021-09-01 2023-08-01 南京工业大学 Method for establishing bullet trace database
CN113806547A (en) * 2021-10-15 2021-12-17 南京大学 Deep learning multi-label text classification method based on graph model
CN113806547B (en) * 2021-10-15 2023-08-11 南京大学 Deep learning multi-label text classification method based on graph model

Similar Documents

Publication Publication Date Title
US20210049423A1 (en) Efficient image classification method based on structured pruning
Lu et al. Object detection based on SSD-ResNet
CN108921019B (en) Gait recognition method based on GEI and TripletLoss-DenseNet
CN109002755B (en) Age estimation model construction method and estimation method based on face image
CN108875933B (en) Over-limit learning machine classification method and system for unsupervised sparse parameter learning
CN110097060B (en) Open set identification method for trunk image
CN112528928B (en) Commodity identification method based on self-attention depth network
CN105913081B (en) SAR image classification method based on improved PCAnet
CN110188827B (en) Scene recognition method based on convolutional neural network and recursive automatic encoder model
CN109871749B (en) Pedestrian re-identification method and device based on deep hash and computer system
CN111259917B (en) Image feature extraction method based on local neighbor component analysis
CN109871379B (en) Online Hash nearest neighbor query method based on data block learning
CN111273288B (en) Radar unknown target identification method based on long-term and short-term memory network
CN112381108A (en) Bullet trace similarity recognition method and system based on graph convolution neural network deep learning
CN113011243A (en) Facial expression analysis method based on capsule network
CN110991554B (en) Improved PCA (principal component analysis) -based deep network image classification method
Mamatkulovich Lightweight residual layers based convolutional neural networks for traffic sign recognition
CN115131558A (en) Semantic segmentation method under less-sample environment
CN111310820A (en) Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration
CN108388918B (en) Data feature selection method with structure retention characteristics
CN108496174B (en) Method and system for face recognition
Husain et al. Face recognition method based on residual convolution neural network
CN113297964A (en) Video target recognition model and method based on deep migration learning
CN111401434A (en) Image classification method based on unsupervised feature learning
CN115393631A (en) Hyperspectral image classification method based on Bayesian layer graph convolution neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210219

RJ01 Rejection of invention patent application after publication