CN113744238B - Method for establishing bullet trace database - Google Patents

Method for establishing bullet trace database

Info

Publication number
CN113744238B
CN113744238B (application CN202111020231.5A)
Authority
CN
China
Prior art keywords
trace
picture
bullet
neural network
false
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111020231.5A
Other languages
Chinese (zh)
Other versions
CN113744238A (en)
Inventor
张浩
肖永飞
诸嘉翎
耿乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Tech University
Original Assignee
Nanjing Tech University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Tech University filed Critical Nanjing Tech University
Priority to CN202111020231.5A
Publication of CN113744238A
Application granted
Publication of CN113744238B
Legal status: Active


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration by the use of local operators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Abstract

The invention discloses a method for establishing a bullet trace database, belonging to the technical field of bullet trace identification. The method comprises the following steps: acquiring bullet trace pictures and applying sampling and filtering to obtain trace picture samples; building a deep convolutional generative adversarial network (DCGAN) model and initializing its training parameters; constructing a residual feedback module and applying it to the discriminator D; selecting an objective function; taking the trace picture samples as input and performing iterative training to generate false trace pictures; building a Siamese network, comparing each generated false trace picture with the original picture, and judging whether it can serve as a real bullet trace picture; the false trace pictures that qualify as real bullet trace pictures form the bullet trace database. The method overcomes the shortcomings of traditional bullet trace picture acquisition and saves the firearm and ammunition resources that such acquisition requires.

Description

Method for establishing bullet trace database
Technical Field
The invention relates to the field of bullet trace identification, and in particular to a method for establishing a bullet trace database based on a deep convolutional generative adversarial network (DCGAN).
Background
Firearm-related cases often involve serious violent crime and pose a grave danger to society, so they must receive due attention. A bullet trace is defined as follows: while a bullet is being fired, the extrusion between the firearm and the bullet leaves various marks on the bullet and cartridge, such as rifling marks, shell-bottom pits and firing-pin impressions. These marks are unique: every firearm leaves its own signature on the ammunition it fires, so comparison of bullet traces can serve as important evidence in case investigation. Because firearm and bullet structures are diverse and complex, building a bullet trace database with detailed content and wide coverage is of great significance, as it supports firearm identification and ammunition examination in gun-related cases. Establishing such a database requires a large number of data samples. Building a traditional bullet trace picture database consumes large quantities of ammunition during data acquisition, and since the amount of trace data to be collected and the number of samples required for each trace type are both enormous, the database is difficult to complete through acquisition alone.
Disclosure of Invention
In order to overcome the shortcomings of traditional bullet trace picture collection and save the firearm and ammunition resources that such collection requires, a method for establishing a bullet trace database based on a deep convolutional generative adversarial network (DCGAN) is provided. The method can provide evidence in firearm crimes, assist in combating illegal activity, and help maintain national public safety and social stability.
To achieve the above object, the present invention provides a method for establishing a bullet trace database, comprising:
step S1: acquiring bullet trace pictures, and sampling and filtering them to obtain trace picture samples;
step S2: creating a deep convolutional generative adversarial network (DCGAN) model, wherein the model comprises a generator G and a discriminator D, and presetting the training parameters of the generator G and the discriminator D;
step S3: constructing a residual feedback module and applying it to the discriminator D, wherein a signal generated by the residual feedback module can be fed back to the generator G;
step S4: selecting an objective function;
step S5: taking the trace picture samples as the input of the DCGAN model and training it iteratively to obtain a trained DCGAN model and generated false trace pictures;
step S6: constructing a Siamese network, inputting the generated false trace picture and the original bullet trace picture into the Siamese network, comparing their output results and losses, and judging whether the generated false trace picture can serve as a real bullet trace picture;
step S7: forming the bullet trace database from the false trace pictures that can serve as real bullet trace pictures.
Preferably, in step S1, the sampling and filtering process comprises: acquiring the shell-bottom trace with a three-dimensional confocal microscope, removing the surrounding irrelevant area and keeping the region where the central features are concentrated, applying two-dimensional Gaussian regression filtering to the shell-bottom pit surface, and extracting the microscopic morphological features to obtain the bullet trace picture; the two-dimensional Gaussian regression filtering formula is as follows:

min_{s(x,y)} ∬ ρ(t(ξ, η) − s(x, y)) S(ξ − x, η − y) dξ dη

wherein t(ξ, η) is the input surface and ξ, η are the abscissa and ordinate in the t-surface coordinate system; s(x, y) is the filtered output surface and x, y are the abscissa and ordinate in the s-surface coordinate system; S is the Gaussian weighting function; and ρ(r) = r² is the square term;
the error is minimized by the zero-order least-squares method, a data enhancement algorithm is then applied to the bullet trace pictures to expand the small number of trace pictures obtained, and all the processed bullet trace pictures are taken as the trace picture samples.
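By way of illustration, the following is a minimal numerical sketch of the zero-order case: with the square term ρ(r) = r², minimizing the weighted error over a locally constant model reduces to a normalized Gaussian-weighted mean. The SciPy-based implementation and the sigma value are assumptions, not taken from the patent.

```python
# Minimal sketch of a zero-order two-dimensional Gaussian regression filter.
# With rho(r) = r^2 and a locally constant model, the least-squares solution
# is the normalized Gaussian-weighted mean of the input surface.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_regression_filter(t: np.ndarray, sigma: float = 5.0) -> np.ndarray:
    """Return the filtered surface s(x, y) for an input surface t(xi, eta).

    sigma is an assumed smoothing scale; the patent does not state the
    weighting-function parameters.
    """
    ones = np.ones_like(t)
    # Normalizing by the filtered weight field removes the bias that plain
    # convolution would introduce near the surface boundary.
    num = gaussian_filter(t, sigma=sigma, mode="constant", cval=0.0)
    den = gaussian_filter(ones, sigma=sigma, mode="constant", cval=0.0)
    return num / den
```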
Preferably, in step S2, the generator G is the neural network that turns the initial noise input into a generated false trace picture and uses transposed convolution for upsampling; the discriminator D is the neural network that distinguishes trace picture samples from false trace pictures, and it uses strided convolution layers instead of pooling layers;
the core formula between the generator G and the discriminator D is:

min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 − D(G(z)))]

wherein E_{x~p_data(x)} denotes taking real samples from the training samples x, while E_{z~p_z(z)} denotes samples drawn from the known noise distribution.
Preferably, in step S3, the residual feedback module is specifically:
during training of the DCGAN model, the input data x_i is transformed once by the weight W_1 and the activation function σ to give the input of the second-layer neurons, y_1 = σ(W_1 x_i), and then by the weight W_2 and the activation function σ to give the input of the third-layer neurons, y_2 = σ(W_2 y_1); the residual feedback module instead uses y_2 = σ(W_2 y_1) + x_i as the input of the third-layer neurons; when the input and output dimensions differ, a dimension transformation is needed and y_2 = σ(W_2 y_1) + W_s x_i is used as the input of the third-layer neurons, where W_s changes the number of channels;
the residual feedback module is added and reused in the discriminator D, and the output Y_final of the final bottom-feature convolution layer is fed back to the initial upsampling layer of the generator G.
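A minimal TensorFlow/Keras sketch of such a residual unit is given below; the 3×3 kernel sizes and the LeakyReLU slope are assumptions, while the 1×1 convolution plays the role of the projection W_s when the input and output dimensions differ.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters: int, stride: int = 1):
    """Residual unit as in step S3: y_2 = sigma(W_2 y_1) + x_i,
    with a projection W_s when the shapes differ."""
    y = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)  # W_1
    y = layers.LeakyReLU(0.2)(y)                                      # sigma
    y = layers.Conv2D(filters, 3, padding="same")(y)                  # W_2
    shortcut = x
    if stride != 1 or x.shape[-1] != filters:
        # W_s: 1x1 convolution that changes the channel count / resolution
        shortcut = layers.Conv2D(filters, 1, strides=stride)(x)
    return layers.LeakyReLU(0.2)(layers.add([y, shortcut]))
```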
Preferably, in step S4: in the DCGAN model, the generator G uses the ReLU function as the activation function σ, and the final output layer of the generator is activated with the tanh function; the discriminator D uses the LeakyReLU function as the activation function throughout.
Preferably, in step S5, the iterative training is specifically:
the trace picture samples are used as input data of the DCGAN model, which is optimized with a cross-entropy loss function and back-propagation; the false trace picture and loss value generated in each epoch are recorded, and the features of the last convolution layer of the discriminator D are extracted, output and stored; the generated false trace pictures are compared with the trace picture samples used as input data, and the parameters of the initial convolution layer of the generator G are adjusted, yielding the trained DCGAN model.
Preferably, in step S6:
a Siamese network is built, comprising a network A and a network B; the original bullet trace picture and the generated false trace picture are put into network A and network B respectively, a mapping function converts each picture into a feature space so that each picture corresponds to a feature vector, and when the distance between the feature vectors of the generated false trace picture and the original bullet trace picture satisfies the minimum Euclidean distance formula, the false trace picture generated by the DCGAN can be used as a bullet trace picture;
the Euclidean distance formula is as follows:

min(E_W(X1, X2)) = ||G_W(X1) − G_W(X2)||,

wherein X1 is the original bullet trace picture, X2 is the generated false trace picture, G_W is the model mapping function, and E_W is the Euclidean distance.
With this method, a DCGAN with residual feedback generates false trace pictures that can serve as bullet trace pictures, expanding the bullet trace database and overcoming the tedious, time-consuming and labor-intensive nature of traditional bullet trace data acquisition; the constructed residual feedback module passes the distinctive characteristics of the bullet trace data to the generator G, helping it to generate false trace pictures usable as bullet trace pictures more accurately, so that the database can be built efficiently.
Drawings
FIG. 1 is a flow chart of the deep convolutional generative adversarial network model of the present invention.
Fig. 2 is a schematic diagram of a residual feedback module according to the present invention.
Fig. 3 is a structural diagram of the discriminator D in the deep convolutional generative adversarial network model of the present invention.
Fig. 4 is a schematic diagram of a Siamese Network structure according to the present invention.
Fig. 5 is a photograph of an original bullet trace.
Fig. 6 is a trace picture sample after enhancement by a data enhancement algorithm.
Detailed Description
Embodiments of the technical scheme of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and thus are merely examples, and are not intended to limit the scope of the present invention.
It is noted that unless otherwise indicated, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this invention pertains.
In the description of the present application, it should be understood that the terms "center," "longitudinal," "transverse," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," etc. indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, are merely for convenience in describing the present invention and to simplify the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be configured and operated in a particular orientation, and therefore should not be construed as limiting the present invention.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. In the description of the present invention, the meaning of "plurality" is two or more unless specifically defined otherwise.
In this application, unless specifically stated and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly, and may be, for example, a fixed connection, a detachable connection, or an integral formation; a mechanical or electrical connection; a direct connection or an indirect connection through an intermediate medium; an internal communication between two elements; or an interaction between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances.
In this application, unless expressly stated or limited otherwise, a first feature "on" or "under" a second feature may mean that the first and second features are in direct contact, or that they are in indirect contact via an intervening medium. Moreover, a first feature being "above," "over" or "on" a second feature may mean that the first feature is directly or obliquely above the second feature, or simply that the first feature is at a higher level than the second feature. A first feature being "under," "below" or "beneath" a second feature may mean that the first feature is directly or obliquely below the second feature, or simply that the first feature is at a lower level than the second feature.
Examples
The embodiment provides a method for establishing a bullet trace database, which specifically comprises the following steps:
firstly, acquiring bullet bottom nest marks of a bullet, sampling the bullet by using a three-dimensional confocal microscope, and generating bullet bottom mark pictures; processing the trace picture in matlab to obtain a three-dimensional morphology picture of the bullet; removing the impact trace of the shooting pin in the center of the picture, and reserving a bullet bottom nest trace feature centralized area; and then carrying out two-dimensional Gaussian regression filtering treatment on the surface picture of the bullet bottom nest, extracting microscopic morphological features, and finally obtaining a trace picture with the size of 224 multiplied by 224. The bottom pit trace is a trace reflecting the unused structural characteristics of the ground of the shell and the ground of the shell of the machine gun under the action of the gunpowder gas pressure.
Data enhancement code implementing the data enhancement algorithm is written in a Python and TensorFlow environment to expand the trace pictures obtained. The data enhancement algorithm comprises: rotating the trace picture by 30 degrees, shifting it horizontally by a proportion of 0.2, shifting it vertically by a proportion of 0.2, flipping it horizontally, and finally filling the exposed empty regions with pixels. The enhanced trace pictures are taken as the whole sample data set, i.e. the trace picture samples; a sketch of such a pipeline follows.
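A minimal sketch of this augmentation pipeline, assuming a Keras ImageDataGenerator and a hypothetical directory `trace_pictures/`; `fill_mode="nearest"` is an assumed reading of "filling the empty region with pixels".

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation parameters follow the embodiment text.
datagen = ImageDataGenerator(
    rotation_range=30,       # rotate by up to 30 degrees
    width_shift_range=0.2,   # horizontal shift, proportion 0.2
    height_shift_range=0.2,  # vertical shift, proportion 0.2
    horizontal_flip=True,    # horizontal flip
    fill_mode="nearest",     # fill pixels exposed by the transforms
)

# Hypothetical directory layout: one folder of trace pictures, no labels.
flow = datagen.flow_from_directory(
    "trace_pictures/", target_size=(224, 224), batch_size=20, class_mode=None
)
```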
The deep convolutional generative adversarial network model is then constructed; it comprises a generator G (Generator) and a discriminator D (Discriminator). The network composition of the generator G is shown in Table 1.
Table 1 Network composition of the generator G
The step size defaults to 1. The points in the latent space are sampled from a normal distribution, rather than a uniform distribution, to obtain the noise. The network input of the generator G is noise of size 112×1; the shape of the input data is changed through a fully connected layer and a Reshape layer, and the data is then processed through convolution and upsampling layers. In this process the ReLU function is used throughout as the activation function, and the last layer is activated by the tanh function, generating a false bullet picture of size 224×224×3. Apart from the input layer, no fully connected layer is used in the generator G; convolution and upsampling layers take its place, while batch normalization operations are used to help the gradient flow.
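Since Table 1 is not reproduced here, the following Keras sketch of the generator G is an assumption consistent with the text: 112-dimensional noise in, Dense + Reshape, convolution and upsampling layers with ReLU and batch normalization, and a tanh output of size 224×224×3. The intermediate channel counts are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_generator(noise_dim: int = 112):
    """Sketch of generator G; layer widths are assumed, not from Table 1."""
    return models.Sequential([
        layers.Input((noise_dim,)),
        layers.Dense(56 * 56 * 128),            # fully connected layer
        layers.Reshape((56, 56, 128)),          # Reshape layer
        layers.Conv2D(256, 3, padding="same"),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.UpSampling2D(),                  # 56 -> 112
        layers.Conv2D(128, 3, padding="same"),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.UpSampling2D(),                  # 112 -> 224
        layers.Conv2D(3, 3, padding="same", activation="tanh"),  # 224x224x3
    ])

# Noise is drawn from a normal distribution, as the embodiment states:
# noise = tf.random.normal([batch_size, 112])
```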
The network composition of the discriminator D is shown in table 2.
Table 2 Network composition of the discriminator D

Layer | Size | Stride / padding | Activation function | Output size
Input | 224×224×3 | | | 224×224×3
Conv1-1 | 1×1, 128 | | LeakyReLU | 224×224×128
Conv1-2 | 2×2, 128 | 2 / no padding | LeakyReLU | 112×112×128
Conv2-1 | 1×1, 128 | | LeakyReLU | 112×112×128
Conv2-2 | 4×4, 128 | 2 / no padding | LeakyReLU | 55×55×128
Conv3 | 4×4, 256 | 2 / no padding | LeakyReLU | 26×26×256
Conv4 | 2×2, 256 | 2 / no padding | LeakyReLU | 13×13×256
Conv5 | 2×2, 256 | 2 / no padding | LeakyReLU | 6×6×256
Conv6 | 2×2, 512 | 2 / no padding | LeakyReLU | 3×3×512
Fullcon | 4608 | | sigmoid | 4608×1
The step size defaults to 1. No pooling operation is employed throughout the discriminator D; strided convolution layers take its place. To reduce the number of operations, the convolution process uses convolution kernels of size 1×1, helping the discriminator D generate feature vectors faster. In the discriminator D, to prevent gradient sparsity, the LeakyReLU function is used instead of the ReLU function; the LeakyReLU function is similar to the ReLU function but allows a small negative activation value, thereby relaxing the sparsity constraint. The judgment of the fully connected layer is processed with a sigmoid function. A sketch of a discriminator with these layer sizes follows.
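The following Keras sketch reproduces the layer sizes of Table 2, omitting the residual feedback connections (see the residual_block sketch above). 'valid' padding, the Conv2D default, yields the tabulated output sizes, and the flattened 3×3×512 feature gives the 4608-dimensional vector ahead of the sigmoid output; ending with Dense(1) is an assumed reading of the table's final row.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_discriminator():
    """Discriminator D following Table 2 (no residual feedback shown)."""
    return models.Sequential([
        layers.Input((224, 224, 3)),
        layers.Conv2D(128, 1), layers.LeakyReLU(0.2),             # Conv1-1
        layers.Conv2D(128, 2, strides=2), layers.LeakyReLU(0.2),  # Conv1-2 -> 112
        layers.Conv2D(128, 1), layers.LeakyReLU(0.2),             # Conv2-1
        layers.Conv2D(128, 4, strides=2), layers.LeakyReLU(0.2),  # Conv2-2 -> 55
        layers.Conv2D(256, 4, strides=2), layers.LeakyReLU(0.2),  # Conv3 -> 26
        layers.Conv2D(256, 2, strides=2), layers.LeakyReLU(0.2),  # Conv4 -> 13
        layers.Conv2D(256, 2, strides=2), layers.LeakyReLU(0.2),  # Conv5 -> 6
        layers.Conv2D(512, 2, strides=2), layers.LeakyReLU(0.2),  # Conv6 -> 3
        layers.Flatten(),                       # 3 * 3 * 512 = 4608
        layers.Dense(1, activation="sigmoid"),  # real/fake judgment
    ])
```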
As shown in fig. 2, the residual feedback module is constructed and applied to the discriminator D.
In the discriminator D, the residual feedback module is employed repeatedly:
the convolution layer Conv2-1 outputs a feature vector x_1 of size 112×112×128, which is transformed according to

x'_1 = W_s x_1

into a feature vector x'_1 of size 55×55×128; this is added to the feature vector y_1 of size 55×55×128 output by the convolution layer Conv2-2, and the sum is used as the input data y'_2 of the third convolution layer Conv3. The specific formulas are:

y_1 = σ(W_1 x_1)

y'_2 = σ(W_2 y_1) + x'_1
The residual feedback modules are used repeatedly in the convolution layers Conv3, Conv4 and Conv5, making the feature extraction of the picture more accurate and improving the discrimination ability of the discriminator D.
In the network of the discriminator D, the residual feedback module feeds the feature vector generated by the convolution layer Conv2-2 back as a signal to the convolution layer Conv1 in the generator G network. The 55×55×128 feature vector output by the convolution layer Conv2-2 is processed by a convolution layer (1×1, 256) to obtain a feedback signal of size 55×55×256.
The 55×55×256 feature vector generated by the discriminator D and the 56×56×256 feature vector generated by the generator G are combined and used as the input of the upsampling layer Upsampling1, making the generated picture more realistic.
The residual feedback module therefore both avoids the drop in network accuracy that increased depth would cause in the discriminator D and feeds the main features of the trace picture back to the generator G, improving the performance of the deep convolutional generative adversarial network.
Setting the hyper-parameters: network training uses a stochastic gradient descent (SGD) algorithm that optimizes D and G alternately. The learning rate is 0.008; to stabilize the training process, a learning rate decay of decay = 10⁻⁸ is used; the batch size is batch_size = 20; the number of training epochs is epoch = 300. A sketch of this configuration follows.
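A minimal sketch of this configuration; pairing each network with its own SGD instance is an assumption, and in recent TensorFlow versions the `decay` argument lives on the legacy Keras SGD class.

```python
import tensorflow as tf

# Hyper-parameters as stated in the embodiment; one optimizer per network
# so that D and G can be optimized alternately (an assumption).
d_optimizer = tf.keras.optimizers.legacy.SGD(learning_rate=0.008, decay=1e-8)
g_optimizer = tf.keras.optimizers.legacy.SGD(learning_rate=0.008, decay=1e-8)
BATCH_SIZE = 20
EPOCHS = 300
```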
The trace picture samples are taken as input data, the constructed DCGAN model is trained iteratively, and a cross-entropy loss function is adopted so that the initial parameters of the generator G and the discriminator D in the DCGAN model are continuously updated and optimized.
When training D, the value of the model V(G, D) should be as large as possible, so the parameters are updated with W_ij ← W_ij + ΔW_ij; when training G, the smaller the value of V(G, D) the better, and the parameters are updated with W_ij ← W_ij − ΔW_ij.
The false trace picture and loss value generated in each epoch are recorded, and the features of the final convolution layer of the discriminator D are extracted, output and stored. The generated false trace pictures are compared with the trace picture samples used as input data, and the parameters of the initial convolution layer of the generator G are adjusted, yielding a well-trained deep convolutional generative adversarial network model and the false trace pictures.
The generator G and the discriminator D constrain each other and feed back to each other, forming a dynamic game process; the core formula between the two is:

min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 − D(G(z)))]

wherein E_{x~p_data(x)} denotes taking real samples from the training samples x, while E_{z~p_z(z)} denotes samples drawn from the known noise distribution.
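A minimal sketch of one alternating training step under this objective, reusing the `generator`, `discriminator`, optimizers and `BATCH_SIZE` sketched above. It uses the common non-saturating reformulation, in which G is trained to make D label generated pictures as real; this plays the same game but through the binary cross-entropy loss described in the embodiment.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

@tf.function
def train_step(real_images):
    """One alternating update: raise V(G, D) w.r.t. D, lower it w.r.t. G."""
    noise = tf.random.normal([BATCH_SIZE, 112])  # normally distributed noise
    # Discriminator step: real pictures labelled 1, generated pictures 0.
    with tf.GradientTape() as tape:
        fake_images = generator(noise, training=True)
        real_out = discriminator(real_images, training=True)
        fake_out = discriminator(fake_images, training=True)
        d_loss = bce(tf.ones_like(real_out), real_out) + \
                 bce(tf.zeros_like(fake_out), fake_out)
    d_grads = tape.gradient(d_loss, discriminator.trainable_variables)
    d_optimizer.apply_gradients(zip(d_grads, discriminator.trainable_variables))
    # Generator step: try to make D label generated pictures as real.
    with tf.GradientTape() as tape:
        fake_out = discriminator(generator(noise, training=True), training=True)
        g_loss = bce(tf.ones_like(fake_out), fake_out)
    g_grads = tape.gradient(g_loss, generator.trainable_variables)
    g_optimizer.apply_gradients(zip(g_grads, generator.trainable_variables))
    return d_loss, g_loss  # recorded per epoch, as the embodiment describes
```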
Finally, a Siamese network is constructed, as shown in Table 3; it comprises a twin network A and network B.
Table 3 Network composition of the Siamese network
Two pictures are taken at a time: one original bullet trace picture, and one false trace picture generated by the deep convolutional generative adversarial network. The two pictures are put into the twin networks A and B respectively and feature-processed, finally giving two one-dimensional feature vectors; the Euclidean distance formula is then applied:

min(E_W(X_A, X_B)) = ||G_W(X_A) − G_W(X_B)||

wherein X_A is the original bullet trace picture, X_B is the generated false trace picture, G_W is the model mapping function, and E_W is the Euclidean distance.
When the distance between the feature vectors of the generated false trace picture and the original bullet trace picture satisfies the minimum Euclidean distance formula, the false trace picture generated by the deep convolutional generative adversarial network is proven usable as a bullet trace; a sketch of this comparison follows.
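A minimal sketch of this comparison, assuming `net_a` and `net_b` are the twin Keras feature networks; the numeric acceptance threshold is an assumption, since the patent only requires the feature-vector distance to satisfy the minimum Euclidean distance formula.

```python
import numpy as np

def euclidean_distance(net_a, net_b, picture_a, picture_b) -> float:
    """E_W(X_A, X_B) = ||G_W(X_A) - G_W(X_B)|| computed on the
    one-dimensional feature vectors produced by the twin networks."""
    g_a = net_a.predict(picture_a[np.newaxis, ...], verbose=0).ravel()
    g_b = net_b.predict(picture_b[np.newaxis, ...], verbose=0).ravel()
    return float(np.linalg.norm(g_a - g_b))

# Assumed acceptance rule: a generated picture joins the database when its
# distance to the original falls below a chosen threshold.
THRESHOLD = 1.0

def accept_into_database(distance: float) -> bool:
    return distance < THRESHOLD
```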
In the end, false trace pictures that can serve as real bullet trace pictures are generated by the deep convolutional generative adversarial network, which effectively supports the generation and supplementation of bullet trace data, reduces the cost of bullet trace data acquisition, saves time, and completes the database construction efficiently.

Claims (3)

1. A method for establishing a bullet trace database, the method comprising:
step S1: acquiring bullet trace pictures, and sampling and filtering them to obtain trace picture samples;
step S2: creating a deep convolutional generative adversarial network model, wherein the model comprises a generator G and a discriminator D, and presetting the training parameters of the generator G and the discriminator D;
step S3: constructing a residual feedback module and applying it to the discriminator D, wherein a signal generated by the residual feedback module can be fed back to the generator G;
step S4: selecting an objective function;
step S5: taking the trace picture samples as the input of the deep convolutional generative adversarial network model and training it iteratively to obtain a trained deep convolutional generative adversarial network model and generated false trace pictures;
step S6: constructing a Siamese network, inputting the generated false trace picture and the original bullet trace picture into the Siamese network, comparing their output results and losses, and judging whether the generated false trace picture can serve as a real bullet trace picture;
step S7: forming the bullet trace database from the false trace pictures that can serve as real bullet trace pictures;
in step S1, the sampling and filtering process comprises: acquiring the shell-bottom trace with a three-dimensional confocal microscope, removing the surrounding irrelevant area and keeping the region where the central features are concentrated, applying two-dimensional Gaussian regression filtering to the shell-bottom pit surface, and extracting the microscopic morphological features to obtain the bullet trace picture; the two-dimensional Gaussian regression filtering formula is as follows:

min_{s(x,y)} ∬ ρ(t(ξ, η) − s(x, y)) S(ξ − x, η − y) dξ dη

wherein t(ξ, η) is the input surface and ξ, η are the abscissa and ordinate in the t-surface coordinate system; s(x, y) is the filtered output surface and x, y are the abscissa and ordinate in the s-surface coordinate system; S is the Gaussian weighting function; and ρ(t(ξ, η) − s(x, y)) is the square term of t(ξ, η) − s(x, y);
the error is minimized by the zero-order least-squares method, a data enhancement algorithm is then applied to the bullet trace pictures to expand the small number of trace pictures obtained, and all the processed bullet trace pictures are taken as the trace picture samples;
in step S2, the generator G is the neural network that turns the initial noise input into a generated false trace picture, and the generator G uses transposed convolution for upsampling; the discriminator D is the neural network that distinguishes trace picture samples from false trace pictures, and it uses strided convolution layers instead of pooling layers;
the core formula between the generator G and the discriminator D is:

min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 − D(G(z)))]

wherein E_{x~p_data(x)} denotes taking real samples from the training samples x, E_{z~p_z(z)} denotes samples drawn from the known noise distribution, and V(D, G) is the objective of the deep convolutional generative adversarial network model;
in step S3, the residual feedback module is specifically:
during training of the deep convolutional generative adversarial network model, the input data x_i is transformed once by the weight W_1 and the activation function σ to give the input of the second-layer neurons, y_1 = σ(W_1 x_i), and then by the weight W_2 and the activation function σ to give the input of the third-layer neurons, y_2 = σ(W_2 y_1); the residual feedback module instead uses y_2 = σ(W_2 y_1) + x_i as the input of the third-layer neurons; when the input and output dimensions differ, a dimension transformation is needed and y_2 = σ(W_2 y_1) + W_s x_i is used as the input of the third-layer neurons, where W_s changes the number of channels;
the residual feedback module is added and reused in the discriminator D, and the output Y_final of the final bottom-feature convolution layer is fed back to the initial upsampling layer of the generator G;
in step S6:
a Siamese network is built, comprising a network A and a network B; the original bullet trace picture and the generated false trace picture are put into network A and network B respectively, a mapping function converts each picture into a feature space so that each picture corresponds to a feature vector, and when the distance between the feature vectors of the generated false trace picture and the original bullet trace picture satisfies the minimum Euclidean distance formula, the false trace picture generated by the deep convolutional generative adversarial network can be used as a bullet trace picture;
the Euclidean distance formula is as follows:

min(E_W(X1, X2)) = ||G_W(X1) − G_W(X2)||,

wherein X1 is the original bullet trace picture, X2 is the generated false trace picture, G_W is the model mapping function, and E_W is the Euclidean distance.
2. The method for establishing a bullet trace database according to claim 1, wherein in step S4: in the deep convolutional generative adversarial network model, the generator G uses the ReLU function as the activation function σ, and the final output layer of the generator is activated with the tanh function; the discriminator D uses the LeakyReLU function as the activation function throughout.
3. The method for establishing a bullet trace database according to claim 1, wherein in step S5 the iterative training is specifically:
the trace picture samples are used as input data of the deep convolutional generative adversarial network model, which is optimized with a cross-entropy loss function and back-propagation; the false trace picture and loss value generated in each epoch are recorded, and the features of the last convolution layer of the discriminator D are extracted, output and stored; the generated false trace pictures are compared with the trace picture samples used as input data, and the parameters of the initial convolution layer of the generator G are adjusted, yielding the trained deep convolutional generative adversarial network model.
CN202111020231.5A 2021-09-01 2021-09-01 Method for establishing bullet trace database Active CN113744238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111020231.5A CN113744238B (en) 2021-09-01 2021-09-01 Method for establishing bullet trace database


Publications (2)

Publication Number Publication Date
CN113744238A CN113744238A (en) 2021-12-03
CN113744238B 2023-08-01

Family

ID=78734637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111020231.5A Active CN113744238B (en) 2021-09-01 2021-09-01 Method for establishing bullet trace database

Country Status (1)

Country Link
CN (1) CN113744238B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191502A (en) * 2018-08-14 2019-01-11 南京工业大学 A kind of method of automatic identification shell case trace
CN110570353A (en) * 2019-08-27 2019-12-13 天津大学 Dense connection generation countermeasure network single image super-resolution reconstruction method
CN110796080A (en) * 2019-10-29 2020-02-14 重庆大学 Multi-pose pedestrian image synthesis algorithm based on generation of countermeasure network
CN112001847A (en) * 2020-08-28 2020-11-27 徐州工程学院 Method for generating high-quality image by relatively generating antagonistic super-resolution reconstruction model
CN112329832A (en) * 2020-10-27 2021-02-05 中国人民解放军战略支援部队信息工程大学 Passive positioning target track data enhancement method and system based on deep convolution generation countermeasure network
CN112381108A (en) * 2020-04-27 2021-02-19 昆明理工大学 Bullet trace similarity recognition method and system based on graph convolution neural network deep learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Autoencoding beyond pixels using a learned similarity metric; Anders Boesen Lindbo Larsen et al.; JMLR: W&CP; full text *
Research on pedestrian pose conversion methods using generative adversarial networks; Yan Xin; CNKI (China National Knowledge Infrastructure); full text *

Also Published As

Publication number Publication date
CN113744238A (en) 2021-12-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant