CN113744238A - Method for establishing bullet trace database - Google Patents

Method for establishing bullet trace database

Info

Publication number
CN113744238A
Authority
CN
China
Prior art keywords
trace
bullet
picture
trace picture
neural network
Prior art date
Legal status
Granted
Application number
CN202111020231.5A
Other languages
Chinese (zh)
Other versions
CN113744238B (en)
Inventor
张浩
肖永飞
诸嘉翎
耿乐
Current Assignee
Nanjing Tech University
Original Assignee
Nanjing Tech University
Priority date
Filing date
Publication date
Application filed by Nanjing Tech University filed Critical Nanjing Tech University
Priority to CN202111020231.5A
Publication of CN113744238A
Application granted
Publication of CN113744238B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration by the use of local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention discloses a method for establishing a bullet trace database, belonging to the technical field of bullet trace identification. The method comprises the following steps: acquiring bullet trace pictures and performing sampling and filtering processing to obtain trace picture samples; creating a deep convolutional generative adversarial network model and initially setting its training parameters; constructing a residual feedback module and applying it to the discriminator D; selecting an objective function; taking the trace picture samples as input and performing iterative training to generate false trace pictures; constructing a Siamese Network, comparing each generated false trace picture with the original picture, and judging whether it can serve as a real bullet trace picture; the false trace pictures that can serve as real bullet trace pictures form the bullet trace database. The method solves the problem that traditional bullet trace picture collection yields too few samples and saves the gunpowder resources required for bullet trace picture collection.

Description

Method for establishing bullet trace database
Technical Field
The invention relates to the field of bullet trace identification, in particular to a method for establishing a bullet trace database based on a deep convolutional generative adversarial network.
Background
Gun-related cases often involve serious violent crime, are extremely harmful to society, and demand sufficient attention. Bullet traces arise as follows: during firing, the pressure between the firearm and the bullet leaves various marks on the bullet, such as rifling scratches, case-base marks and firing pin marks. These marks are unique; each gun leaves its own mark on the bullet, so bullet trace comparison can serve as important evidence in case investigation. Because the structures of firearms and bullets are diverse, complex and difficult to search, establishing a bullet trace database with detailed content and wide coverage is of great significance and facilitates firearm identification and ammunition inspection in gun-related cases. Establishing a bullet trace database requires a large number of data samples: building a traditional bullet trace picture database consumes a large number of bullets for data collection, yet the required collection volume and the number of samples needed for each type of trace are both enormous, so it is difficult to complete the database through collection alone.
Disclosure of Invention
In order to overcome the shortcomings of traditional bullet trace picture collection and to save the gunpowder resources it requires, a method for establishing a bullet trace database based on a deep convolutional generative adversarial network is provided.
To achieve the above object, the present invention provides a method for creating a database of bullet traces, comprising:
step S1: acquiring bullet trace pictures, and performing sampling and filtering processing on the bullet trace pictures to obtain trace picture samples;
step S2: creating a deep convolutional generative adversarial network model comprising a generator G and a discriminator D, and setting training parameters for the generator G and the discriminator D;
step S3: constructing a residual feedback module, applying the residual feedback module to the discriminator D, and feeding back a signal generated by the residual feedback module to the generator G;
step S4: selecting an objective function;
step S5: taking the trace picture samples as the input of the deep convolutional generative adversarial network model and training the model iteratively, obtaining a trained deep convolutional generative adversarial network model and generated false trace pictures;
step S6: constructing a Siamese Network, inputting the generated false trace picture and the original bullet trace picture into the Siamese Network, comparing their output results and losses, and judging whether the generated false trace picture can serve as a real bullet trace picture;
step S7: forming a bullet trace database from the false trace pictures that can serve as real bullet trace pictures.
Preferably, in step S1, the sampling and filtering process comprises: acquiring the cartridge case base trace with a three-dimensional confocal microscope, removing the irrelevant regions around the case base trace while keeping the feature-dense central region, performing two-dimensional Gaussian regression filtering on the case-base surface, and extracting the micro-topography features to obtain the bullet trace picture; the two-dimensional Gaussian regression filter solves, at each point (x, y),

min over s(x, y) of ∬ ρ( t(ξ, η) − s(x, y) ) S(ξ − x, η − y) dξ dη,

where t(ξ, η) is the input surface and ξ, η are its horizontal and vertical coordinates; s(x, y) is the filtered output surface and x, y are its horizontal and vertical coordinates; ρ(r) is the square term ρ(r) = r²; and S is the Gaussian weighting function;
the error is minimized by a zero-order least squares method; a data enhancement algorithm is then applied to the bullet trace pictures to expand the small number of obtained trace pictures, and all the processed bullet trace pictures are taken as the trace picture samples.
Preferably, in step S2, the generator G is a neural network that takes initial noise as input and finally generates a false trace picture, performing upsampling with transposed convolution; the discriminator D is a neural network that distinguishes trace picture samples from false trace pictures, using convolution layers in place of pooling layers;
the core formula between the generator G and the discriminator D is:
Figure BDA0003241620440000022
wherein
Figure BDA0003241620440000023
Means to take a real sample in the training sample x, and
Figure BDA0003241620440000024
refers to samples taken from a known noise profile.
Preferably, in step S3, the residual feedback module is specifically as follows:

in the training process of the deep convolutional generative adversarial network model, the input data x_i passes through the weight W_1 and the activation function σ to form the input of the second-layer neurons, y_1 = σ(W_1 x_i), and then through the weight W_2 and the activation function σ to form the input of the third-layer neurons, y_2 = σ(W_2 y_1); the residual feedback module instead takes y_2 = σ(W_2 y_1) + x_i as the input of the third-layer neurons; when the input and output dimensions differ, a dimension transformation is needed, and y_2 = σ(W_2 y_1) + W_s x_i is used as the input of the third-layer neurons, where W_s transforms the number of channels;

the residual feedback module is added and reused in the discriminator D, and the feature Y_final obtained at the last bottom-level feature convolution layer is fed back to the initial upsampling layer of the generator G.
Preferably, in step S4: in the deep convolutional generative adversarial network model, the generator G uses the ReLU function as the activation function σ, and the final output layer of the generator is activated by the tanh function; the discriminator D uses the LeakyReLU function as the activation function throughout.
Preferably, in step S5, the iterative training specifically includes:
taking the trace picture samples as the input data of the deep convolutional generative adversarial network model; optimizing the model using a cross-entropy loss function and back-propagation; recording the false trace picture and the loss value generated by each epoch; extracting, outputting and saving the features of the final convolution layer of the discriminator D; and comparing the generated false trace pictures with the trace picture samples serving as input data and adjusting the parameters of the initial convolution layer of the generator G, to obtain the trained deep convolutional generative adversarial network model.
Preferably, in step S6:
constructing a Siamese Network comprising a network A and a network B; putting the original bullet trace picture and the generated false trace picture into network A and network B respectively, and converting the pictures into a feature space with a mapping function, each picture corresponding to a feature vector in that space; when the distance between the feature vectors of the generated false trace picture and the original bullet trace picture satisfies the minimum Euclidean distance formula, the false trace picture generated by the deep convolutional generative adversarial network can serve as a bullet trace picture;
the Euclidean distance formula is as follows:
min(E_W(X1, X2)) = ||G_W(X1) − G_W(X2)||

where X1 is the original bullet trace picture, X2 is the generated false trace picture, G_W is the model mapping function, and E_W is the Euclidean distance.
According to the method, a deep convolutional generative adversarial network with residual feedback generates false trace pictures that can serve as bullet trace pictures, expanding the bullet trace database and solving the problem that traditional bullet trace data collection is cumbersome, time-consuming and labor-intensive.
Drawings
FIG. 1 is a flow chart of the deep convolutional generative adversarial network model according to the present invention.
Fig. 2 is a schematic diagram of the residual feedback module according to the present invention.
FIG. 3 is a block diagram of the discriminator D in the deep convolutional generative adversarial network model according to the present invention.
Fig. 4 shows the Siamese Network structure according to the present invention.
Fig. 5 is an original bullet trace picture.
Fig. 6 is a trace picture sample after enhancement by a data enhancement algorithm.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby.
It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
In the description of the present application, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced device or element must have a particular orientation, be constructed and operated in a particular orientation, and are therefore not to be considered limiting of the invention.
Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In this application, unless expressly stated or limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can include, for example, fixed connections, removable connections, or integral connections; mechanical or electrical connections; direct connections or indirect connections through intervening media; and internal communication between two elements or an interaction relationship between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
In this application, unless expressly stated or limited otherwise, a first feature "on" or "under" a second feature may be in direct contact with the second feature, or in indirect contact through intervening media. Also, a first feature "on," "over," or "above" a second feature may be directly or diagonally above the second feature, or may simply indicate that the first feature is at a higher level than the second feature. A first feature "under," "below," or "beneath" a second feature may be directly or obliquely below the second feature, or may simply mean that the first feature is at a lower level than the second feature.
Examples
The embodiment provides a method for establishing a bullet trace database, which specifically comprises the following steps:
firstly, acquiring a bullet bottom nest trace of a bullet, and sampling the bullet by using a three-dimensional confocal microscope to generate a bullet bottom trace picture; processing the trace picture in the matlab to obtain a three-dimensional shape picture of the bullet; removing the impact trace of a striker pin in the center of the picture, and reserving a bullet bottom pit trace characteristic concentration area; and then, performing two-dimensional Gaussian regression filtering processing on the surface picture of the bullet bottom nest, extracting micro-topography characteristics, and finally obtaining a trace picture with the size of 224 multiplied by 224. The trace of the bullet bottom nest is the trace of the bullet shell ground which has the reflection of the characteristic of unnecessary structure with the bullet shell ground of the machine gun under the action of the gas pressure of gunpowder.
A data enhancement code is written in the Python and TensorFlow environment as the data enhancement algorithm, and the obtained trace pictures are expanded. The data enhancement algorithm comprises: rotating the trace picture by 30 degrees, shifting it horizontally by a proportion of 0.2, shifting it vertically by a proportion of 0.2, flipping it horizontally, and finally filling the blank areas with pixels. The enhanced trace pictures are taken as the whole sample data set, i.e., the trace picture samples; a sketch of such a setup follows.
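A minimal sketch of this augmentation, assuming the Keras ImageDataGenerator API (the fill strategy is an assumption; the text only says the blank areas are pixel-filled):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation parameters taken from the text: 30-degree rotation,
# 0.2 horizontal and vertical shift proportions, horizontal flip.
datagen = ImageDataGenerator(
    rotation_range=30,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    fill_mode="nearest",  # assumed strategy for pixel-filling blank areas
)

# Hypothetical usage on an array `trace_pictures` of 224x224x3 images:
# for batch in datagen.flow(trace_pictures, batch_size=20):
#     ...  # collect augmented trace picture samples
```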
A deep convolutional generative adversarial network model is constructed, comprising a generator G (Generator) and a discriminator D (Discriminator); the network composition of the generator G is shown in Table 1.
Table 1: Network composition of generator G (the table is rendered only as an image in the source document).
The step size defaults to 1. Noise is produced by sampling points in the latent space from a normal distribution rather than a uniform distribution. The network input of the generator G is noise of size 112 × 112 × 1; the shape of the input data is changed through a fully connected layer and a Reshape layer, and the data is then processed through convolution layers and upsampling layers. Throughout this process the ReLU function is used as the activation function, and the final layer is activated by the tanh function, generating a false bullet trace picture of size 224 × 224 × 3. Fully connected layers are otherwise avoided throughout the generator G in favor of convolution and upsampling layers, while batch normalization operations are used to aid gradient flow.
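Since Table 1 survives only as an image, the following Keras sketch illustrates the described pattern (Dense → Reshape → convolution/upsampling, ReLU throughout, tanh output, batch normalization); every layer size here is an assumption:

```python
from tensorflow.keras import layers, models

def build_generator(noise_dim=112 * 112):
    """Hypothetical generator following the described pattern."""
    return models.Sequential([
        layers.Dense(56 * 56 * 256, input_dim=noise_dim),  # fully connected
        layers.Reshape((56, 56, 256)),                     # reshape the noise
        layers.BatchNormalization(),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.UpSampling2D(),                             # 56 -> 112
        layers.BatchNormalization(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.UpSampling2D(),                             # 112 -> 224
        layers.Conv2D(3, 3, padding="same", activation="tanh"),  # 224x224x3
    ])
```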
The network configuration of the discriminator D is shown in table 2.
Table 2: Network configuration of discriminator D

Layer   | Size      | Stride / padding | Activation | Output size
--------|-----------|------------------|------------|-------------
Input   | 224×224×3 |                  |            | 224×224×3
Conv1-1 | 1×1, 128  |                  | LeakyReLU  | 224×224×128
Conv1-2 | 2×2, 128  | 2 / none         | LeakyReLU  | 112×112×128
Conv2-1 | 1×1, 128  |                  | LeakyReLU  | 112×112×128
Conv2-2 | 4×4, 128  | 2 / none         | LeakyReLU  | 55×55×128
Conv3   | 4×4, 256  | 2 / none         | LeakyReLU  | 26×26×256
Conv4   | 2×2, 256  | 2 / none         | LeakyReLU  | 13×13×256
Conv5   | 2×2, 256  | 2 / none         | LeakyReLU  | 6×6×256
Conv6   | 2×2, 512  | 2 / none         | LeakyReLU  | 3×3×512
Fullcon | 4608      |                  | sigmoid    | 4608×1
The step size defaults to 1. Pooling is avoided throughout the discriminator D; strided convolution layers are used instead. To reduce the number of operations, the convolution process uses convolution kernels of size 1 × 1, which helps the discriminator D generate feature vectors faster. In the discriminator D, the LeakyReLU function is adopted in place of the ReLU function to prevent gradient sparsity; although the two functions are similar, LeakyReLU allows small negative activation values, thereby relaxing the sparsity constraint. The final fully connected judgment layer is processed with the sigmoid function.
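A Keras sketch of Table 2 (without the residual feedback connections described below, which would require the functional API; reading the Fullcon row as Flatten → Dense is an assumption):

```python
from tensorflow.keras import layers, models

def build_discriminator():
    """Hypothetical stack following Table 2; strided valid convolutions."""
    return models.Sequential([
        layers.Conv2D(128, 1, padding="same", input_shape=(224, 224, 3)),
        layers.LeakyReLU(),                                  # Conv1-1
        layers.Conv2D(128, 2, strides=2, padding="valid"),
        layers.LeakyReLU(),                                  # Conv1-2 -> 112
        layers.Conv2D(128, 1, padding="same"),
        layers.LeakyReLU(),                                  # Conv2-1
        layers.Conv2D(128, 4, strides=2, padding="valid"),
        layers.LeakyReLU(),                                  # Conv2-2 -> 55
        layers.Conv2D(256, 4, strides=2, padding="valid"),
        layers.LeakyReLU(),                                  # Conv3 -> 26
        layers.Conv2D(256, 2, strides=2, padding="valid"),
        layers.LeakyReLU(),                                  # Conv4 -> 13
        layers.Conv2D(256, 2, strides=2, padding="valid"),
        layers.LeakyReLU(),                                  # Conv5 -> 6
        layers.Conv2D(512, 2, strides=2, padding="valid"),
        layers.LeakyReLU(),                                  # Conv6 -> 3
        layers.Flatten(),                                    # 3*3*512 = 4608
        layers.Dense(1, activation="sigmoid"),               # real/fake score
    ])
```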
As shown in fig. 2, a residual feedback module is constructed and used in the discriminator D.
In the discriminator D, the residual feedback module is employed repeatedly:
the convolutional layer Conv2-1 outputs a feature vector x of 112 × 112 × 1281By the formula:
x′1=W128x1
conversion to a feature vector of size 55 × 55 × 128 x'1Feature vector y of 55X 128 output to convolutional layer Conv2-21And the two are superimposed as input data y 'of the third buildup layer Conv 3'2(ii) a The specific formula is as follows:
y1=σ(W1x1)
and y'2=σ(W2y1)+x′1
The convolutional layers Conv3, Conv4 and Conv5 reuse the residual feedback module, so that the feature extraction of the pictures is more accurate, and the recognition degree of the discriminator D is improved.
The residual feedback module feeds back the eigenvectors generated by the convolutional layer Conv2-2 as signals to the convolutional layer Conv1 in the generator G network in the network of the discriminator D. The convolutional layer (1 × 1, 256) processing is performed on the characteristic vector of 55 × 55 × 128 output from the convolutional layer Conv2-2 to obtain a feedback signal of 55 × 55 × 256 size.
The feature vector of size 55 × 55 × 256 generated by the discriminator D is combined with the feature vector of size 56 × 56 × 256 generated by the generator G as the input of the upsampling layer Upsampling1, making the generated pictures more realistic.
Therefore, in the discriminator D the residual feedback module avoids the drop in accuracy that accompanies increasing network depth, and it also feeds the main features of the trace picture back to the generator G, improving the performance of the deep convolutional generative adversarial network; a sketch of such a residual connection follows.
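A minimal sketch of a residual connection with the dimension transformation W_s, in the Keras functional style (the layer sizes and the 1 × 1 implementation of W_s are assumptions consistent with the formulas above):

```python
from tensorflow.keras import layers

def residual_block(x, filters, strides=1):
    """y'_2 = sigma(W_2 y_1) + W_s x: residual sum with channel transform."""
    y = layers.Conv2D(filters, 3, strides=strides, padding="same")(x)
    y = layers.LeakyReLU()(y)                     # y_1 = sigma(W_1 x_1)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    shortcut = x
    if strides != 1 or x.shape[-1] != filters:
        # W_s: 1x1 convolution matching channel count and spatial size
        shortcut = layers.Conv2D(filters, 1, strides=strides)(x)
    return layers.LeakyReLU()(layers.Add()([y, shortcut]))
```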
Setting the hyper-parameters: the network training uses the stochastic gradient descent (SGD) algorithm to alternately optimize D and G. The learning rate is 0.008; to stabilize the training process, a learning rate decay of 10⁻⁸ is used; the batch size batch_size is 20; the number of training epochs is 300.
The trace picture samples are taken as input data, the constructed deep convolutional generative adversarial network model is trained iteratively, and the initial parameters of the generator G and the discriminator D are continuously updated and optimized using a cross-entropy loss function.
When training D, a larger value of the model objective V(G, D) is better, so the parameters are updated with W_ij ← W_ij + ΔW_ij; when training G, a smaller value of V(G, D) is better, so the parameters are updated with W_ij ← W_ij − ΔW_ij.
The false trace picture and the loss value generated in each epoch are recorded, and the features of the last convolution layer of the discriminator D are extracted, output and saved. The generated false trace pictures are compared with the trace picture samples serving as input data, and the parameters of the initial convolution layer of the generator G are adjusted, yielding a well-trained deep convolutional generative adversarial network model and false trace pictures; a sketch of one such alternating training step follows.
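A minimal TensorFlow sketch of this alternating cross-entropy optimization under the stated hyper-parameters; the model-building helpers, the flattened-noise shape, and the use of binary cross-entropy as the concrete surrogate for V(G, D) are assumptions:

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()            # cross-entropy loss
opt_d = tf.keras.optimizers.SGD(learning_rate=0.008)  # stated: SGD, lr 0.008
opt_g = tf.keras.optimizers.SGD(learning_rate=0.008)

@tf.function
def train_step(real_images, generator, discriminator, batch_size=20):
    noise = tf.random.normal([batch_size, 112 * 112])  # normal, not uniform
    with tf.GradientTape() as tape_d, tf.GradientTape() as tape_g:
        fake_images = generator(noise, training=True)
        real_out = discriminator(real_images, training=True)
        fake_out = discriminator(fake_images, training=True)
        # Training D: push V(G, D) up -> real toward 1, fake toward 0.
        loss_d = (bce(tf.ones_like(real_out), real_out)
                  + bce(tf.zeros_like(fake_out), fake_out))
        # Training G: push V(G, D) down -> fake toward 1.
        loss_g = bce(tf.ones_like(fake_out), fake_out)
    opt_d.apply_gradients(zip(
        tape_d.gradient(loss_d, discriminator.trainable_variables),
        discriminator.trainable_variables))
    opt_g.apply_gradients(zip(
        tape_g.gradient(loss_g, generator.trainable_variables),
        generator.trainable_variables))
    return loss_d, loss_g
```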
The generator G and the discriminator D are mutually constrained and mutually fed back to form a dynamic game process; the core formula between the two is:
min_G max_D V(D, G) = E_{x∼p_data(x)}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]

where E_{x∼p_data(x)} denotes taking real samples x from the training data, and E_{z∼p_z(z)} denotes samples z taken from the known noise distribution.
Finally, a Siamese Network is constructed as shown in Table 3; the Siamese Network includes twin networks A and B.
Table 3: Network composition of the Siamese Network (the table is rendered only as an image in the source document).
Two pictures are taken each time: one is an original bullet trace picture, and the other is a false trace picture generated by the deep convolutional generative adversarial network. The pictures are put into the twin networks A and B respectively and feature-processed, finally yielding two one-dimensional feature vectors, to which the Euclidean distance formula is applied:
min(E_W(X_A, X_B)) = ||G_W(X_A) − G_W(X_B)||

where X_A is the original bullet trace picture, X_B is the generated false trace picture, G_W is the model mapping function, and E_W is the Euclidean distance.
When the distance between the feature vectors of the generated false trace picture and the original bullet trace picture satisfies the minimum Euclidean distance formula, the false trace picture generated by the deep convolutional generative adversarial network can serve as a bullet trace; a sketch of this comparison follows.
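A minimal sketch of the twin-network comparison, assuming a shared embedding network G_W and a hypothetical acceptance threshold (Table 3 survives only as an image, so the embedding architecture and the threshold value below are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_embedding():
    """Shared mapping G_W used by both twin branches (assumed layers)."""
    return models.Sequential([
        layers.Conv2D(64, 3, strides=2, activation="relu",
                      input_shape=(224, 224, 3)),
        layers.Conv2D(128, 3, strides=2, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(128),               # one-dimensional feature vector
    ])

embed = build_embedding()

def euclidean_distance(x_a, x_b):
    """E_W(X_A, X_B) = ||G_W(X_A) - G_W(X_B)|| for batches of pictures."""
    return tf.norm(embed(x_a) - embed(x_b), axis=-1)

def accept_as_bullet_trace(original, generated, threshold=1.0):
    # Hypothetical threshold standing in for "satisfies the minimum
    # Euclidean distance formula" in the text.
    return euclidean_distance(original, generated) < threshold
```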
Finally, false trace pictures that can serve as real bullet trace pictures are generated by the deep convolutional generative adversarial network, which effectively facilitates the generation and supplementation of bullet trace data, reduces the cost of bullet trace data collection, saves time, and allows the database to be established efficiently.

Claims (7)

1. A method of establishing a bullet trace database, the method comprising:
step S1: acquiring bullet trace pictures, and performing sampling and filtering processing on the bullet trace pictures to obtain trace picture samples;
step S2: creating a deep convolutional generative adversarial network model comprising a generator G and a discriminator D, and setting training parameters for the generator G and the discriminator D;
step S3: constructing a residual feedback module, applying the residual feedback module to the discriminator D, and feeding back a signal generated by the residual feedback module to the generator G;
step S4: selecting an objective function;
step S5: taking the trace picture samples as the input of the deep convolutional generative adversarial network model and training the model iteratively, obtaining a trained deep convolutional generative adversarial network model and generated false trace pictures;
step S6: constructing a Siamese Network, inputting the generated false trace picture and the original bullet trace picture into the Siamese Network, comparing their output results and losses, and judging whether the generated false trace picture can serve as a real bullet trace picture;
step S7: forming a bullet trace database from the false trace pictures that can serve as real bullet trace pictures.
2. The method for establishing a bullet trace database according to claim 1, wherein in step S1 the sampling and filtering process comprises: acquiring the cartridge case base trace with a three-dimensional confocal microscope, removing the irrelevant regions around the case base trace while keeping the feature-dense central region, performing two-dimensional Gaussian regression filtering on the case-base surface, and extracting the micro-topography features to obtain the bullet trace picture; the two-dimensional Gaussian regression filter solves, at each point (x, y),

min over s(x, y) of ∬ ρ( t(ξ, η) − s(x, y) ) S(ξ − x, η − y) dξ dη,

where t(ξ, η) is the input surface and ξ, η are its horizontal and vertical coordinates; s(x, y) is the filtered output surface and x, y are its horizontal and vertical coordinates; ρ(r) is the square term ρ(r) = r²; and S is the Gaussian weighting function;
and minimizing the error by a zero-order least squares method, then expanding the small number of obtained trace pictures by applying a data enhancement algorithm to the bullet trace pictures, and taking all the processed bullet trace pictures as the trace picture samples.
3. The method for establishing a bullet trace database according to claim 1, wherein in step S2 the generator G is a neural network that takes initial noise as input and finally generates a false trace picture, performing upsampling with transposed convolution; the discriminator D is a neural network that distinguishes trace picture samples from false trace pictures, using convolution layers in place of pooling layers;
the core formula between the generator G and the discriminator D is:
Figure FDA0003241620430000021
wherein
Figure FDA0003241620430000022
Means to take a real sample in the training sample x, and
Figure FDA0003241620430000023
refers to samples taken from a known noise profile.
4. The method according to claim 1, wherein in step S3 the residual feedback module is specifically:

in the training process of the deep convolutional generative adversarial network model, the input data x_i passes through the weight W_1 and the activation function σ to form the input of the second-layer neurons, y_1 = σ(W_1 x_i), and then through the weight W_2 and the activation function σ to form the input of the third-layer neurons, y_2 = σ(W_2 y_1); the residual feedback module instead takes y_2 = σ(W_2 y_1) + x_i as the input of the third-layer neurons; when the input and output dimensions differ, a dimension transformation is needed, and y_2 = σ(W_2 y_1) + W_s x_i is used as the input of the third-layer neurons, where W_s transforms the number of channels;

the residual feedback module is added and reused in the discriminator D, and the feature Y_final obtained at the last bottom-level feature convolution layer is fed back to the initial upsampling layer of the generator G.
5. The method of establishing a bullet trace database according to claim 1, wherein step S4 comprises: in the deep convolutional generative adversarial network model, the generator G uses the ReLU function as the activation function σ, and the final output layer of the generator is activated by the tanh function; the discriminator D uses the LeakyReLU function as the activation function throughout.
6. The method for establishing a bullet trace database according to claim 1, wherein the iterative training in step S5 is specifically:

taking the trace picture samples as the input data of the deep convolutional generative adversarial network model; optimizing the model using a cross-entropy loss function and back-propagation; recording the false trace picture and the loss value generated by each epoch; extracting, outputting and saving the features of the final convolution layer of the discriminator D; and comparing the generated false trace pictures with the trace picture samples serving as input data and adjusting the parameters of the initial convolution layer of the generator G, to obtain the trained deep convolutional generative adversarial network model.
7. The method of establishing a bullet trace database according to claim 1, wherein step S6 comprises:

constructing a Siamese Network comprising a network A and a network B; putting the original bullet trace picture and the generated false trace picture into network A and network B respectively, and converting the pictures into a feature space with a mapping function, each picture corresponding to a feature vector in that space; when the distance between the feature vectors of the generated false trace picture and the original bullet trace picture satisfies the minimum Euclidean distance formula, the false trace picture generated by the deep convolutional generative adversarial network can serve as a bullet trace picture;
the Euclidean distance formula is as follows:
min(E_W(X1, X2)) = ||G_W(X1) − G_W(X2)||

where X1 is the original bullet trace picture, X2 is the generated false trace picture, G_W is the model mapping function, and E_W is the Euclidean distance.
CN202111020231.5A 2021-09-01 2021-09-01 Method for establishing bullet trace database Active CN113744238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111020231.5A CN113744238B (en) 2021-09-01 2021-09-01 Method for establishing bullet trace database

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111020231.5A CN113744238B (en) 2021-09-01 2021-09-01 Method for establishing bullet trace database

Publications (2)

Publication Number Publication Date
CN113744238A true CN113744238A (en) 2021-12-03
CN113744238B CN113744238B (en) 2023-08-01

Family

ID=78734637

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111020231.5A Active CN113744238B (en) 2021-09-01 2021-09-01 Method for establishing bullet trace database

Country Status (1)

Country Link
CN (1) CN113744238B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109191502A (en) * 2018-08-14 2019-01-11 南京工业大学 A kind of method of automatic identification shell case trace
CN110570353A (en) * 2019-08-27 2019-12-13 天津大学 Dense connection generation countermeasure network single image super-resolution reconstruction method
CN110796080A (en) * 2019-10-29 2020-02-14 重庆大学 Multi-pose pedestrian image synthesis algorithm based on generation of countermeasure network
CN112381108A (en) * 2020-04-27 2021-02-19 昆明理工大学 Bullet trace similarity recognition method and system based on graph convolution neural network deep learning
CN112001847A (en) * 2020-08-28 2020-11-27 徐州工程学院 Method for generating high-quality image by relatively generating antagonistic super-resolution reconstruction model
CN112329832A (en) * 2020-10-27 2021-02-05 中国人民解放军战略支援部队信息工程大学 Passive positioning target track data enhancement method and system based on deep convolution generation countermeasure network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ANDERS BOESEN LINDBO LARSEN et al.: "Autoencoding beyond pixels using a learned similarity metric", JMLR: W&CP *
YAN Xin: "Research on pedestrian pose transfer methods using generative adversarial networks", CNKI (China National Knowledge Infrastructure) *

Also Published As

Publication number Publication date
CN113744238B (en) 2023-08-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant