CN117079272A - Bullet bottom socket mark feature identification method combining manual features and learning features - Google Patents

Bullet bottom socket mark feature identification method combining manual features and learning features

Info

Publication number
CN117079272A
Authority
CN
China
Prior art keywords
key
bullet
image
learning
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310122918.2A
Other languages
Chinese (zh)
Inventor
张�浩
沐春华
管旭
耿乐
虞浒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Tech University
Original Assignee
Nanjing Tech University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Tech University filed Critical Nanjing Tech University
Priority to CN202310122918.2A priority Critical patent/CN117079272A/en
Publication of CN117079272A publication Critical patent/CN117079272A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V 20/695 Preprocessing, e.g. image segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/36 Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/766 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Nonlinear Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a trace identification method combining manual features and learning features to realize key point detection and feature description of the key points. The method exploits the gradient information of the image while reducing network structure parameters and parameter computation, and, compared with end-to-end neural network algorithms, achieves stable key point detection on bullet traces. Compared with the prior art, the invention has the following technical effects: the combination of manual features and learning features utilizes the gradient information of the image, reduces network structure parameters and parameter computation, and stabilizes the detection of bullet mark key points; the CNN structure of the Key.Net model is improved so that its feature learning and combination capability better suits the bullet mark task; and the Key.Net model is trained on a large-scale dataset and transferred to small-sample bullet trace examples, improving model reliability and detection accuracy.

Description

Bullet bottom socket mark feature identification method combining manual features and learning features
Technical Field
The invention relates to the field of bullet trace identification, and in particular to a bullet bottom socket mark feature identification method combining manual features and learning features.
Background
When a cartridge is fired, machining textures and other surface features of the bullet bottom socket are stamped onto the base of the cartridge case, forming marks. These marks are highly stable, and because the machining features of individual firearms differ, the bullet bottom socket marks left by different firearms also differ; comparing such marks therefore allows a fired cartridge to be matched and traced, providing strong evidence for public security authorities investigating gun-related cases. The traditional bullet mark examination method relies on comparison by professional technicians and suffers from low identification efficiency, heavy workload, and strong subjective influence.
With the progress of computer vision recognition technology, countries around the world have devoted effort to automatic bullet mark identification systems such as IBIS and ALIAS, with notable results. Key point based matching of bullet marks proceeds in two stages: a feature extraction method detects key points in the bullet mark images and describes them, and the key point features are then matched to compute the degree of coincidence between images. Existing feature extraction methods, however, mostly rely either on manual feature algorithms such as SIFT or on end-to-end convolutional neural networks; the former extract key points unstably from images with weak features such as bullet marks, while the latter cannot guarantee repeatable key point extraction across different scenes.
Disclosure of Invention
In order to solve the robustness problem of feature point extraction from traditional bullet mark images, the invention provides a trace identification method combining manual features and learning features to realize key point detection and feature description of the key points.
To achieve this purpose, the invention provides the following technical scheme:
A trace identification method combining manual features and learning features comprises the following specific steps:
step S1: crop and filter the bullet trace images acquired by a three-dimensional confocal microscope to highlight the roughness features of the bullet trace images; assemble the preprocessed bullet mark image samples into a complete bullet mark dataset, of which 80% is taken as the algorithm training set and 20% as the validation set;
step S2: establish an improved feature point extraction network model Key.Net comprising a manual feature point filter, a learning feature point filter and a multi-scale spatial index proposal layer, where the method improves the original Key.Net learning feature point filter to adapt it to the bullet mark task;
step S3: apply image affine transformations to the large-scale ImageNet dataset to generate an image pair dataset, and input the image pairs into the Key.Net twin (Siamese) model established in step S2 to obtain the optimal model parameters;
step S4: import the bullet mark training set from step S1 into the optimal-parameter model from step S3 for continued iterative training, check the accuracy of the model with the validation set from step S1, and freeze the Key.Net model parameters with the highest matching accuracy on the validation set as the final key point detection model;
step S5: acquire the bullet mark image to be registered, input it into the Key.Net feature point detection model from step S4 to obtain a key point response map, construct feature descriptions for the key points with HardNet, and compute the Euclidean distances between the key point feature descriptions to realize registration with the reference bullet mark image.
Further, in step S1, the cropping and filtering process comprises: removing the irrelevant area around the bullet bottom socket mark by cropping, retaining the central feature-concentrated region, and applying two-dimensional Gaussian regression filtering to the bullet bottom socket mark to obtain a socket mark image with pronounced roughness features; the two-dimensional Gaussian regression filtering criterion is as follows:
s(x, y) = arg min ∬ ρ( t(ζ, η) - s(x, y) ) · S(ζ - x, η - y) dζ dη
where t(ζ, η) is the input surface and ζ, η are the horizontal and vertical coordinates on the t surface; s(x, y) is the filtered output surface and x, y are the horizontal and vertical coordinates on the s surface; ρ(r) is the square penalty term ρ(r) = r²; and S(·) is the Gaussian weighting function;
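A minimal Python sketch of this filtering step is given below; it assumes the square penalty ρ(r) = r² (under which the regression output reduces to a normalized Gaussian-weighted mean) and a hypothetical cutoff parameter cutoff_px that is not specified above:

import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_regression_filter(surface, cutoff_px):
    # Zero-order 2-D Gaussian regression filter with rho(r) = r^2.
    # With the square penalty the solution is a normalized Gaussian-weighted
    # mean, which limits edge distortion on the cropped socket mark image.
    alpha = np.sqrt(np.log(2) / np.pi)            # Gaussian filter constant
    sigma = alpha * cutoff_px / np.sqrt(2 * np.pi)
    surface = surface.astype(float)
    weights = gaussian_filter(np.ones_like(surface), sigma, mode="constant")
    return gaussian_filter(surface, sigma, mode="constant") / weights

# Roughness image used for key point detection: measured surface minus
# the regression (waviness) surface, e.g.
# roughness = cropped - gaussian_regression_filter(cropped, cutoff_px=80)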
further, in the step S2, the key.net model includes a multi-layer structure, and the CNN module uses He weight initialization and L2 kernel regularization:
step S2.1: to construct a three-level scale space, the input image (of size W × H) is downsampled at three scale levels, with the level-to-level sampling factor and the Gaussian blur coefficient both set to 1.2;
step S2.2: the manual feature point filter provides an anchor structure for the CNN filter, locating, scoring and ranking feature points with strong repeatability, drawing on the Harris and Hessian corner detection algorithms: corners have a large gradient and a large rate of gradient change in regions where the gray level varies rapidly; salient corner features are obtained by computing the first and second derivatives of each scale image, and the Sobel operator is used to compute d_x, d_y, d_xx, d_yy, d_xy, d_xx × d_yy, d_x × d_y and related first- and second-order derivative combinations, 10 parameters in total, forming a 10-channel output tensor;
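A sketch of this handcrafted anchor block using OpenCV Sobel operators follows; the squared-derivative channels (d_x², d_y², d_xy²) are an assumption used here to complete the ten channels, since the full listing is not reproduced above:

import cv2
import numpy as np

def handcrafted_channels(gray):
    # First- and second-order derivatives via cascaded 3x3 Sobel kernels,
    # combined into a 10-channel tensor that anchors the learned filters.
    g = gray.astype(np.float32)
    dx = cv2.Sobel(g, cv2.CV_32F, 1, 0, ksize=3)
    dy = cv2.Sobel(g, cv2.CV_32F, 0, 1, ksize=3)
    dxx = cv2.Sobel(dx, cv2.CV_32F, 1, 0, ksize=3)
    dyy = cv2.Sobel(dy, cv2.CV_32F, 0, 1, ksize=3)
    dxy = cv2.Sobel(dx, cv2.CV_32F, 0, 1, ksize=3)
    feats = [dx, dy, dx * dx, dy * dy, dx * dy,
             dxx, dyy, dxy, dxx * dyy, dxy * dxy]
    return np.stack(feats, axis=-1)               # shape (H, W, 10)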
step S2.3: the learning feature filter consists of 3 basic learning blocks and 1 separate convolution block; the method improves the learning module of the original Key.Net, using convolution kernels of 1 × 1, 3 × 3 and 5 × 5 to construct an Inception module that replaces the original convolution blocks, so that the model can adaptively select the convolution kernel size; the activation layer is changed to the Sigmoid function f(x) = 1 / (1 + e^(-x));
the basic learning block consists of an Inception module, a batch normalization layer and an activation function layer; the input channel of the 1st learning block is 10, and the input and output channels of the remaining learning blocks are all set to 8;
the single convolution block consists of a convolution layer with a 3 × 3 kernel and a ReLU function; its output channel is 1, i.e. it outputs the response feature map of the bullet image, with the network scoring each pixel;
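A possible TensorFlow/Keras sketch of this improved learning feature filter is shown below; the per-branch filter count and the 1 × 1 projection used to keep each block's output at 8 channels after concatenating the three branches are assumptions:

import tensorflow as tf
from tensorflow.keras import layers, regularizers

def learning_block(x, branch_filters=8):
    # Basic learning block: parallel 1x1 / 3x3 / 5x5 convolutions
    # (Inception-style), a 1x1 projection back to 8 channels,
    # batch normalization and a Sigmoid activation.
    branches = [layers.Conv2D(branch_filters, k, padding="same",
                              kernel_initializer="he_normal",
                              kernel_regularizer=regularizers.l2(1e-4))(x)
                for k in (1, 3, 5)]
    x = layers.Concatenate()(branches)
    x = layers.Conv2D(8, 1, padding="same", kernel_initializer="he_normal")(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("sigmoid")(x)

def learned_filter(handcrafted):                  # expects a (H, W, 10) tensor
    x = handcrafted
    for _ in range(3):                            # 3 basic learning blocks
        x = learning_block(x)
    # separate 3x3 convolution block with ReLU: single-channel response map
    return layers.Conv2D(1, 3, padding="same", activation="relu",
                         kernel_initializer="he_normal")(x)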
step S2.4: the output images obtained after the manual feature layer and the learning feature layer process the three scale-level images are each upsampled, yielding one feature response map of size W × H per scale level, and the three maps are concatenated into a 3-channel scale-space image;
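The scale-space construction of steps S2.1 and S2.4 can be sketched as follows; tf.image.resize with antialiasing stands in for the explicit Gaussian blur with coefficient 1.2, and static input shapes are assumed:

import tensorflow as tf

def scale_space_response(image, response_fn, levels=3, factor=1.2):
    # image: (1, H, W, 1); response_fn: handcrafted + learned filter stack
    # returning a single-channel response map for any input size.
    h, w = int(image.shape[1]), int(image.shape[2])
    maps = []
    for level in range(levels):
        s = factor ** level
        scaled = tf.image.resize(image, (int(h / s), int(w / s)),
                                 antialias=True)
        resp = response_fn(scaled)                # (1, h/s, w/s, 1)
        maps.append(tf.image.resize(resp, (h, w)))
    return tf.concat(maps, axis=-1)               # (1, H, W, 3) scale-space map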
step S2.5: the method obtains image ground truth based on affine transformation and uses it to train the whole model; the single index proposal layer repeatedly divides the image feature response map into grids of size N × N, and a key point for each grid is computed from the affine transformation ground truth; the loss function on a single scale is as follows:
where R_a and R_b are the feature point response maps of image I_a and image I_b, H_ab denotes the affine transformation matrix between I_a and I_b, and the parameter α_i determines the weight of each feature point's response value in the loss according to its position;
the loss function at multiple scales is expressed as:
where s is the scale factor, N s ∈[8,16,24,32,40]Is the window size, lambda s ∈[256,64,16,4,1]Is the control parameter when the scale factor is s, N s And lambda (lambda) s And determining by performing super-parameter searching on the verification set.
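A hedged sketch of how the per-scale losses could be combined with the control parameters λ_s is given below; the function single_scale_loss is assumed to implement the per-window index proposal loss described above:

WINDOW_SIZES = (8, 16, 24, 32, 40)
LAMBDAS = (256.0, 64.0, 16.0, 4.0, 1.0)

def total_loss(resp_a, resp_b, h_ab, single_scale_loss):
    # Weighted sum of the single-scale losses over all window sizes N_s,
    # with lambda_s balancing the contribution of each scale.
    total = 0.0
    for n, lam in zip(WINDOW_SIZES, LAMBDAS):
        total = total + lam * single_scale_loss(resp_a, resp_b, h_ab, window=n)
    return total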
Further, in step S4, registration between the image to be registered and the reference image is performed via Key.Net + HardNet key point detection and description (obtaining key point coordinates and 128-dimensional feature vectors for the key points); when the Euclidean distance between the feature vectors of two key points across the two images is less than 0.7, the pair is judged a matching key point pair, and if the number of matching key points between the two images exceeds a preset value, the bullet mark images are judged to be correctly registered;
the Euclidean distance formula is as follows:
min(E_w(x1, x2)) = ||G_w(x1) - G_w(x2)||
where x1 is the reference bullet trace image, x2 is the image to be registered, G_w is the model mapping function, and E_w is the Euclidean distance.
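A NumPy sketch of this descriptor matching rule follows; the mutual nearest-neighbour check is an added assumption, while the 0.7 distance threshold comes from the description above:

import numpy as np

def match_keypoints(desc_ref, desc_query, threshold=0.7):
    # desc_ref: (n1, 128) descriptors of the reference image x1
    # desc_query: (n2, 128) descriptors of the image to be registered x2
    d = np.linalg.norm(desc_ref[:, None, :] - desc_query[None, :, :], axis=-1)
    nn_q = d.argmin(axis=1)                       # best x2 match per x1 key point
    nn_r = d.argmin(axis=0)                       # best x1 match per x2 key point
    return [(i, j) for i, j in enumerate(nn_q)
            if nn_r[j] == i and d[i, j] < threshold]

# The two images are judged correctly registered when the number of
# returned matches exceeds a preset count.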
Compared with the prior art, the invention has the following technical effects:
1. The method of combining manual features and learning features utilizes the gradient information of the image, reduces network structure parameters and parameter computation, and achieves stable detection of bullet mark key points.
2. The CNN structure of the Key.Net model is improved so that its feature learning and combination capability better suits the bullet mark task.
3. The Key.Net model is trained with a large-scale dataset and transferred to small-sample bullet trace examples, improving model reliability and detection accuracy.
Drawings
Fig. 1 is an overall flowchart of the method for registration between bullet traces provided by the present invention.
Fig. 2 is a modified Key.Net model and scale space flow diagram of an implementation of the present invention.
Fig. 3 is a diagram of an improved Key.Net network architecture embodying the present invention.
Fig. 4 is a diagram of feature point detection in a multi-scale space provided by the present invention.
Fig. 5 (a) is a bullet trace key point matching diagram produced by the model of the present invention.
Fig. 5 (b) is a bullet trace key point matching diagram produced by the SIFT description.
Detailed Description
The embodiments of the present invention will be described in detail below with reference to the drawings so that the advantages and features of the present invention can be more easily understood by those skilled in the art, thereby making clear and unambiguous the scope of the present invention.
The invention provides a bullet bottom socket mark feature recognition method combining manual features and learning features, which aims to solve the problem of poor repeatability of key point extraction by end-to-end key point detection networks across different scenes and to improve the stability of the bullet mark key point detection process.
To solve the above technical problems, the present invention provides a bullet mark feature point recognition method, which is described in detail below with reference to an embodiment:
(1) The specific flow of the bullet mark recognition method is shown in Fig. 1. In this example, socket mark datasets such as Fadul and Weller are combined into one dataset of 158 groups of socket marks, each group containing 2 to 5 marks. For each bullet mark image, the central firing pin impression is cropped away in MATLAB and the area where bullet bottom socket mark features are concentrated is retained; two-dimensional Gaussian regression filtering is then applied to the socket mark image to highlight the roughness features of the stamping. 120 samples are taken as the training set and 38 as the validation set, with traces from the same firearm serving as ground-truth pairs for training.
(3) Affine transformations of images from the ImageNet ILSVRC 2012 dataset generate positive image pairs: 12000 pairs of size 192 × 192, of which 9000 are set as training data and 3000 as validation data, used to guide the loss function calculation that updates the network weights.
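One way to generate such positive pairs is sketched below; the rotation and translation ranges are assumptions, and the 2 × 3 affine matrix is kept as the ground-truth transformation:

import cv2
import numpy as np

def random_affine_pair(image, out_size=192, max_angle=30, max_shift=0.1):
    # Produce one positive pair: a 192x192 view and an affine-warped view,
    # together with the warp matrix used as ground truth for the loss.
    crop = cv2.resize(image, (out_size, out_size))
    angle = np.random.uniform(-max_angle, max_angle)
    tx, ty = np.random.uniform(-max_shift, max_shift, 2) * out_size
    m = cv2.getRotationMatrix2D((out_size / 2, out_size / 2), angle, 1.0)
    m[:, 2] += (tx, ty)
    warped = cv2.warpAffine(crop, m, (out_size, out_size))
    return crop, warped, m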
(4) The feature point extraction network model Key.Net is built with the Python and TensorFlow frameworks; its manual feature point filter, learning feature point filter and multi-scale spatial index proposal layer are shown in Fig. 3. The Key.Net model is initialized with He weights and regularized with an L2 kernel penalty. The ImageNet image pairs are input to two Key.Net instances that share weights and update them iteratively at the same time until the optimal model parameters are obtained.
(5) The bullet mark training set is imported into the optimal-parameter model from step (4) for continued iterative training, and the Key.Net model parameters with the highest matching accuracy on the validation set are frozen as the final key point detection model.
(6) Two additional socket mark samples I1 and I2, each of size 1000 × 1000, undergo key point detection with the trained Key.Net model. The scale factors are set to θ_i = {0.4, 0.6, 0.8, 1, 1.2, 1.4}, i = 1, ..., 6, i.e. N = 6; Key.Net detects all pixel-level feature points on the multiple scale planes, yielding a set of feature points for each of the two images on each scale plane. Images I1 and I2 are then described with HardNet features, and the matching algorithm applies RANSAC filtering; as shown in Fig. 5 (a) and Fig. 5 (b), the number of correct matching key points is far higher than the number of matching pairs obtained by the SIFT method.
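The matching and RANSAC filtering of this step can be sketched as follows, assuming the key point coordinates and 128-dimensional descriptors of I1 and I2 have already been produced by the trained detector and descriptor:

import cv2
import numpy as np

def register_pair(kpts1, desc1, kpts2, desc2, dist_thresh=0.7):
    # Brute-force match HardNet-style descriptors, keep pairs below the
    # distance threshold, then filter them geometrically with RANSAC.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = [m for m in matcher.match(desc1.astype(np.float32),
                                        desc2.astype(np.float32))
               if m.distance < dist_thresh]
    if len(matches) < 4:                          # RANSAC needs >= 4 pairs
        return 0, len(matches)
    src = np.float32([kpts1[m.queryIdx] for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kpts2[m.trainIdx] for m in matches]).reshape(-1, 1, 2)
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = int(mask.sum()) if mask is not None else 0
    return inliers, len(matches)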
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (4)

1. A trace identification method combining manual features and learning features, characterized by comprising the following specific steps:
step S1: crop and filter the bullet trace images acquired by a three-dimensional confocal microscope to highlight the roughness features of the bullet trace images; assemble the preprocessed bullet mark image samples into a complete bullet mark dataset, of which 80% is taken as the algorithm training set and 20% as the validation set;
step S2: establish an improved feature point extraction network model Key.Net comprising a manual feature point filter, a learning feature point filter and a multi-scale spatial index proposal layer, where the method improves the original Key.Net learning feature point filter to adapt it to the bullet mark task;
step S3: apply image affine transformations to the large-scale ImageNet dataset to generate an image pair dataset, and input the image pairs into the Key.Net twin (Siamese) model established in step S2 to obtain the optimal model parameters;
step S4: import the bullet mark training set from step S1 into the optimal-parameter model from step S3 for continued iterative training, check the accuracy of the model with the validation set from step S1, and freeze the Key.Net model parameters with the highest matching accuracy on the validation set as the final key point detection model;
step S5: acquire the bullet mark image to be registered, input it into the Key.Net feature point detection model from step S4 to obtain a key point response map, construct feature descriptions for the key points with HardNet, and compute the Euclidean distances between the key point feature descriptions to realize registration with the reference bullet mark image.
2. The trace identification method combining manual features and learning features according to claim 1, characterized in that:
in step S1, the cropping and filtering process comprises: removing the irrelevant area around the bullet bottom socket mark by cropping, retaining the central feature-concentrated region, and applying two-dimensional Gaussian regression filtering to the bullet bottom socket mark to obtain a socket mark image with pronounced roughness features; the two-dimensional Gaussian regression filtering criterion is as follows:
s(x, y) = arg min ∬ ρ( t(ζ, η) - s(x, y) ) · S(ζ - x, η - y) dζ dη
where t(ζ, η) is the input surface and ζ, η are the horizontal and vertical coordinates on the t surface; s(x, y) is the filtered output surface and x, y are the horizontal and vertical coordinates on the s surface; ρ(r) is the square penalty term ρ(r) = r²; and S(·) is the Gaussian weighting function.
3. The trace identification method combining manual features and learning features according to claim 1, characterized in that:
in step S2, the Key.Net model comprises a multi-layer structure, and the CNN modules use He weight initialization and L2 kernel regularization:
step S2.1: to construct a three-level scale space, the input image (of size W × H) is downsampled at three scale levels, with the level-to-level sampling factor and the Gaussian blur coefficient both set to 1.2;
step S2.2: the manual feature point filter provides an anchor structure for the CNN filter, locating, scoring and ranking feature points with strong repeatability, drawing on the Harris and Hessian corner detection algorithms: corners have a large gradient and a large rate of gradient change in regions where the gray level varies rapidly; salient corner features are obtained by computing the first and second derivatives of each scale image, and the Sobel operator is used to compute d_x, d_y, d_xx, d_yy, d_xy, d_xx × d_yy, d_x × d_y and related first- and second-order derivative combinations, 10 parameters in total, forming a 10-channel output tensor;
step S2.3: the learning feature filter consists of 3 basic learning blocks and 1 separate convolution block; the method improves the learning module of the original Key.Net, using convolution kernels of 1 × 1, 3 × 3 and 5 × 5 to construct an Inception module that replaces the original convolution blocks, so that the model can adaptively select the convolution kernel size; the activation layer is changed to the Sigmoid function f(x) = 1 / (1 + e^(-x));
the basic learning block consists of an Inception module, a batch normalization layer and an activation function layer; the input channel of the 1st learning block is 10, and the input and output channels of the remaining learning blocks are all set to 8;
the single convolution block consists of a convolution layer with a 3 × 3 kernel and a ReLU function; its output channel is 1, i.e. it outputs the response feature map of the bullet image, with the network scoring each pixel;
step S2.4: the output images obtained after the manual feature layer and the learning feature layer process the three scale-level images are each upsampled, yielding one feature response map of size W × H per scale level, and the three maps are concatenated into a 3-channel scale-space image;
step S2.5: the method obtains image ground truth based on affine transformation and uses it to train the whole model; the single index proposal layer repeatedly divides the image feature response map into grids of size N × N, and a key point for each grid is computed from the affine transformation ground truth; the loss function on a single scale is as follows:
where R_a and R_b are the feature point response maps of image I_a and image I_b, H_ab denotes the affine transformation matrix between I_a and I_b, and the parameter α_i determines the weight of each feature point's response value in the loss according to its position;
the loss function over multiple scales is expressed as the weighted sum L_total = Σ_s λ_s · L_s,
where s is the scale factor index, N_s ∈ {8, 16, 24, 32, 40} is the window size, λ_s ∈ {256, 64, 16, 4, 1} is the control parameter for scale factor s, and L_s is the single-scale loss computed with window size N_s; N_s and λ_s are determined by hyper-parameter search on the validation set.
4. The trace identification method combining manual features and learning features according to claim 1, characterized in that: in step S4, registration between the image to be registered and the reference image is performed via Key.Net + HardNet key point detection and description, obtaining key point coordinates and 128-dimensional feature vectors for the key points; when the Euclidean distance between the feature vectors of two key points across the two images is less than 0.7, the pair is judged a matching key point pair, and if the number of matching key points between the two images exceeds a preset value, the bullet mark images are judged to be correctly registered;
the Euclidean distance formula is as follows:
min(E_w(x1, x2)) = ||G_w(x1) - G_w(x2)||
where x1 is the reference bullet trace image, x2 is the image to be registered, G_w is the model mapping function, and E_w is the Euclidean distance.
CN202310122918.2A 2023-02-16 2023-02-16 Bullet bottom socket mark feature identification method combining manual features and learning features Pending CN117079272A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310122918.2A CN117079272A (en) 2023-02-16 2023-02-16 Bullet bottom socket mark feature identification method combining manual features and learning features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310122918.2A CN117079272A (en) 2023-02-16 2023-02-16 Bullet bottom socket mark feature identification method combining manual features and learning features

Publications (1)

Publication Number Publication Date
CN117079272A (en) 2023-11-17

Family

ID=88712175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310122918.2A Pending CN117079272A (en) 2023-02-16 2023-02-16 Bullet bottom socket mark feature identification method combining manual features and learning features

Country Status (1)

Country Link
CN (1) CN117079272A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118015056A (en) * 2024-04-09 2024-05-10 西安电子科技大学 End-to-end trace detection method

Similar Documents

Publication Publication Date Title
CN110532920B (en) Face recognition method for small-quantity data set based on FaceNet method
Li et al. SAR image change detection using PCANet guided by saliency detection
CN111274916B (en) Face recognition method and face recognition device
CN111368683B (en) Face image feature extraction method and face recognition method based on modular constraint CenterFace
CN110175615B (en) Model training method, domain-adaptive visual position identification method and device
CN111310662B (en) Flame detection and identification method and system based on integrated deep network
CN111652292B (en) Similar object real-time detection method and system based on NCS and MS
CN111709313B (en) Pedestrian re-identification method based on local and channel combination characteristics
CN110490915B (en) Point cloud registration method based on convolution-limited Boltzmann machine
CN111445459A (en) Image defect detection method and system based on depth twin network
CN111242026B (en) Remote sensing image target detection method based on spatial hierarchy perception module and metric learning
CN112084895B (en) Pedestrian re-identification method based on deep learning
CN108268865A (en) Licence plate recognition method and system under a kind of natural scene based on concatenated convolutional network
CN109086350B (en) Mixed image retrieval method based on WiFi
CN111009005A (en) Scene classification point cloud rough registration method combining geometric information and photometric information
CN110610174A (en) Bank card number identification method under complex conditions
CN112861672A (en) Heterogeneous remote sensing image matching method based on optical-SAR
CN114358166B (en) Multi-target positioning method based on self-adaptive k-means clustering
CN117079272A (en) Bullet bottom socket mark feature identification method combining manual features and learning features
CN117576079A (en) Industrial product surface abnormality detection method, device and system
CN113128518B (en) Sift mismatch detection method based on twin convolution network and feature mixing
CN117557784B (en) Target detection method, target detection device, electronic equipment and storage medium
CN114861761A (en) Loop detection method based on twin network characteristics and geometric verification
CN112329662B (en) Multi-view saliency estimation method based on unsupervised learning
CN111127407B (en) Fourier transform-based style migration forged image detection device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination