CN111523404A - Partial face recognition method based on convolutional neural network and sparse representation - Google Patents
Partial face recognition method based on convolutional neural network and sparse representation
- Publication number
- CN111523404A (application CN202010267944.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- detected
- sparse representation
- neural network
- training set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V40/16 — Human faces, e.g. facial parts, sketches or expressions
- G06V40/168 — Feature extraction; Face representation
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Combinations of networks
- G06V10/267 — Segmentation of patterns in the image field; detection of occlusion, by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V10/40 — Extraction of image or video features
- G06V10/513 — Sparse representations
Abstract
The invention discloses a partial face recognition method based on a convolutional neural network and sparse representation. The mirror image of the image to be detected is added to expand the sample set; a convolutional neural network extracts features from the training set and the image to be detected, and the resulting feature maps are used to construct feature vectors; residuals are computed by sparse representation and sample correction; and the input image is classified by a score minimization criterion. Compared with the prior art, the method reduces the influence of occlusion, illumination, expression and other variations of the image to be detected, improves classification accuracy, extracts more accurate features, effectively enhances robustness to different variations in the image, and better solves the problem of recognizing partial faces that carry less information.
Description
Technical Field
The invention relates to the technical field of face recognition, and in particular to a partial face recognition method based on a convolutional neural network and sparse representation.
Background
Owing to the wide application of artificial intelligence technology, face recognition has become a research hotspot in computer vision and image processing in recent years. With the rapid development of neural networks and deep learning and the remarkable growth of computing power, face recognition has entered everyday life. VGGFace, proposed by Parkhi et al. in 2015 and built on the VGGNet of Simonyan and Zisserman (2014), transfers well and recognizes accurately. Most past face recognition techniques target complete face images, yet in practice the face to be recognized is often incomplete because of occlusion, illumination, and varying poses and expressions. Partial face recognition is therefore a topic worth studying, and both its techniques and its accuracy remain to be improved. A partial face image usually carries less information than a whole face image and, because of its small size, generally cannot be matched directly against the images in a gallery. He et al. proposed dynamic feature matching based on a fully convolutional network (FCN), which greatly improves recognition accuracy, but its robustness suffers when the image to be detected contains occlusion, illumination, or expression changes.
Disclosure of Invention
The aim of the invention is to overcome the defects of the prior art by designing a partial face recognition method based on a convolutional neural network and sparse representation. The method adds the mirror image of the image to be detected to expand the sample set, and combines the mirror image with the original image to improve recognition accuracy. A convolutional neural network extracts features from the training set and the image to be detected, and the resulting feature maps are used to construct feature vectors. Residuals are computed by sparse representation and sample correction, the input image is classified by a score minimization criterion, and the class of an unknown image to be detected is predicted. The method effectively reduces the influence of occlusion, illumination, expression and other variations of the image to be detected and greatly improves classification accuracy; the introduced variation dictionary effectively enhances robustness to different variations in the image. The method is simple and convenient, can be applied to face recognition in various scenes, extracts more accurate features, better solves the problem of recognizing partial faces with less information, and has broad application prospects.
The specific technical solution realizing the purpose of the invention is as follows: a partial face recognition method based on a convolutional neural network and sparse representation, characterized in that the mirror image of the image to be detected is added to expand the sample set, a convolutional neural network extracts features from the training set and the image to be detected to obtain the corresponding feature maps from which feature vectors are constructed, residuals are computed by sparse representation and sample correction, and input images are classified by a score minimization criterion. The specific process comprises the following steps:
step a: making an image training set;
step b: obtaining a mirror image of an input image to be detected to expand a sample;
step c: calculating residual errors by using a sliding window, sparse representation and sample correction method;
step d: repeating the step c by utilizing the mirror image of the image to be detected, and calculating a residual error;
step e: and combining two residual errors obtained after sample expansion to calculate a new score, and predicting the to-be-detected image of the unknown category based on a score minimization criterion.
In step a, a labeled sample image set is required (the original images are cropped if necessary to eliminate background interference, keeping only the complete face part), and the features of all training images are extracted with a CNN to construct a feature matrix.
In step b, the input original image to be detected is mirror-flipped to serve as a new image to be detected; its residual is combined with the residual of the original image to be detected, and both images participate in recognition together.
The local matching of the image to be detected in the step c comprises the following steps:
step c1: generating feature mapping and feature vectors of the image to be detected by using the CNN;
step c2: for each training image in the training set, generating a subset with the same size as the feature mapping of the image to be detected by using a sliding window, thereby obtaining a new training set;
step c3: calculating a sparse coefficient based on the newly generated training set and a sparse representation method;
step c4: constructing a change dictionary by utilizing the newly generated training set;
step c5: subtracting a corresponding change part existing in the change dictionary from the image to be detected by using the ideas of sparse representation and a linear additive model to finish sample correction;
step c6: and calculating the residual error between the image to be detected and each training image in the training set by using the sparse coefficient and the corrected image.
In step e, the two residuals obtained after sample expansion are combined to compute a new score: different weights are assigned to the residuals of the original image to be detected and of the mirror image, and the weighted residuals are added. The class of the image to be detected is then predicted by the score minimization criterion: the minimum score between the image to be detected and the training images is found, and the class of the corresponding training image is taken as the prediction result.
Compared with the prior art, the method reduces the influence of occlusion, illumination, expression and other variations of the image to be detected and improves classification accuracy; the introduced variation dictionary effectively enhances robustness to different variations in the image. The method can be applied to face recognition in various scenes, is simple and convenient, extracts more accurate features, better solves the problem of recognizing partial faces with less information, and has broad application prospects.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is an exemplary diagram of a training image;
FIG. 3 is an exemplary diagram of an image to be measured;
FIG. 4 is an exemplary diagram of mirror images respectively corresponding to the images to be measured;
FIG. 5 is a schematic illustration of a first sliding on a feature map of a training image using a sliding window.
Detailed Description
The present invention will be described in further detail with reference to some embodiments of face recognition.
Referring to fig. 1, the present invention comprises: making the training set, sample expansion, sparse representation, and fusion classification. The specific steps of partial face recognition are as follows:
step a: training set for making image
Referring to fig. 2, the image training set is a labeled sample image set. An initial image with a large background area is first cropped according to the coordinates of the two eyes and the classical "three sections and five eyes" facial proportions, so as to eliminate background interference, retain as complete a face as possible, and unify image sizes. A CNN then extracts the feature map of each original training image and converts it into a feature vector (the feature matrix is flattened into a vector), and the feature vectors of the N training images are combined into the training set G = [G1, G2, ..., GN]. If training images of the same class are grouped together and the training set contains L different classes, G can be rewritten as F = [F1, F2, ..., FL], where Fk contains the lk feature vectors of the k-th class, and k = 1, 2, ..., L.
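As an illustrative sketch only (not the patent's implementation), stacking per-image feature vectors into the training matrix G can look like the following; `extract_features` is a stand-in for the CNN backbone:

```python
import numpy as np

def build_training_matrix(images, extract_features):
    """Step a sketch: extract a feature map per training image, flatten it
    to a feature vector, and stack the vectors column-wise into
    G = [G1, G2, ..., GN]. `extract_features` stands in for the CNN."""
    cols = [extract_features(img).ravel() for img in images]
    return np.stack(cols, axis=1)  # shape: (feature_dim, N)
```

With N training images of feature dimension d, the result has shape (d, N), matching the column convention G = [G1, ..., GN] used above.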
Step b: obtaining the mirror image of the input image to be detected for sample expansion
Referring to fig. 3, a partial face image y to be detected is input.
Referring to fig. 4, the image to be detected y is mirror-flipped about its vertical central axis to obtain its mirror image y', realizing sample expansion. The two images are processed identically throughout the recognition process, and the residuals of y and y' are weighted and added to obtain the final score.
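The mirror expansion of step b is simply a horizontal flip about the vertical central axis; a minimal NumPy sketch (the function name `mirror_expand` is illustrative):

```python
import numpy as np

def mirror_expand(y):
    """Step b sketch: flip the probe image about its vertical central axis
    to obtain the mirror sample y'. Works on (H, W) or (H, W, C) arrays."""
    return y[:, ::-1].copy()
```

Flipping twice recovers the original image, so y'' = y, which is why the mirror adds exactly one extra sample per probe.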
Step c: computing residual errors using sliding window, sparse representation, and sample rectification methods
Even when the image to be detected y and the training images are scaled to the same scale, the feature map extracted from the image to be detected by the CNN is usually smaller than the feature maps of the images in the training set G, because only part of the face is present.
Referring to FIG. 5, the sliding window slides the feature map p of the image to be detected over the feature map g1 of a training image for the first time. The feature map p extracted from the image to be detected y is slid with stride step over the feature map of each image in the training set, yielding M sub-feature maps of the same size as p for each image; these form a new training set G', with G'i = [g1, g2, ..., gN], i = 1, 2, ..., M. Here step is a positive integer smaller than the height and width of the feature map p, taken as step = 1; the smaller the step, the finer the features of the training-set images obtained through the sliding window.
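The sliding-window generation of same-size sub-feature maps (step c2) can be sketched as below; with a 4×4 training feature map, a 2×2 probe map, and step = 1, this yields M = 9 sub-maps. `sliding_subsets` is a hypothetical helper operating on plain NumPy arrays rather than real CNN feature maps:

```python
import numpy as np

def sliding_subsets(feature_map, probe_shape, step=1):
    """Step c2 sketch: slide a window the size of the probe's feature map p
    over a training image's feature map and collect every sub-map.
    step=1 gives the finest-grained set of candidates."""
    H, W = feature_map.shape[:2]
    h, w = probe_shape
    subs = []
    for i in range(0, H - h + 1, step):
        for j in range(0, W - w + 1, step):
            subs.append(feature_map[i:i + h, j:j + w])
    return subs
```

In general an (H, W) map and an (h, w) probe give M = ((H−h)//step + 1) × ((W−w)//step + 1) sub-maps, which is why a smaller step produces a finer (and larger) new training set.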
Taking the mean of the images of the same class in G', a prototype dictionary P = [μ1, μ2, ..., μL] is constructed; a variation dictionary V = [F1 − μ1e1, F2 − μ2e2, ..., FL − μLeL] is then constructed, where ek is an all-ones row vector of dimension lk.
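A hedged sketch of the two dictionaries: per-class means form the prototype dictionary P, and each column minus its class mean forms the variation dictionary V (feature vectors as columns; the helper name `build_dictionaries` is illustrative):

```python
import numpy as np

def build_dictionaries(G_new, class_labels):
    """Sketch of P = [mu_1, ..., mu_L] (per-class mean columns) and
    V = [F_1 - mu_1 e_1, ..., F_L - mu_L e_L] (each column minus its
    class mean). Columns of G_new are feature vectors; class_labels
    gives the class of each column."""
    classes = sorted(set(class_labels))
    P_cols, V_cols = [], []
    for c in classes:
        idx = [i for i, lab in enumerate(class_labels) if lab == c]
        Fk = G_new[:, idx]                 # columns of class k
        mu = Fk.mean(axis=1)               # class prototype mu_k
        P_cols.append(mu)
        V_cols.append(Fk - mu[:, None])    # F_k - mu_k e_k
    return np.stack(P_cols, axis=1), np.concatenate(V_cols, axis=1)
```

By construction the columns of V sum to zero within each class: V captures only intra-class variation (occlusion, illumination, expression), which is what makes the later sample correction meaningful.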
Based on the sparse representation method, the image to be detected p can be represented as p = G'α, with the sparse coefficient obtained by solving

α̂ = argmin_α ||p − G'α||2^2 + λ1·||d ⊙ α||1 + λ2·||α||2^2,

where d = [d1, d2, ..., dN] and di = ||p − gi||2 measures the similarity between the image to be detected and the i-th image in the new training set G'; λ1 and λ2 are two constants set according to existing theory and methods, taken here as λ1 = 3.1 and λ2 = 0.4.
Sample correction then subtracts from the image to be detected the corresponding variation components present in the variation dictionary: the correction coefficient is obtained by solving

β̂ = argmin_β ||p − G'α̂ − Vβ||2^2 + λ3·||β||1,

and the corrected image is p̂ = p − Vβ̂; λ3 is a constant set according to existing theory and methods, taken here as λ3 = 3.6.
Then the residual between the image to be detected and the i-th image in the new training set is calculated according to expression (a):

ri(y) = ||p̂ − gi·α̂i||2 (a)
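The sparse coefficients above can be obtained with any l1 solver; purely as an illustration, a plain ISTA iteration for the unweighted lasso min_a 0.5·||p − Aa||2^2 + λ·||a||1 is sketched below. This is a generic stand-in for a sparse-representation solve, not the patent's exact weighted objective:

```python
import numpy as np

def ista_lasso(A, p, lam=0.1, n_iter=200):
    """Hedged sketch: solve min_a 0.5*||p - A a||_2^2 + lam*||a||_1
    by ISTA (gradient step followed by soft thresholding). A generic
    sparse-coding solver, not the patent's weighted formulation."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ a - p)           # gradient of the smooth term
        z = a - grad / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a
```

With A the identity, the solution reduces to element-wise soft thresholding of p, which makes the sparsifying effect of λ easy to see.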
Step d: repeat step c with the mirror image y' of the image to be detected to calculate the residual ri(y').
Step e: when the weighted score is calculated, considering that the left and right half faces of a person are not exactly the same, the residual of the image to be detected y is given weight ω1 and the residual of the mirror image y' is given weight 1 − ω1; the new score is calculated according to expression (b):
scorei(y) = ω1 × ri(y) + (1 − ω1) × ri(y')   (b)
Since the left and right half faces of a person are unlikely to be identical, the original image to be detected should carry more weight, so the residual weights of y and y' are set to ω1 = 0.6 and 1 − ω1 = 0.4 respectively.
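The fusion of expression (b) and the prediction of expression (c) reduce to a weighted sum of the two residual vectors followed by an argmin; a minimal sketch (illustrative names):

```python
import numpy as np

def predict_label(r_y, r_mirror, labels, w1=0.6):
    """Steps d-e sketch: fuse the residual vectors of the probe y and its
    mirror y' with weights w1 and 1 - w1 (expression b), then return the
    label of the training image with the minimum score (expression c)."""
    score = w1 * np.asarray(r_y) + (1 - w1) * np.asarray(r_mirror)
    return labels[int(np.argmin(score))]
```

The weight w1 = 0.6 matches the choice above of emphasizing the original probe over its mirror.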
Based on the score minimization criterion, the minimum score between the image to be detected and the training images is found according to expression (c), and the class of the corresponding training image is taken as the prediction result:

identity(y) = zj, j = argmin_i scorei(y)   (c),

where zi denotes the label of the i-th image in the training set.
The above embodiments are only for further illustration of the present invention and are not intended to limit the present invention, and all equivalent implementations of the present invention should be included in the scope of the claims of the present invention.
Claims (6)
1. A partial face recognition method based on a convolutional neural network and sparse representation, characterized in that the mirror image of the image to be detected is added to expand the sample set, a convolutional neural network extracts features from the training set and the image to be detected to obtain the corresponding feature maps from which feature vectors are constructed, residuals are calculated by sparse representation and sample correction, and input images are classified based on a score minimization criterion; the specific process comprises the following steps:
step a: making an image training set;
step b: obtaining a mirror image of an input image to be detected and carrying out sample expansion;
step c: calculating residual errors by using a sliding window, sparse representation and sample correction method;
step d: repeating the step c by utilizing a mirror image of the image to be detected to calculate the residual error;
step e: and combining two residual errors obtained after sample expansion, calculating a new score, and predicting the unknown type of image to be detected based on a score minimization criterion.
2. The partial face recognition method based on a convolutional neural network and sparse representation as claimed in claim 1, wherein the image training set is a labeled sample image set, or a set of original images cropped to eliminate background interference and retain only the complete face part; the features of all training images are extracted with the CNN and a feature matrix is constructed.
3. The partial face recognition method based on a convolutional neural network and sparse representation as claimed in claim 1, wherein the sample expansion mirror-flips the input original image to be detected into a new image to be detected, whose residual is weighted and added to the residual of the original image to be detected so that both participate in recognition together.
4. The partial face recognition method based on the convolutional neural network and the sparse representation as claimed in claim 1, wherein the residual error is calculated by using a sliding window, a sparse representation and a sample correction method, and the specific calculation comprises the following steps:
step c1: generating feature mapping and feature vectors of the image to be detected by using the CNN;
step c2: for each training image in the training set, generating a subset with the same size as the feature mapping of the image to be detected by using a sliding window, thereby obtaining a new training set;
step c3: calculating a sparse coefficient based on a newly generated training set and a sparse representation method;
step c4: constructing a change dictionary by utilizing the newly generated training set;
step c5: subtracting a corresponding change part existing in the change dictionary from the image to be detected by utilizing the sparse representation and the linear additive model to finish sample correction;
step c6: and calculating the residual error between the image to be detected and each training image in the training set by using the sparse coefficient and the corrected image.
5. The partial face recognition method based on a convolutional neural network and sparse representation as claimed in claim 1, wherein the two residuals are combined as a weighted sum that assigns different weights to the residuals of the original image to be detected and of its mirror image.
6. The partial face recognition method based on a convolutional neural network and sparse representation as claimed in claim 1, wherein predicting the unknown class of the image to be detected calculates the minimum score between the image to be detected and each training image in the training set and takes the class of the corresponding training image as the prediction result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010267944.0A CN111523404A (en) | 2020-04-08 | 2020-04-08 | Partial face recognition method based on convolutional neural network and sparse representation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111523404A true CN111523404A (en) | 2020-08-11 |
Family
ID=71901929
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010267944.0A Pending CN111523404A (en) | 2020-04-08 | 2020-04-08 | Partial face recognition method based on convolutional neural network and sparse representation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111523404A (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104392246A (en) * | 2014-12-03 | 2015-03-04 | 北京理工大学 | Inter-class inner-class face change dictionary based single-sample face identification method |
US20160034789A1 (en) * | 2014-08-01 | 2016-02-04 | TCL Research America Inc. | System and method for rapid face recognition |
CN107563328A (en) * | 2017-09-01 | 2018-01-09 | 广州智慧城市发展研究院 | A kind of face identification method and system based under complex environment |
CN108197573A (en) * | 2018-01-03 | 2018-06-22 | 南京信息工程大学 | The face identification method that LRC and CRC deviations based on mirror image combine |
CN108664917A (en) * | 2018-05-08 | 2018-10-16 | 佛山市顺德区中山大学研究院 | Face identification method and system based on auxiliary change dictionary and maximum marginal Linear Mapping |
CN108681725A (en) * | 2018-05-31 | 2018-10-19 | 西安理工大学 | A kind of weighting sparse representation face identification method |
CN108875459A (en) * | 2017-05-08 | 2018-11-23 | 武汉科技大学 | One kind being based on the similar weighting sparse representation face identification method of sparse coefficient and system |
CN109766813A (en) * | 2018-12-31 | 2019-05-17 | 陕西师范大学 | Dictionary learning face identification method based on symmetrical face exptended sample |
CN110210336A (en) * | 2019-05-16 | 2019-09-06 | 赣南师范大学 | A kind of low resolution single sample face recognition method |
- 2020-04-08: CN application CN202010267944.0A filed; publication CN111523404A; status Pending
Non-Patent Citations (3)
Title |
---|
L. He et al., "Dynamic Feature Matching for Partial Face Recognition", IEEE Trans. Image Process.
Y. Gao et al., "Semi-Supervised Sparse Representation Based Classification for Face Recognition with Insufficient Labeled Samples", IEEE Trans. Image Process.
Zhang Yan, "Single-sample face recognition based on variation sparse representation", Journal of Information Engineering University.
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112381070A (en) * | 2021-01-08 | 2021-02-19 | 浙江科技学院 | Fast robust face recognition method |
CN114997253A (en) * | 2021-02-23 | 2022-09-02 | 哈尔滨工业大学 | Intelligent state anomaly detection method, monitoring system and monitoring method for satellite constellation |
CN112949636A (en) * | 2021-03-31 | 2021-06-11 | 上海电机学院 | License plate super-resolution identification method and system and computer readable medium |
CN112949636B (en) * | 2021-03-31 | 2023-05-30 | 上海电机学院 | License plate super-resolution recognition method, system and computer readable medium |
Legal Events
Date | Code | Title
---|---|---
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination
2020-08-11 | WD01 | Invention patent application deemed withdrawn after publication