CN110458092B - Face recognition method based on L2 regularization gradient constraint sparse representation - Google Patents

Face recognition method based on L2 regularization gradient constraint sparse representation

Info

Publication number
CN110458092B
CN110458092B (application CN201910733434.5A)
Authority
CN
China
Legal status: Active
Application number
CN201910733434.5A
Other languages
Chinese (zh)
Other versions
CN110458092A (en)
Inventor
张皖
高广谓
朱冬
汪焰南
吴松松
岳东
Current Assignee
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Application filed by Nanjing University of Posts and Telecommunications
Priority to CN201910733434.5A
Publication of CN110458092A
Application granted
Publication of CN110458092B

Classifications

    • G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24: Pattern recognition; classification techniques
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Human faces; classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A face recognition method based on an L2 regularization gradient constraint sparse representation, the method comprising: acquiring a training sample set; calculating a representation coefficient of a sample to be recognized on a training sample of the training sample set based on the facial image gradient recovery constraint information and an L2 regularization sparse representation method; calculating residual errors of the sample to be recognized on each class of training samples of the training sample set by adopting the representation coefficients of the sample to be recognized on the training samples; and outputting the training sample category corresponding to the minimum residual error obtained by calculation as the category of the sample to be recognized. By this scheme, the accuracy of face recognition can be improved.

Description

Face recognition method based on L2 regularization gradient constraint sparse representation
Technical Field
The invention belongs to the technical field of image recognition, and particularly relates to a face recognition method based on L2 regularization gradient constraint sparse representation.
Background
Human face recognition is a popular research topic in the field of computer vision. It integrates computer image processing and statistical techniques, and its non-contact, non-intrusive nature has led to wide application in many fields, such as finance, public security systems, social security, and face recognition at airport border inspection.
Sparse representation has achieved remarkable performance in face recognition. Sparse representation methods usually impose constraint conditions, the most widely used being L1 regularization, L2 regularization, and L21 regularization. Among them, representation methods based on L2 regularization have a distinct advantage: they admit a closed-form solution and take the correlation between samples into account. However, these methods use only visual-level features and do not fully exploit other information in the image; besides visual-level features, gradient information is also an important feature for image processing and recognition.
However, the conventional face recognition method using sparse representation has the problem of low recognition accuracy.
Disclosure of Invention
The invention solves the technical problem of how to improve the accuracy of face recognition.
In order to achieve the above object, an embodiment of the present invention provides a face recognition method based on L2 regularization gradient constraint sparse representation, where the method includes:
acquiring a training sample set;
calculating a representation coefficient of a sample to be recognized on a training sample of the training sample set based on the facial image gradient recovery constraint information and an L2 regularization sparse representation method;
calculating residual errors of the samples to be recognized on each class of training samples of the training sample set by adopting the representation coefficients of the samples to be recognized on the training samples;
and outputting the training sample category corresponding to the minimum residual error obtained by calculation as the category of the sample to be recognized.
Optionally, the facial image gradient restoration constraint information is:
[Formula shown as an image in the original]
wherein G_n denotes the gradient map of the n-th image block of the input sample to be recognized, G_in denotes the gradient map of the restored sample to be recognized, w_n denotes the weight vector of the input sample to be recognized, n indexes the image blocks of the input sample to be recognized, and N denotes the total number of image blocks of the input sample to be recognized.
Optionally, the calculating a representation coefficient of the sample to be recognized on the training sample of the training sample set includes:
respectively dividing the samples in the training sample set and the sample to be recognized into overlapping image blocks;
constructing an objective function of the weight vector of the image block, and solving to obtain an optimal weight vector;
calculating an estimation graph of each type of sample to be identified based on the calculated optimal weight vector;
taking the calculated estimation map of each class of samples to be recognized as the input sample to be recognized for the next iteration, and repeating from the step of dividing the samples in the training sample set and the samples to be recognized into overlapping image blocks, until the number of iterations reaches a preset threshold, so as to obtain a final estimation map of each class of samples to be recognized;
constructing a face image target function based on L2 regularization sparse representation based on the calculated final estimation graph of each type of sample to be recognized;
and solving the optimal solution of the facial image objective function based on the L2 regularized sparse representation as a representation coefficient of the sample to be recognized on the training sample of the training sample set.
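The block division in the steps above can be sketched as follows. This is a minimal sketch: the block size and stride are illustrative, since the source does not specify them.

```python
import numpy as np

def extract_overlapping_blocks(img, block, step):
    """Divide an image into a U x V grid of overlapping blocks.

    img   : 2-D array of shape (H, W)
    block : block side length (the source does not fix a size)
    step  : stride between block origins; step < block gives overlap
    Returns a dict {(p, q): block_array} with 1-based grid indices,
    mirroring the notation {y(p, q) | 1 <= p <= U, 1 <= q <= V}.
    """
    H, W = img.shape
    blocks = {}
    for p, r in enumerate(range(0, H - block + 1, step), start=1):
        for q, c in enumerate(range(0, W - block + 1, step), start=1):
            blocks[(p, q)] = img[r:r + block, c:c + block]
    return blocks

# Example: an 8x8 image split into 4x4 blocks with stride 2 (50% overlap)
img = np.arange(64, dtype=float).reshape(8, 8)
blocks = extract_overlapping_blocks(img, block=4, step=2)
```

The same routine would be applied to every training sample and to the sample to be recognized, so that corresponding blocks share a grid index (p, q).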
Optionally, the constructing an objective function of a weight vector of the image block and solving to obtain an optimal weight vector includes:
constructing an objective function of the weight vector;
and converting the constructed objective function of the weight vector into a corresponding matrix form, and solving it with the least angle regression algorithm to obtain the optimal weight vector.
Optionally, the objective function of the weight vector is:
[Formula shown as an image in the original]
wherein y(p, q) denotes the image block in row p, column q of the sample to be recognized, x_in(p, q) denotes the image block in row p, column q of the n-th image of the i-th class in the training sample set, w_n(p, q) denotes the weight vector of the image block in row p, column q, α denotes a parameter that balances the gradient-map reconstruction error, G_n(p, q) denotes the gradient map of the image block in row p, column q of the originally input sample to be recognized, and G_in(p, q) denotes the gradient map of the image block in row p, column q of the originally restored sample to be recognized.
Optionally, the estimation map of each type of sample to be identified is calculated by using the following formula:
[Formula shown as an image in the original]
wherein the left-hand side denotes the output estimation map of the i-th class of samples to be recognized.
Optionally, the following formula is adopted to calculate the residual error of the sample to be identified on each type of training sample in the training sample set:
[Formula shown as an image in the original]
wherein d_i denotes the residual of the sample to be recognized on the i-th class of training samples of the training sample set.
Compared with the prior art, the invention has the following beneficial effects:
according to the scheme, the representation coefficients of the samples to be recognized on the training samples of the training sample set are calculated by obtaining the training samples, based on the gradient recovery constraint information of the face images and an L2 regularization sparse representation method, the representation coefficients of the samples to be recognized on the training samples are adopted, the residual errors of the samples to be recognized on each class of training samples of the training sample set are calculated, finally, the training sample class corresponding to the minimum residual error obtained through calculation is used as the class of the samples to be recognized to be output, and as the gradient information of the face images is fully utilized, L2 regularization is integrated into the training sample class to be recognized to obtain the representation coefficients of the samples to be recognized, the recognition accuracy of the face images can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
Fig. 1 is a schematic flowchart of a face recognition method based on L2 regularization gradient constraint sparse representation according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application. The directional indications (such as up, down, left, right, front, back, etc.) in the embodiments of the present invention are only used to explain the relative positional relationship between the components, the movement, etc. in a specific posture (as shown in the drawings), and if the specific posture is changed, the directional indication is changed accordingly.
As described in the background art, existing face recognition methods using sparse representation only use visual-level features and suffer from low recognition accuracy.
According to the technical scheme, a training sample set is obtained; the representation coefficients of the sample to be recognized on the training samples of the training sample set are calculated based on the facial image gradient recovery constraint information and an L2 regularization sparse representation method; the residuals of the sample to be recognized on each class of training samples are calculated using these representation coefficients; and finally the training sample class corresponding to the minimum calculated residual is output as the class of the sample to be recognized.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Fig. 1 is a schematic flowchart of a face recognition method based on L2 regularization gradient constraint sparse representation according to an embodiment of the present invention. Referring to fig. 1, a face recognition method based on L2 regularization gradient constraint sparse representation may specifically include the following steps:
step S101: a training sample set is obtained.
In a specific implementation, the training sample set includes L classes of training samples. If there are m training samples per class, there are N = mL training samples in total. The m training samples of class c (c = 1, 2, …, L) are represented by column vectors x_{m(c-1)+1}, x_{m(c-1)+2}, …, x_{cm}.
Step S102: calculating a representation coefficient of a sample to be recognized on a training sample of the training sample set based on the determined facial image gradient recovery constraint information and an L2 regularization sparse representation method;
in a specific implementation, a training sample matrix X ═ X1 … Xc … XL is defined],Xc=[x m(c-1)+1 ,x m(c-1)+2 …x cm ]And the sample to be identified is represented by a column vector y. Suppose a sample y to be identified and each training sample x m(i-1)+j (i-1 … L, j-1, K … m) are all D-dimensional column vectors, and since there are L classes and N training samples in total, X is a D × N matrix.
The sample to be recognized and the training samples are each divided into a grid of overlapping blocks with U block rows and V block columns: {y(p, q) | 1 ≤ p ≤ U, 1 ≤ q ≤ V} and {x(p, q) | 1 ≤ p ≤ U, 1 ≤ q ≤ V}. For the sample y to be recognized, x(p, q) and a weight vector w(p, q) can be used to generate an estimation map of the sample to be recognized. Specifically, the method comprises the following steps:
the gradient image is represented as:
[Equation (1), shown as an image in the original]
where x_in denotes the n-th image of the i-th class in the training set.
Based on the idea of gradient constraint, that is, the face image can also be restored in the gradient domain, a gradient restoration constraint is introduced, as follows:
[Equation (2), shown as an image in the original]
where G_n denotes the gradient map of the n-th image block of the input sample to be recognized, G_in denotes the gradient map of the restored sample to be recognized, w_n denotes the weight vector of the input sample to be recognized, n indexes the image blocks of the input sample, and N denotes the total number of image blocks of the input sample to be recognized.
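The gradient maps G_n and G_in themselves appear only as images in the source. A minimal sketch of one common, assumed choice is a finite-difference gradient-magnitude map:

```python
import numpy as np

def gradient_map(patch):
    """Gradient-magnitude map of an image patch.

    The gradient operator is shown only as an image in the source;
    first-order finite differences (np.gradient, central differences
    in the interior) are one common, assumed choice.
    """
    gy, gx = np.gradient(patch.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2)

flat = np.full((4, 4), 7.0)             # constant patch -> zero gradient everywhere
ramp = np.tile(np.arange(4.0), (4, 1))  # left-to-right ramp -> unit gradient everywhere
```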
In an embodiment of the present invention, when calculating a representation coefficient of a sample to be recognized on a training sample of the training sample set based on the determined facial image gradient recovery constraint information and an L2 regularization sparse representation method, a weight vector w (p, q) is first solved, that is, an objective function about w (p, q) is constructed as follows:
[Equation (3), shown as an image in the original]
where y(p, q) denotes the image block in row p, column q of the sample to be recognized, x_in(p, q) denotes the image block in row p, column q of the n-th image of the i-th class in the training sample set, w_n(p, q) denotes the weight vector of the image block in row p, column q, α denotes a parameter that balances the gradient-map reconstruction error, G_n(p, q) denotes the gradient map of the image block in row p, column q of the originally input sample to be recognized, and G_in(p, q) denotes the gradient map of the image block in row p, column q of the originally restored sample to be recognized.
The above equation (3) can be converted into a matrix form:
[Equation (4), shown as an image in the original]
In equation (4), the stacked matrices and vectors are defined as:
[Definitions shown as an image in the original]
through the transformation, the above formula is weighted sparse representation and can be solved through a minimum angle regression algorithm. After the optimal weight direction w (p, q) is obtained, an output estimation graph of the ith type of samples to be identified can be obtained:
Figure BDA0002161379100000065
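Based on the symbol descriptions around equations (3) and (4), the per-block objective is assumed here to take the form min_w ||y - X w||^2 + alpha * ||g - G w||^2. The sketch below stacks the intensity and gradient residuals into a single ordinary least-squares system; the patent's sparse variant is solved with least angle regression, which is not reproduced here.

```python
import numpy as np

def solve_patch_weights(y_patch, X_patches, g_patch, G_patches, alpha):
    """Least-squares sketch of min_w ||y - X w||^2 + alpha * ||g - G w||^2.

    y_patch   : flattened test block, shape (d,)
    X_patches : flattened training blocks as columns, shape (d, N)
    g_patch   : gradient map of the test block, flattened, shape (d,)
    G_patches : gradient maps of the training blocks, shape (d, N)
    alpha     : the patent's gradient-balancing parameter

    Scaling the gradient terms by sqrt(alpha) and stacking them under
    the intensity terms merges the two residuals into one system.
    """
    A = np.vstack([X_patches, np.sqrt(alpha) * G_patches])
    b = np.concatenate([y_patch, np.sqrt(alpha) * g_patch])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Toy example: y is exactly 2*x1 + 3*x2, with all gradient maps zero
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
G = np.zeros_like(X)
y = np.array([2.0, 3.0, 0.0])
g = np.zeros(3)
w = solve_patch_weights(y, X, g, G, alpha=0.5)
```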
gradient information is very important in our algorithm, and better gradient information can lead to better results. But the gradient of the original input image lacks detailed information of gradient recovery. In order to solve the problem, an iteration method can be used, a reconstructed face image obtained by using a Gradient Constraint Sparse Representation (GCSR) method each time is used as an input image of the next iteration, and the iteration step n (a preset time threshold) is repeatedly executed for n times, so as to obtain a final estimation image of the i-th class of samples to be identified.
The face image objective function based on the L2-regularized sparse representation is constructed as follows:
[Equation (6), shown as an image in the original]
where y denotes the sample to be recognized, X the training sample matrix, and B the coefficient matrix. The first term of the objective function minimizes the residual; the second term has a decorrelation effect, removing the correlation among the representation results of different classes; and γ is a positive constant that balances the influence of the two terms on the L2-regularized sparse representation objective.
Writing B = [b_1, b_2, …, b_N]^T, with B_i = [b_{m(i-1)+1}, b_{m(i-1)+2}, …, b_{mi}]^T for class i (and similarly B_j for class j), so that B = [B_1^T, B_2^T, …, B_L^T]^T, it can be shown that the objective function above is a smooth convex function, so its optimal solution is its stationary point. Differentiating equation (6) gives:
[Equation (7), shown as an image in the original]
Denoting:
[Equation (8), shown as an image in the original]
then:
[Equation (9), shown as an image in the original]
In summary, letting:
[Equation (10), shown as an image in the original]
gives:
[Equation (11), shown as an image in the original]
Therefore, the stationary point of g is the point where this derivative vanishes, that is:
((1 + 2γ)X^T X + 2γLM)B = X^T y    (12)
finally, the optimal solution of the objective function is obtained as follows:
B = ((1 + 2γ)X^T X + 2γLM)^(-1) X^T y    (13)
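The closed-form solution of equation (13) can be sketched directly. The matrix written as "LM" is defined only in the image-rendered derivation, so it is treated here as an assumed precomputed input:

```python
import numpy as np

def representation_coefficients(X, y, gamma, LM):
    """Closed-form solve of B = ((1+2*gamma) X^T X + 2*gamma*LM)^(-1) X^T y.

    X     : D x N training sample matrix
    y     : D-dimensional sample to be recognized
    gamma : the balancing constant of the objective function
    LM    : the N x N matrix written as "LM" in equation (13); its
            construction is in the image-rendered derivation, so it is
            treated as a precomputed input.
    """
    A = (1 + 2 * gamma) * (X.T @ X) + 2 * gamma * LM
    # Solving the linear system is numerically preferable to forming the inverse.
    return np.linalg.solve(A, X.T @ y)

# Toy example with X = I and LM = I: A = 3*I, so B = y / 3
X = np.eye(3)
LM = np.eye(3)
y = np.array([3.0, 6.0, 9.0])
B = representation_coefficients(X, y, gamma=0.5, LM=LM)
```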
step S103: and calculating the residual error of the sample to be recognized on each type of training sample of the training sample set by adopting the representation coefficient of the sample to be recognized on the training sample.
In an embodiment of the present invention, the residual of the sample to be recognized on each class of training samples in the training sample set is calculated with the following formula:
[Formula shown as an image in the original]
where d_i denotes the residual of the sample to be recognized on the i-th class of training samples of the training sample set.
Step S104: and outputting the training sample category corresponding to the minimum residual error obtained by calculation as the category of the sample to be identified.
In a specific implementation, after the residuals of the sample to be recognized on each class of training samples of the training sample set have been calculated, the training sample class corresponding to the minimum of the calculated residuals is output as the class of the sample to be recognized.
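Steps S103 and S104 can be sketched together. The residual formula itself is shown only as an image in the source, so the standard sparse-representation residual d_i = ||y - X_i B_i||_2 is assumed here:

```python
import numpy as np

def classify(y, X_classes, B_classes):
    """Minimum-residual classification (steps S103 and S104).

    The residual formula is shown only as an image in the source; the
    standard sparse-representation residual d_i = ||y - X_i B_i||_2 is
    assumed.

    X_classes : list of per-class training matrices X_i (D x m_i)
    B_classes : list of per-class coefficient vectors B_i (m_i,)
    Returns (index of the minimum-residual class, list of residuals).
    """
    residuals = [float(np.linalg.norm(y - Xi @ Bi))
                 for Xi, Bi in zip(X_classes, B_classes)]
    return int(np.argmin(residuals)), residuals

# Toy example: y lies exactly in the span of class 0
X_classes = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
B_classes = [np.array([1.0]), np.array([1.0])]
y = np.array([1.0, 0.0])
cls, res = classify(y, X_classes, B_classes)
```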
By adopting the scheme in the embodiment of the invention, the facial image gradient recovery constraint information of the sample to be recognized is determined from the obtained training sample set; the representation coefficients of the sample to be recognized on the training samples of the training sample set are calculated based on this constraint information and the L2 regularization sparse representation method; the residuals of the sample to be recognized on each class of training samples are calculated from these coefficients; and finally the training sample class corresponding to the minimum calculated residual is output as the class of the sample to be recognized. Because the gradient information of the face image is fully utilized, the recognition accuracy of face images can be improved.
The foregoing shows and describes the general principles, principal features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are described in the foregoing description only for the purpose of illustrating the principles of the present invention, but that various changes and modifications may be made therein without departing from the spirit and scope of the invention as defined by the appended claims, specification, and equivalents thereof.

Claims (7)

1. A face recognition method based on L2 regularization gradient constraint sparse representation is characterized by comprising the following steps:
acquiring a training sample set;
calculating a representation coefficient of a sample to be recognized on a training sample of the training sample set based on the facial image gradient recovery constraint information and an L2 regularization sparse representation method;
calculating residual errors of the samples to be recognized on each class of training samples in the training sample set by adopting the representation coefficients of the samples to be recognized on the training samples;
and outputting the training sample category corresponding to the minimum residual error obtained by calculation as the category of the sample to be recognized.
2. The method for recognizing the face based on the L2 regularized gradient constraint sparse representation as claimed in claim 1, wherein the facial image gradient restoration constraint information is:
[Formula shown as an image in the original]
wherein G_n denotes the gradient map of the n-th image block of the input sample to be recognized, G_in denotes the gradient map of the restored sample to be recognized, w_n denotes the weight vector of the input sample to be recognized, n indexes the image blocks of the input sample to be recognized, and N denotes the total number of image blocks of the input sample to be recognized.
3. The method for recognizing the face based on the L2 regularized gradient constraint sparse representation as claimed in claim 1, wherein the calculating the representation coefficients of the samples to be recognized on the training samples of the training sample set based on the face image gradient recovery constraint information and the L2 regularized sparse representation method comprises:
respectively dividing the samples in the training sample set and the samples to be identified into overlapped image blocks;
constructing an objective function of the weight vectors of the divided image blocks, and solving to obtain an optimal weight vector;
calculating an estimation graph of each type of sample to be identified based on the calculated optimal weight vector;
taking the calculated estimation image of each type of sample to be recognized as a sample to be recognized input in next iteration, and starting execution from the step of dividing the samples in the training sample set and the samples to be recognized into overlapped image blocks respectively until the iteration times reach a preset time threshold value, so as to obtain a final estimation image of each type of sample to be recognized;
constructing a face image target function based on L2 regularization sparse representation based on the calculated final estimation graph of each type of sample to be recognized;
and solving the optimal solution of the face image target function based on the L2 regularized sparse representation as a representation coefficient of the sample to be recognized on the training sample of the training sample set.
4. The L2 regularized gradient constraint sparse representation-based face recognition method according to claim 3, wherein constructing an objective function of weight vectors of image blocks and solving to obtain an optimal weight vector comprises:
constructing an objective function of the weight vector;
and converting the constructed objective function of the weight vector into a corresponding matrix form, and solving it with the least angle regression algorithm to obtain the optimal weight vector.
5. The L2 regularized gradient constraint sparse representation-based face recognition method of claim 4, wherein an objective function of the weight vector is:
[Formula shown as an image in the original]
wherein y(p, q) denotes the image block in row p, column q of the sample to be recognized, x_in(p, q) denotes the image block in row p, column q of the n-th image of the i-th class in the training sample set, w_n(p, q) denotes the weight vector of the image block in row p, column q, α denotes a parameter that balances the gradient-map reconstruction error, G_n(p, q) denotes the gradient map of the image block in row p, column q of the originally input sample to be recognized, and G_in(p, q) denotes the gradient map of the image block in row p, column q of the originally restored sample to be recognized.
6. The face recognition method based on L2 regularization gradient constraint sparse representation according to claim 5, wherein the estimation graph of each type of sample to be recognized is obtained by calculation with the following formula:
[Formula shown as an image in the original]
wherein the left-hand side denotes the output estimation map of the i-th class of samples to be recognized.
7. The L2 regularized gradient constraint sparse representation-based face recognition method according to claim 6, wherein the residual error of the sample to be recognized on each class of training samples in the training sample set is calculated by adopting the following formula:
[Formula shown as an image in the original]
wherein d_i denotes the residual of the sample to be recognized on the i-th class of training samples of the training sample set, X is the training sample matrix, and B is the coefficient matrix.
CN201910733434.5A 2019-08-09 2019-08-09 Face recognition method based on L2 regularization gradient constraint sparse representation Active CN110458092B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910733434.5A CN110458092B (en) 2019-08-09 2019-08-09 Face recognition method based on L2 regularization gradient constraint sparse representation


Publications (2)

Publication Number Publication Date
CN110458092A (en) 2019-11-15
CN110458092B (en) 2022-08-30

Family

ID=68485718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910733434.5A Active CN110458092B (en) 2019-08-09 2019-08-09 Face recognition method based on L2 regularization gradient constraint sparse representation

Country Status (1)

Country Link
CN (1) CN110458092B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318261A (en) * 2014-11-03 2015-01-28 河南大学 Graph embedding low-rank sparse representation recovery sparse representation face recognition method
CN105574475A (en) * 2014-11-05 2016-05-11 华东师范大学 Common vector dictionary based sparse representation classification method
CN108875459A (en) * 2017-05-08 2018-11-23 武汉科技大学 One kind being based on the similar weighting sparse representation face identification method of sparse coefficient and system
CN109766813A (en) * 2018-12-31 2019-05-17 陕西师范大学 Dictionary learning face identification method based on symmetrical face exptended sample


Also Published As

Publication number Publication date
CN110458092A (en) 2019-11-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant