CN111582057B - Face verification method based on local receptive field - Google Patents


Info

Publication number
CN111582057B
CN111582057B · Application CN202010310755.7A
Authority
CN
China
Prior art keywords
face
neural network
input picture
picture
verification
Prior art date
Legal status
Active
Application number
CN202010310755.7A
Other languages
Chinese (zh)
Other versions
CN111582057A (en)
Inventor
刘昊
花硕硕
庞伟
陆生礼
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN202010310755.7A priority Critical patent/CN111582057B/en
Publication of CN111582057A publication Critical patent/CN111582057A/en
Application granted granted Critical
Publication of CN111582057B publication Critical patent/CN111582057B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06V 40/161 — Human faces: detection; localisation; normalisation
    • G06V 40/168 — Human faces: feature extraction; face representation
    • G06V 40/172 — Human faces: classification, e.g. identification
    • G06N 3/045 — Neural networks: combinations of networks (architectures, interconnection topology)
    • G06N 3/08 — Neural networks: learning methods

Abstract

The invention discloses a face verification method based on local receptive fields, belonging to the technical field of computing, calculating or counting. The method comprises the following steps: establishing an external data set and performing data enhancement on its samples; establishing a convolutional neural network whose input is a color picture and whose outputs are the feature vector of the face region in the picture and the predicted box coordinates of the face position, the feature vector of the corresponding region being output according to the position of the predicted box during testing; and testing the pre-trained convolutional neural network with the test set and fine-tuning it according to the test results. Exploiting the translation invariance of deep neural networks, the invention extracts the features of the face region with a single network so that the receptive field of the feature vector contains exactly the face, which effectively reduces the noise introduced by background information and preserves verification accuracy while improving the parallelism of network computation and greatly simplifying the training process.

Description

Face verification method based on local receptive field
Technical Field
The invention discloses a face verification method based on local receptive fields, relates to computer vision technology for face verification, and belongs to the technical field of computing, calculating or counting.
Background
Face recognition is an important part of computer vision, and aims to correctly identify the person in a face picture. The current mainstream approach classifies face pictures with a classification neural network. However, a classification network must be designed for a fixed set of categories, and new identities cannot be added after training is complete, which makes this approach inflexible in actual use. Face verification avoids this: a neural network extracts features from a face picture to generate a feature vector for the face in the picture, the Euclidean distance between the feature vectors of different faces is computed, and a threshold decides whether two faces belong to the same person. Therefore, to add a new identity, one only needs to generate and store that identity's face features with the network; the identity can then be recognized by comparison against the feature vector of a newly input sample.
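The threshold test just described can be sketched in a few lines (a minimal sketch: the 128-dimensional embedding size and the threshold value 1.1 are illustrative assumptions, not values fixed by the patent):

```python
import numpy as np

def verify(feat_a, feat_b, threshold=1.1):
    """Same-person decision on two face feature vectors.

    feat_a, feat_b: embeddings produced by the verification network.
    threshold: Euclidean-distance cutoff, tuned on a validation set
    (the value 1.1 here is an illustrative assumption).
    """
    dist = np.linalg.norm(np.asarray(feat_a) - np.asarray(feat_b))
    return dist < threshold

# Identical embeddings: distance 0, same person.
e = np.ones(128) / np.sqrt(128.0)
assert verify(e, e)

# Orthogonal unit vectors: distance sqrt(2) > 1.1, different people.
a = np.zeros(128); a[0] = 1.0
b = np.zeros(128); b[1] = 1.0
assert not verify(a, b)
```

Adding a new identity then only requires storing its embedding; no retraining of the network is needed.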
Current face verification algorithms, however, work in two steps. The picture first passes through a face detection network to obtain the position coordinates of the face; the face part of the picture is cropped out using these coordinates, which reduces the noise caused by the background; the cropped face picture is then input into a face verification network for the subsequent computation. In other words, face verification requires two networks, which means that a face detection network and a face verification network must each be trained separately. This is inconvenient during training, and because two separate networks are used, parallelism is reduced in actual use. It is essentially a two-stage face verification algorithm.
A convolutional neural network has translation invariance, so the features of a designated area of the picture can be obtained simply by adjusting the convolution kernel and stride; that is, the receptive fields of the features correspond to specific areas of the picture. Using this property, only the features of the face region of the original image need be acquired. The present application aims to provide a face verification method based on local receptive fields that improves the parallelism of the network and reduces the complexity of the training steps.
Disclosure of Invention
The invention aims to provide a face verification method based on local receptive fields which overcomes the defects of the background art: it obtains the feature vector of the face region from the receptive fields of the network output, so detection and verification of the face can be achieved with a single convolutional neural network. This improves the parallelism of the detection and verification operations while effectively reducing the noise caused by the picture background, and solves the technical problems of low parallelism and complex training of existing two-stage face verification methods.
The invention adopts the following technical scheme for realizing the aim of the invention:
a face verification method based on local receptive fields comprises the following steps:
step 1, dividing a public face verification data set or a self-collected data set into a training set, a validation set and a test set;
step 2, performing data enhancement on the samples in the data set using at least one of the following modes: translation, scaling, rotation and flipping;
step 3, establishing a face verification convolutional neural network based on local receptive fields, whose input is a color picture; during training the outputs are the identity category of the face in the picture and the predicted box coordinates of the face position, with softmax loss as the loss function; during testing the feature vector of the corresponding region is output according to the position of the predicted box in the picture;
and step 4, testing the convolutional neural network pre-trained in step 3 with the test set, and fine-tuning it according to the test results.
In step 1, the face verification training set adopts CASIA-WebFace, and the test set adopts the LFW data set.
In step 1, all pictures in the data set are scaled to the input size of the convolutional neural network and then normalized.
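This preprocessing step might be sketched as follows (nearest-neighbour resizing and [-1, 1] normalization are assumptions for illustration; the patent only specifies scaling to the network input size followed by normalization):

```python
import numpy as np

def preprocess(img, size=250):
    """Resize an H x W x 3 image to size x size (nearest neighbour),
    then map pixel values from [0, 255] to [-1, 1]."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = img[rows][:, cols]
    return resized.astype(np.float32) / 127.5 - 1.0

img = np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8)
x = preprocess(img)
assert x.shape == (250, 250, 3)
assert -1.0 <= x.min() and x.max() <= 1.0
```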
In step 3, the convolutional neural network for face verification based on local receptive fields consists of two parts: one detects the face region in the input picture, and the other extracts the feature vector of that region. During training, the network extracts face features for the different regions of an input picture, selects the feature vector corresponding to the face region according to the face detection result, and trains a classifier on the selected vector through a fully connected layer, outputting the identity category of the face and the predicted box coordinates of the face position. Softmax loss is adopted as the loss function and training uses the Adam optimizer; when the accuracy no longer rises, the convolutional neural network model is saved, yielding the trained network.
In step 4, when the network is tested and used, the fully connected layer is removed and the test picture is input into the network; the face feature vector output directly by the network is the identity feature of the face in the image. The Euclidean distance between the feature vectors of different test pictures is then computed: if the distance is smaller than a certain threshold, the two face pictures are considered pictures of the same person; otherwise, they are pictures of different people.
In the convolutional neural network, after the computation of the last convolutional layer, a feature map of dimension N × N × K is generated, where N × N means the picture is divided into N × N regions and the feature vector of each region has size 1 × K. To handle the case where the face happens to lie on the boundary between regions, so that no single region contains the whole face, a large convolution kernel is adopted and is slid with a convolution stride smaller than the kernel width, so that the local patches covered by two adjacent convolution operations overlap. During the training process, a fully connected layer is added after the selected feature vectors; each selected vector passes through the fully connected layer, and the network prediction error is computed with softmax loss as the loss function.
When extracting face features, the convolutional neural network also needs a face detection method to judge whether a face exists in the input picture, i.e. to predict the position coordinates of the face in the input picture; the corresponding feature vector is then selected according to the face region determined by those coordinates. During training, the loss function of face detection is added to the loss function of face classification, and the two are trained simultaneously.
By adopting the technical scheme, the invention has the following beneficial effects:
(1) The invention provides a method for face verification using a single deep neural network: it exploits the translation invariance of a deep convolutional neural network to extract feature vectors for the different regions of an image, treats the detected face region as the local receptive field of a feature vector, and uses the receptive fields to screen the feature vectors of the different image regions.
(2) The invention realizes receptive-field-based feature vector screening through a mask matrix, so face detection and verification are achieved with a single neural network; compared with a two-stage face verification algorithm, this simplifies the training process and improves computational parallelism.
Drawings
FIG. 1 is a block diagram of a neural network for verifying a human face in accordance with the present invention.
FIG. 2 is a schematic diagram of feature vector mapping according to the present invention.
Detailed Description
The technical scheme of the invention is explained in detail below with reference to the attached drawings.
The invention provides a face verification method based on local receptive fields, which comprises the following steps:
Establishing an external data set: the data set is built from a public fine-grained classification database of a research institution or from self-collected data; illustratively, the face verification training set can adopt CASIA-WebFace and the test set the LFW data set. Each picture should carry an identity label indicating the category it belongs to, as well as face box coordinates indicating the position of the face in the picture. Faces of as many different identities as possible should be collected, each identity containing as many samples as possible, while keeping the number of mislabeled samples in the data set low.
Data enhancement: face verification with a deep neural network easily overfits, because the number of training samples is usually far smaller than the number required; artificial data enhancement can reduce overfitting. Four data enhancement methods are commonly used to expand a data set: translation, scaling, rotation, and flipping.
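Two of the four modes (translation and flipping) can be sketched as follows; the maximum shift of 20 pixels and the zero padding are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def augment(img, max_shift=20, rng=None):
    """Randomly translate an image (zero padding) and flip it horizontally
    with probability 0.5."""
    rng = rng or np.random.default_rng(0)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    ys, yd = (dy, 0) if dy >= 0 else (0, -dy)   # source / destination row offsets
    xs, xd = (dx, 0) if dx >= 0 else (0, -dx)   # source / destination column offsets
    out[yd:h - ys, xd:w - xs] = img[ys:h - yd, xs:w - xd]
    if rng.random() < 0.5:
        out = out[:, ::-1]                      # horizontal flip
    return out

sample = np.random.randint(0, 256, (250, 250, 3), dtype=np.uint8)
assert augment(sample).shape == sample.shape
```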
Training the model: a convolutional neural network is established whose input is a color picture; during training its outputs are the identity category of the face in the picture and the predicted box coordinates of the face position, with softmax loss as the loss function, and during testing the feature vector of the corresponding region is output according to the position of the predicted box. The structure of the face verification network based on local receptive fields comprises two parts: one detects the face in the image, the other extracts the feature vector of the face region. During training, the network extracts face features for the different areas of the picture and then selects the feature vector corresponding to the face region according to the face detection result; a fully connected layer is added after the selected feature vectors, each selected vector passes through it, softmax loss is adopted as the loss function, and training uses the Adam optimizer. When the accuracy no longer rises, the model is saved, yielding the trained convolutional neural network. When the network is tested and used, the fully connected layer is removed and the feature vectors of the face regions are output directly; the Euclidean distances between the feature vectors of different pictures are computed, and if the distance is smaller than a certain threshold the two face pictures are considered pictures of the same person, otherwise pictures of different people.
After the last convolutional layer of the network, a feature map of dimension N × N × K is generated, where N × N means the picture is divided into N × N regions and the feature vector of each region has size 1 × K. To handle the case where the face lies exactly on the boundary between regions, so that no single region contains the whole face, a large convolution kernel and a convolution stride smaller than the kernel width are adopted, so that the sliding windows of the convolution overlap. During the training process, a fully connected layer is added after the selected feature vectors; each selected vector passes through it, and the network prediction error is computed with softmax loss as the loss function.
The technical solution of the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The face verification method based on the local receptive field comprises the following three steps.
Step one, establishing an external data set: the training set adopts CASIA-WebFace and the test set the LFW data set. CASIA-WebFace contains 10575 identities with several photos each; LFW contains a total of 13233 pictures of 5749 identities. All images are scaled to 250 × 250 × 3.
Step two, data enhancement: each picture in the data set is randomly translated. Deep neural network training easily overfits; random translation also places the face at arbitrary positions in the picture, increasing the robustness of the network.
Step three, establishing the convolutional neural network: the deep neural network used in the invention is shown in fig. 1. A 250 × 250 × 3 color picture is input and first passes through a series of convolutional layers to obtain a 13 × 13 × 896 feature map, which then splits into two branches. One branch continues to extract image features and generates a feature vector for each region of the picture; in fig. 1, 3 × 3 128-dimensional feature vectors are generated, i.e. the original image is divided into 9 fixed regions, each corresponding to one 128-dimensional vector (generating more feature vectors would mean more, and correspondingly smaller, regions of the original image). The other branch predicts the position of the face: as shown in fig. 1, the 13 × 13 × 896 feature map is convolved to produce predicted boxes of face coordinates and a classification of whether each box contains a face or background. The box with the highest face confidence is taken as the face box; the pixel at its center point is set to 1 and all other pixels to 0, giving a 15 × 15 mask matrix, which a max pooling operation of size 5 and stride 5 reduces to 3 × 3. The position of the 1 in this mask marks the feature vector corresponding to the face region within the 3 × 3 × 128 feature map, so the face feature vector can be selected through the mask matrix.
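The mask construction and feature selection just described can be sketched with the fig. 1 sizes (15 × 15 mask, 5 × 5 max pooling with stride 5, 3 × 3 × 128 feature map); the face-box centre coordinate is a hypothetical input:

```python
import numpy as np

def select_face_vector(feats, center, mask_size=15, pool=5):
    """Pick the feature vector whose region contains the detected face.

    feats:  N x N x K feature map (3 x 3 x 128 in fig. 1).
    center: (row, col) of the highest-confidence face-box centre on the
            mask_size x mask_size grid.
    """
    mask = np.zeros((mask_size, mask_size), dtype=np.int32)
    mask[center] = 1                       # 1 at the face centre, 0 elsewhere
    n = mask_size // pool
    # max pooling with size == stride: each pool x pool cell collapses to its max
    pooled = mask.reshape(n, pool, n, pool).max(axis=(1, 3))
    r, c = np.argwhere(pooled == 1)[0]     # surviving 1 indexes the face region
    return feats[r, c]

feats = np.arange(3 * 3 * 128).reshape(3, 3, 128)
v = select_face_vector(feats, center=(7, 12))   # centre lies in cell (1, 2)
assert np.array_equal(v, feats[1, 2])
```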
During training, a fully connected layer is added after the selected feature vector for classification and for computing the softmax loss; note that the softmax loss is computed only on the feature vector corresponding to the face region of each picture, and the loss function of the network is the sum of the softmax loss and the error of the face detection branch. After training, the fully connected layer is removed and the network directly outputs the feature vectors of the test pictures; the Euclidean distance between the feature vectors of different input samples is computed, and if it is larger than a certain threshold the pictures are not of the same person, otherwise they are.
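A sketch of the combined loss for a single sample (the mean-squared box-regression error is an assumption for illustration; the patent only states that the detection-branch error is added to the softmax loss):

```python
import numpy as np

def softmax_loss(logits, label):
    """Numerically stable softmax cross-entropy for one sample."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def total_loss(cls_logits, label, box_pred, box_true):
    """Identity-classification loss plus detection-branch error."""
    det_err = np.mean((np.asarray(box_pred) - np.asarray(box_true)) ** 2)
    return softmax_loss(np.asarray(cls_logits, dtype=float), label) + det_err

# A confident correct class and a perfect box give a near-zero total loss.
assert total_loss([10.0, -10.0], 0, [0.5, 0.5, 0.2, 0.2], [0.5, 0.5, 0.2, 0.2]) < 1e-6
```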
The invention also addresses the case where the face lies on a region boundary. The mapping between the 9 feature vectors and their corresponding regions in the picture is shown in fig. 2: if the face sits in the middle of a region, its features can be extracted well, but if it sits at the junction of two regions, feature extraction suffers. In formula (1), feature is the size of the finally generated feature map (3 in fig. 1), input is the size of the input feature map (11 in fig. 1), and the chosen kernel size kernel is 7, so the stride can be calculated to be 2.
feature = (input − kernel) / stride + 1        (1)
For example, the input feature map of the last layer is 11 × 11; with a 7 × 7 convolution kernel and stride 2, the output is 3 × 3. Because the kernel slides only a small step at each convolution, adjacent windows overlap, so the face features at a region boundary can still be extracted well.
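The stride arithmetic above follows directly from the standard valid-convolution output-size relation of formula (1):

```python
def conv_output_size(input_size, kernel, stride):
    """feature = (input - kernel) / stride + 1, for a convolution without padding."""
    assert (input_size - kernel) % stride == 0, "sizes must divide evenly"
    return (input_size - kernel) // stride + 1

# The example from the description: 11 x 11 input, 7 x 7 kernel, stride 2 -> 3 x 3.
assert conv_output_size(11, 7, 2) == 3
# stride (2) < kernel width (7): adjacent windows overlap by kernel - stride = 5
# columns, so a face straddling a region boundary still lies inside one window.
assert 7 - 2 == 5
```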
In summary, the invention provides a face verification method based on local receptive fields which uses the translation invariance of a deep convolutional neural network to extract features from the different regions of an image, effectively reducing the influence of background information on face feature extraction, while a face detection method predicts the region where the face lies and the corresponding feature vector is selected for face verification. Compared with the two-stage face verification method, no separate face detection network is needed to crop out the face before verification; the parallelism of face verification is effectively improved, two networks need not be trained, and the training process is greatly simplified.
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modifications made on the basis of the technical idea proposed by the present invention fall within the protection scope of the present invention.

Claims (8)

1. A face verification method based on local receptive fields, characterized in that a neural network performing face verification on an input picture is trained, wherein the neural network comprises: convolutional layers, a face position detection branch and a feature vector extraction branch; the convolutional layers read a feature map and face prediction box information from an input picture; the face position detection branch performs face detection on the region selected by a face prediction box to generate a mask matrix representing the face position; and the feature vector extraction branch extracts feature vectors of the regions of the input picture and then selects the feature vector of the face region of the input picture according to the mask matrix; the neural network detects the face region of the input picture by reading the face prediction box information of the input picture; when the region selected by a face prediction box is classified as a face, the region selected by the box with the highest confidence is taken as the face region of the input picture; a feature vector is extracted for each region of the input picture, and the feature vector of the face region is screened out of all extracted feature vectors using the mask matrix representing the face position; the trained neural network extracts the feature vectors of the face regions of different test pictures, and a verification result that two test pictures are pictures of the same face is output when the Euclidean distance between the feature vectors of the face regions of the two test pictures is smaller than a threshold.
2. The method of claim 1, wherein in the course of training the neural network for face verification, the feature vectors of the face regions of the input pictures are input into a full connection layer to obtain classification results.
3. The method according to claim 1, wherein during training of the neural network for face verification of the input picture, softmax loss is used as the loss function, and the sum of the classification error of the feature vector of the face region of the input picture and the detection error of the face region of the input picture is back-propagated to correct the network parameters.
4. The local receptive field-based face verification method as claimed in claim 1, characterized in that the input picture face region is detected by sliding a large convolution kernel with a convolution step smaller than the convolution kernel width.
5. The method for verifying human face based on local receptive field as claimed in claim 1, wherein the input picture and the test picture are subjected to size scaling, normalization and data enhancement, and the data enhancement includes but is not limited to translation, scaling, rotation and flipping.
6. The method of claim 1, wherein the mask matrix has the element 1 at the position corresponding to the face and 0 elsewhere.
7. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the face verification method of claim 1.
8. Terminal equipment, characterized by, includes: a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the face verification method of claim 1 when executing the program.
CN202010310755.7A 2020-04-20 2020-04-20 Face verification method based on local receptive field Active CN111582057B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010310755.7A CN111582057B (en) 2020-04-20 2020-04-20 Face verification method based on local receptive field


Publications (2)

Publication Number Publication Date
CN111582057A CN111582057A (en) 2020-08-25
CN111582057B (en) 2022-02-15

Family

ID=72116807


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132092A (en) * 2020-09-30 2020-12-25 四川弘和通讯有限公司 Fire extinguisher and fire blanket identification method based on convolutional neural network
CN113240430A (en) * 2021-06-16 2021-08-10 中国银行股份有限公司 Mobile payment verification method and device

Citations (5)

Publication number Priority date Publication date Assignee Title
CN106503656A (en) * 2016-10-24 2017-03-15 厦门美图之家科技有限公司 A kind of image classification method, device and computing device
CN106778527A (zh) * 2016-11-28 2017-05-31 中通服公众信息产业股份有限公司 An improved neural network pedestrian re-identification method based on triplet loss
CN106845421A (en) * 2017-01-22 2017-06-13 北京飞搜科技有限公司 Face characteristic recognition methods and system based on multi-region feature and metric learning
CN107341447A (en) * 2017-06-13 2017-11-10 华南理工大学 A kind of face verification mechanism based on depth convolutional neural networks and evidence k nearest neighbor
CN109242032A (en) * 2018-09-21 2019-01-18 桂林电子科技大学 A kind of object detection method based on deep learning

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10818007B2 (en) * 2017-05-31 2020-10-27 The Procter & Gamble Company Systems and methods for determining apparent skin age


Non-Patent Citations (2)

Title
Local receptive field constrained deep networks; Diana Turcsany et al.; Information Sciences; 2016-07-01; pp. 229-247 *
Feature importance analysis of convolutional neural networks and an enhanced feature selection model; Lu Hongyu et al.; Journal of Software (软件学报); 2017; vol. 28, no. 11; pp. 2879-2889 *

Also Published As

Publication number Publication date
CN111582057A (en) 2020-08-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant