CN112084913A - End-to-end human body detection and attribute identification method


Info

Publication number
CN112084913A
CN112084913A (application CN202010889969.4A)
Authority
CN
China
Prior art keywords
human body
attribute identification
attributes
constraint
attribute
Prior art date
Legal status
Granted
Application number
CN202010889969.4A
Other languages
Chinese (zh)
Other versions
CN112084913B (en)
Inventor
陈爱国 (Chen Aiguo)
赵太银 (Zhao Taiyin)
朱大勇 (Zhu Dayong)
罗光春 (Luo Guangchun)
谷俊霖 (Gu Junlin)
杨栋栋 (Yang Dongdong)
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Publication of CN112084913A
Application granted
Publication of CN112084913B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition (Human or animal bodies; Recognition of biometric, human-related or animal-related patterns in image or video data)
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting (Design or setup of recognition systems or techniques; Pattern recognition)
    • G06N 3/045: Combinations of networks (Neural network architectures; Computing arrangements based on biological models)
    • G06N 3/08: Learning methods (Neural networks; Computing arrangements based on biological models)
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items (Scenes; Scene-specific elements in video content)

Abstract

The invention provides an end-to-end, deep-learning-based method for human body detection and attribute identification, with the aim of improving the operating efficiency and generalization performance of the network. The network consists of two modules: a target detection module, which identifies and locates human objects, and a human attribute identification module, a multi-output network that predicts multiple human attributes. The model can accurately detect multiple people in a real scene and recognize their attributes; in addition, combining the characteristics of the model, a method is provided that uses attribute correlation as prior knowledge to guide network training.

Description

End-to-end human body detection and attribute identification method
Technical Field
The invention relates to the field of target detection and human body attribute identification, in particular to human body attribute identification in a real scene.
Background
Human attribute recognition predicts the attributes of a person in a real scene, such as gender, age, hairstyle and clothing. These attributes have many applications in pedestrian identification and retrieval: for example, identifying a pedestrian when the video quality is poor, or, in a criminal investigation, retrieving similar suspects from surveillance video by the external attributes of the suspect.
Existing human attribute identification methods mainly treat target detection and attribute identification as two independent tasks, build a separate deep convolutional neural network for each task, and then connect the two networks in series.
Disclosure of Invention
The invention aims to solve the problems of existing human attribute identification methods, such as low model training efficiency and ineffective use of prior knowledge, by providing an end-to-end human body detection and attribute identification method based on multi-task learning. A single neural network performs both human body detection and attribute identification, can quickly carry out both tasks in a real scene, and offers better operating efficiency and generalization performance.
The invention discloses an end-to-end human body detection and attribute identification method, which comprises the following steps:
constructing and training a multi-task network model for human body detection and attribute identification:
the network structure of the multitask network model comprises:
a feature extractor, composed of a convolutional neural network, which extracts a feature map from the input image;
a human body detection module, comprising a classifier and a regressor, which takes the feature map extracted by the feature extractor as input; the classifier judges whether a detected object is a human body, and the regressor predicts the position of the human body;
an attribute identification module, composed of multiple attribute identification branches whose number equals the number of attributes to be identified; the human body position predicted by the regressor of the human body detection module is mapped proportionally onto the feature map, and the feature block corresponding to the mapped position is extracted from the feature map and fed to each attribute identification branch;
in other words, the image features extracted by the feature extractor serve as the input of both the human body detection module and the attribute identification module;
preparing a training dataset for the multi-task network model, preprocessing it, training the network model, and saving the multi-task network model that meets the training requirement;
during training, the adopted loss function comprises:
for the feature extractor, a batch-normalization regularization term, i.e. the regularization term of the convolutional neural network;
for the human body detection module, a classification loss and a regression loss;
for the attribute identification module, a multi-task loss and constraint functions for the different types of attribute relations;
and inputting the image to be processed into the saved multi-task network model, and obtaining the human attribute identification result from the network output of the attribute identification module.
Further, the dataset preprocessing includes:
filtering out samples in the human body detection dataset that do not contain human objects;
and presetting default values for attributes missing from the attribute identification dataset.
The training and inference modes of the multi-task network model are as follows:
during training, different datasets train the branches of their respective tasks, and both branches back-propagate into the shared backbone convolutional network; by training a feature extractor shared by human body detection and attribute identification, the extracted features serve both the detection task and the attribute identification task;
during inference, an information channel is added to connect human body detection and attribute identification, linking the output of detection to the input of attribute identification; the invention reduces redundant convolution computation by cropping feature blocks directly from the feature map.
Further, the invention determines the correlations between attributes based on the confidence between attributes, and establishes constraint domains and constraint functions according to these correlations:
different attribute relations are defined by a pair of thresholds α and β, where α is the lower bound of positive correlation and β is the upper bound of negative correlation; adjusting α and β adjusts the intervals that qualify as correlated; the correlations between attributes are classified as positive correlation, one-way positive correlation and negative correlation;
a different constraint function is set for each type of correlation; each constraint function assigns a higher cost to results outside the constraint domain and a lower cost to results inside it, and contains a parameter λ that adjusts its strength: the larger λ is, the stronger the constraint.
In summary, by adopting the above technical scheme, the invention has the beneficial effects of high recognition efficiency and good generalization performance.
Drawings
FIG. 1 is an overall framework diagram of the body attribute identification system of the present invention;
FIG. 2 is a diagram of a multitasking network architecture in accordance with the present invention;
FIG. 3 is a schematic flow chart of human body attribute identification during inference.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
The invention comprises a multi-task network for human body detection and attribute identification, its training method and inference mode, and an attribute correlation analysis model. The technical problem to be solved is how to organically combine the two tasks of target detection and human attribute identification, design an efficient network structure, share parameters within the network, reduce repeated computation and improve the time efficiency of the model. Specifically, the method consists of three parts: neural network construction, attribute correlation analysis, and training and inference.
(1) Neural network construction.
The invention discloses an end-to-end human attribute identification method based on the idea of multi-task learning. Its implementation relies on building a deep neural network. The network comprises a feature extractor composed of a convolutional neural network, a human body detection module M1, and an attribute identification module M2. The detection module M1 consists of two parts, a classifier and a regressor: the classifier outputs a class label, i.e. judges whether a detected object is a person, and the regressor outputs position information, i.e. regresses the precise position of the human object. The attribute identification module M2 is also designed as a multi-task learning network; after obtaining features from the backbone convolutional layers, M2 connects to multiple human attribute outputs.
The convolutional neural network extracts image features that are shared by M1 and M2. Logically, M2 must wait until M1 has produced a bounding box; the box is then mapped onto the feature map, and the feature block of the object is cropped out to serve as that object's feature.
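The following is a minimal PyTorch-style sketch of this shared-backbone design, given only to illustrate how the feature extractor, the detection module M1 and the attribute branches of M2 could be wired together. The backbone choice (ResNet-18), channel counts, head depths and the single-box detection head are illustrative assumptions, not the configuration specified by the patent.

```python
# Minimal sketch of the shared-backbone multi-task network (illustrative only).
import torch
import torch.nn as nn
import torchvision

class HumanDetectionAttributeNet(nn.Module):
    def __init__(self, num_attributes, roi_size=7):
        super().__init__()
        # Feature extractor shared by detection (M1) and attribute recognition (M2).
        backbone = torchvision.models.resnet18(weights=None)
        self.feature_extractor = nn.Sequential(*list(backbone.children())[:-2])
        feat_dim = 512
        # Detection head: binary "person / not person" classifier plus box regressor.
        self.classifier = nn.Linear(feat_dim, 2)
        self.regressor = nn.Linear(feat_dim, 4)
        # One output branch per attribute, all fed by the same cropped feature block.
        self.attribute_branches = nn.ModuleList(
            [nn.Linear(feat_dim * roi_size * roi_size, 2) for _ in range(num_attributes)]
        )
        self.roi_size = roi_size

    def forward(self, images, rois=None):
        feature_map = self.feature_extractor(images)           # shared features
        pooled = feature_map.mean(dim=(2, 3))                   # global pooling for detection
        cls_logits = self.classifier(pooled)
        box_pred = self.regressor(pooled)
        attr_logits = None
        if rois is not None:
            # rois: feature blocks already cropped from feature_map (see the inference flow).
            flat = rois.flatten(start_dim=1)
            attr_logits = [branch(flat) for branch in self.attribute_branches]
        return cls_logits, box_pred, attr_logits
```

In this sketch the attribute branches consume feature blocks that have already been cropped from the shared feature map, mirroring the dependency of M2 on the boxes produced by M1.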
(2) Attribute correlation analysis.
Human attribute correlation is analyzed as follows: a confidence matrix between the attributes in the attribute dataset is obtained from the definition of confidence, a set of rules for defining attribute relationships is established, and the correlations between human attributes are classified according to these rules. Different relation constraint functions are then set according to the characteristics of each type of relation. The specific steps are as follows.
1) Count the frequency of occurrence of the different attributes in the attribute identification dataset, and calculate the confidence between any two attributes according to formula (1):
Confidence(X→Y)=P(Y|X) (1)
where X and Y denote different human attributes; the confidence from X to Y is the conditional probability that attribute Y is present given that attribute X is present.
2) Form the confidence matrix T from the values of Confidence(X→Y).
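As a concrete illustration of steps 1)-2), the sketch below estimates Confidence(X→Y) = P(Y|X) by counting co-occurrences in a binary label matrix and assembles the confidence matrix T. The data layout (one row per sample, one column per attribute) and the variable names are assumptions made for illustration.

```python
# Sketch: building the attribute confidence matrix T from binary annotations.
import numpy as np

def confidence_matrix(labels, eps=1e-8):
    labels = labels.astype(np.float64)
    count_x = labels.sum(axis=0)              # how often each attribute occurs
    co_occurrence = labels.T @ labels         # joint counts for every pair (X, Y)
    # T[x, y] = P(Y | X) = count(X and Y) / count(X)
    return co_occurrence / (count_x[:, None] + eps)

# Example: 5 samples, 3 attributes (e.g. "long hair", "skirt", "male").
labels = np.array([[1, 1, 0],
                   [1, 1, 0],
                   [1, 0, 0],
                   [0, 0, 1],
                   [0, 1, 0]])
T = confidence_matrix(labels)   # T[0, 1] = P(skirt | long hair) = 2/3
```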
3) Set a group of rules for defining the relationships according to T, as shown in Table 1, where α is the preset lower bound of positive correlation and β is the preset upper bound of negative correlation.
Table 1: attribute dependency relationships and rule definitions
[Table 1 is rendered as an image in the original publication.]
where Cfds() is shorthand for the confidence between two attributes, i.e. Confidence(X→Y).
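Because Table 1 is only available as an image in the source, the sketch below should be read as one plausible interpretation of its rules, consistent only with the surrounding text (α as the lower bound of positive correlation, β as the upper bound of negative correlation); the exact thresholding conditions are assumptions.

```python
# Hedged sketch of step 3): classifying a pair (X, Y) into a relation type from
# the confidence matrix T. The threshold logic is an assumed reading of Table 1.
def classify_relation(T, x, y, alpha, beta):
    cfds_xy, cfds_yx = T[x, y], T[y, x]
    if cfds_xy >= alpha and cfds_yx >= alpha:
        return "positive"              # X and Y tend to co-occur in both directions
    if cfds_xy >= alpha or cfds_yx >= alpha:
        return "one-way positive"      # only one direction is strongly predictive
    if cfds_xy <= beta and cfds_yx <= beta:
        return "negative"              # X and Y rarely co-occur
    return "none"                      # no significant correlation
```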
4) Determine the constraint domain D according to the characteristics of each relation.
5) Determine a different constraint function F for each constraint domain, as shown in Table 2, where λ adjusts the strength of the constraint function and is set empirically.
TABLE 2 constraint Domain and constraint function of Attribute relationships
[Table 2 is rendered as an image in the original publication.]
where e is the base of the natural logarithm, and x and y denote the network outputs of attributes X and Y, respectively.
In addition, to further simplify the computation, the constraint function F may also take the following forms:
for positive correlation, the constraint function is: (x - y)²;
for one-way positive correlation, the constraint function is: x² + y² - xy - y;
for negative correlation, the constraint function is: x² + y² + xy - x - y.
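A small sketch of these simplified constraint functions, together with the relation constraint term L_res summed over the significant attribute pairs, is given below. Here x and y are assumed to be the (sigmoid) network outputs of the two attributes, and the placement of λ as a simple scale factor is an assumption.

```python
# Sketch of the simplified constraint functions and of the relation constraint
# term over the N_res significant attribute pairs (illustrative assumptions).
import torch

def relation_penalty(x, y, relation, lambda_=1.0):
    if relation == "positive":
        penalty = (x - y) ** 2                 # penalise disagreement
    elif relation == "one-way positive":
        penalty = x**2 + y**2 - x * y - y      # costly at the (x=1, y=0) corner
    elif relation == "negative":
        penalty = x**2 + y**2 + x * y - x - y  # costly at the (1, 1) corner
    else:
        penalty = torch.zeros_like(x)
    return lambda_ * penalty

def relation_constraint_term(attr_outputs, pairs, lambda_=1.0):
    # pairs: list of (index_x, index_y, relation) for significantly correlated
    # attribute pairs; attr_outputs: per-attribute sigmoid scores of one object.
    return sum(relation_penalty(attr_outputs[i], attr_outputs[j], rel, lambda_)
               for i, j, rel in pairs)
```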
After obtaining attribute pairs with significant relevance, the constraint term for attribute relevance can be expressed as:
L_res = Σ_{i=1}^{N_res} L(x_i, y_i)
where N_res is the number of attribute pairs with significant correlation, x_i and y_i are the network outputs of the two attributes in the i-th pair, and L(x_i, y_i) is the corresponding constraint function.
6) Use the constraint functions as a constraint term alongside the loss functions of the attribute identification module during training.
It should be noted that the constraint domain represents the results that conform to the prior knowledge of a relation, and results outside the constraint domain do not conform to it. Here 0 means an attribute does not occur and 1 means it occurs; for example, (1,1) means that X occurs and Y also occurs. A corresponding constraint function is designed for each constraint domain and acts on each correlated attribute pair (x, y): results outside the constraint domain incur a higher cost, while results inside the constraint domain are left untouched. The constraint functions participate in training as a constraint term of the overall loss function. The purpose is to use attribute correlation as prior knowledge to constrain the training of the model; this is similar to a regularization constraint and guides the training, so that while the loss function is optimized the parameters tend to follow gradient-descent directions that satisfy the constraints, and the finally trained model tends to conform to the prior knowledge.
(3) Training and inference.
Since no currently known target detection dataset contains attribute annotations for its targets, two datasets are used as training data: an existing target detection dataset and an attribute identification dataset. The training process and inference process of the network therefore differ somewhat.
During training, the detection module is trained with the detection dataset and the attribute identification module with the attribute identification dataset; there is no synchronization between the human body detection and attribute identification networks, i.e. the training of the attribute identification module does not depend on the output of the detection module. Note that when the attribute module is trained, the block of the feature map corresponding to the bounding box annotated in the attribute dataset is used as its training data, rather than the crop of that bounding box from the original image.
During inference, target detection and attribute identification are combined, i.e. the attribute identification module depends on the output of the detection module. The specific steps are:
1) first, the whole image is fed into the backbone convolutional layers to obtain a feature map;
2) the target detection module produces a detection box for each human body, and the block of the feature map corresponding to each box is fed into the attribute identification module as that person's convolutional features;
3) the mapped blocks from the previous step serve as the features of the corresponding human objects and are fed separately into the identifiers of the various attributes for recognition.
As the above steps show, the features of all persons in an image are obtained from the same feature map, so a single convolution pass over the whole image yields the features of every person at once. This greatly reduces the amount of computation of the network model and improves its runtime performance.
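The sketch below illustrates this inference flow: one backbone pass per image, followed by per-person feature cropping and attribute prediction. The callables detector and crop_fn stand for the detection head and the box-to-feature-map mapping of steps 403-404; all names and signatures here are illustrative assumptions rather than the patented implementation.

```python
# Sketch of the inference flow: one convolution pass, then per-person attributes.
import torch

def infer(feature_extractor, detector, attribute_branches, crop_fn,
          image, score_threshold=0.5):
    with torch.no_grad():
        feature_map = feature_extractor(image.unsqueeze(0))[0]   # single conv pass
        boxes, scores = detector(feature_map)                     # person boxes + scores
        results = []
        for box, score in zip(boxes, scores):
            if score < score_threshold:
                continue
            block = crop_fn(feature_map, box)                     # reuse shared features
            flat = block.flatten().unsqueeze(0)
            attrs = [branch(flat).argmax(dim=1).item()            # one prediction per attribute
                     for branch in attribute_branches]
            results.append((box, attrs))
        return results
```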
Examples
Referring to fig. 1, the specific implementation process of the present embodiment includes:
step 101: a test data set containing a human body is acquired. Deleting samples without human from a plurality of detection data sets containing human objects, and only preservingKeeping a sample containing human body, unifying the labeling formats of a plurality of data sets, and randomly arranging to obtain a data set DB1
Step 102: acquiring a human body attribute identification data set, performing attribute alignment on a plurality of human body attribute identification data sets, namely, taking a union set S of attribute sets of all the data sets, taking the attributes in the union set S as the attribute set of the integrated data set, and setting default values such as-1 for the missing attributes in the data set. Unifying the labeling formats of the multiple data sets, and randomly arranging to obtain a data set DB2
Step 103: the two data sets obtained in step 101-102 are deleted some samples with larger or smaller sizes to keep the size of the picture in a uniform range.
Steps 101-103 constitute the preprocessing of the datasets; the processed datasets are used for network training, as illustrated by the sketch below.
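For illustration, a hedged sketch of the attribute-alignment part of this preprocessing (union attribute set S, default value -1 for missing attributes) follows; the record layout and field names are assumptions.

```python
# Sketch of steps 102: merging attribute datasets over the union attribute set S.
def merge_attribute_datasets(datasets):
    # Union of all attribute names defines the attribute set S of the merged data.
    attribute_union = sorted(set().union(*(d["attributes"] for d in datasets)))
    merged = []
    for d in datasets:
        for sample in d["samples"]:
            aligned = {attr: sample["labels"].get(attr, -1)   # -1 = missing attribute
                       for attr in attribute_union}
            merged.append({"image": sample["image"], "labels": aligned})
    return attribute_union, merged
```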
Step 201: and (3) constructing a convolution neural backbone network, and obtaining a feature map (feature map) of the image through the network. The characteristic diagram is used for inputting of a subsequent human body detection module.
Step 202: and establishing a classifier for identifying whether the detected object is a human or not, wherein the classifier is a two-classification network.
Step 203: and establishing a regressor for predicting the coordinate position of the human body object.
Step 204: and adopting multitask cross entropy loss, wherein the loss comprises classification loss of a classifier and regression loss of a regressor.
Steps 201-204 constitute the construction process of the human detection module of the present invention. The structure is shown in figure 2.
Step 301: and counting and acquiring a confidence matrix among human body attributes in the attribute data set.
Step 302: and defining rules of attribute relevance according to the data in the confidence coefficient matrix.
Step 303: and obtaining the relation between the attributes according to the rules.
Step 304: and determining a constraint function corresponding to the attribute association relation according to the characteristics of the relation between the attributes.
Step 301 and step 304 constitute an attribute association analysis model. The method establishes the correlation that exists between attributes.
Step 401: and step 201, a convolutional neural network is shared to obtain a characteristic map of the image. The feature map will be used as input to the attribute identification module.
Step 402: and if the training is carried out, obtaining the position coordinates of the human body according to the attribute data marking information. If it is inference, human body position information is obtained according to the result of step 203.
Step 403: the position P of the human body on the original image acquired in the last step is calculated according to the ratio of the original image to the feature map1Zooming onto position P on the feature map2
Step 404: position P obtained from the previous step2And obtaining a feature block from the screenshot on the feature diagram. And directly inputting the characteristic block into a human body attribute identification module as the characteristic of the corresponding human body object. The flow chart can be seen in fig. 3.
Step 405: a human attribute recognition subnetwork is established, the number of which is determined by the number of elements in the set S in step 102.
Step 406: and obtaining the human body attribute value. If the inference is positive, the steps 401 and 406 complete the whole process of human body attribute recognition. If training, step 407 is also included.
Step 407: according to the conclusion of the correlation analysis of the human body attributes in the step 303-304, the constraint function is used as a constraint item of the multitask loss function to participate in the training.
Steps 401 to 407 constitute the attribute identification module of the present invention. The module takes the output of the human body detection module as input to obtain the prediction result of the human body attribute.
During attribute identification, the objects annotated in an image of the dataset and their labels are denoted as
{(x_i, y_i)}, i = 1, 2, …, n,
where x_i denotes the i-th annotated human object in the image,
y_i = [a_i1, a_i2, …, a_iM]
is the corresponding attribute label vector, n is the number of annotated human objects,
a_im ∈ {0, 1},
and m = 1, …, M, where M is the number of attributes.
The attribute loss function adopts a cross-entropy loss, so the loss function for one human object can be expressed as:
L_attr(x_i, y_i) = - Σ_{m=1}^{M} [ a_im · log p_im + (1 - a_im) · log(1 - p_im) ]
where y_i represents the true attribute label, y_i = [a_i1, a_i2, …, a_iM], and p_im denotes the network output (predicted probability) of the m-th attribute branch for object x_i.
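The per-object attribute loss can be sketched as an independent binary cross-entropy over the M attribute outputs, matching the reconstructed formula above; treating labels of -1 (attributes missing after dataset merging) as ignored is an assumption of this sketch.

```python
# Sketch of the per-object attribute loss (binary cross-entropy over M attributes).
import torch
import torch.nn.functional as F

def attribute_loss(attr_logits, attr_labels):
    # attr_logits, attr_labels: tensors of shape (M,); labels of -1 are ignored.
    valid = attr_labels >= 0
    if valid.sum() == 0:
        return attr_logits.sum() * 0.0
    return F.binary_cross_entropy_with_logits(
        attr_logits[valid], attr_labels[valid].float(), reduction="sum")
```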
Let L_det denote the target detection loss of the human body detection module (the sum of the classification loss and the regression loss) and let μ be a balance coefficient; the loss function of the multi-task network model for human body detection and attribute identification can then be expressed as:
Loss_joint = L_det + μ · ( Σ_{i=1}^{n} L_attr(x_i, y_i) + L_res )
during training, by minimizing LossjointTo obtain the best prediction value.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.

Claims (3)

1. An end-to-end human body detection and attribute identification method is characterized by comprising the following steps:
constructing and training a multi-task network model for human body detection and attribute identification:
the network structure of the multitask network model comprises:
a feature extractor, composed of a convolutional neural network, which extracts a feature map from the input image;
a human body detection module, comprising a classifier and a regressor, which takes the feature map extracted by the feature extractor as input, wherein the classifier judges whether a detected object is a human body and the regressor predicts the position of the human body;
an attribute identification module, composed of multiple attribute identification branches whose number equals the number of attributes to be identified, wherein the human body position predicted by the regressor of the human body detection module is mapped proportionally onto the feature map, and the feature block corresponding to the mapped position is extracted from the feature map and fed to each attribute identification branch;
preparing a training dataset for the multi-task network model, preprocessing it, training the network model, and saving the multi-task network model that meets the training requirement;
wherein, during training, the adopted loss function comprises:
for the feature extractor, a batch-normalization regularization term, i.e. the regularization term of the convolutional neural network;
for the human body detection module, a classification loss and a regression loss;
for the attribute identification module, a multi-task loss and constraint functions for the different types of attribute relations;
and inputting the image to be processed into the saved multi-task network model, and obtaining the human attribute identification result from the network output of the attribute identification module.
2. The method of claim 1, wherein the dataset preprocessing comprises:
filtering out samples in the human body detection dataset that do not contain human objects;
and presetting default values for attributes missing from the attribute identification dataset.
3. The method of claim 1, wherein the correlations between attributes are determined based on the confidence between attributes, and the constraint domains and constraint functions are established according to these correlations:
calculating the confidence between any two attributes, and classifying the correlations between attributes into three types of relation, positive correlation, one-way positive correlation and negative correlation, based on a pair of thresholds α and β, wherein α is the lower bound of positive correlation, β is the upper bound of negative correlation, and adjusting α and β adjusts the intervals that qualify as correlated;
setting a different constraint function for each of the three types of relation, each constraint function assigning a higher cost to results outside the constraint domain and a lower cost to results inside it, the constraint function containing a parameter λ that adjusts its strength, where the larger λ is, the stronger the constraint.
CN202010889969.4A 2020-08-15 2020-08-28 End-to-end human body detection and attribute identification method Active CN112084913B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2020108217657 2020-08-15
CN202010821765 2020-08-15

Publications (2)

Publication Number Publication Date
CN112084913A (en) 2020-12-15
CN112084913B (en) 2022-07-29

Family

ID=73729330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010889969.4A Active CN112084913B (en) 2020-08-15 2020-08-28 End-to-end human body detection and attribute identification method

Country Status (1)

Country Link
CN (1) CN112084913B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190205643A1 (en) * 2017-12-29 2019-07-04 RetailNext, Inc. Simultaneous Object Localization And Attribute Classification Using Multitask Deep Neural Networks
CN108510000A (en) * 2018-03-30 2018-09-07 北京工商大学 The detection and recognition methods of pedestrian's fine granularity attribute under complex scene
CN111191526A (en) * 2019-12-16 2020-05-22 汇纳科技股份有限公司 Pedestrian attribute recognition network training method, system, medium and terminal
CN111178251A (en) * 2019-12-27 2020-05-19 汇纳科技股份有限公司 Pedestrian attribute identification method and system, storage medium and terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李雪 (Li Xue): "Research on the influence of association rules on pedestrian attribute recognition under surveillance", Computer and Modernization *
石方炎 (Shi Fangyan): "Research on an integrated algorithm for human body detection and appearance attribute recognition", China Master's Theses Full-text Database (Information Science and Technology) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112926427A (en) * 2021-02-18 2021-06-08 浙江智慧视频安防创新中心有限公司 Target user dressing attribute identification method and device
CN115131825A (en) * 2022-07-14 2022-09-30 北京百度网讯科技有限公司 Human body attribute identification method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN112084913B (en) 2022-07-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant