CN110287835A - Intelligent method for establishing an Asian face database - Google Patents

Intelligent method for establishing an Asian face database Download PDF

Info

Publication number
CN110287835A
CN110287835A (application CN201910514779.1A)
Authority
CN
China
Prior art keywords
face
data
asian
intelligent
clustering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910514779.1A
Other languages
Chinese (zh)
Inventor
刘鹏
张真
汪良楠
曹骝
秦恩泉
武郑浩
夏如超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Innovative Data Technologies Inc
Original Assignee
Nanjing Innovative Data Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Innovative Data Technologies Inc filed Critical Nanjing Innovative Data Technologies Inc
Priority to CN201910514779.1A priority Critical patent/CN110287835A/en
Publication of CN110287835A publication Critical patent/CN110287835A/en
Priority to PCT/CN2020/091145 priority patent/WO2020248782A1/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50Maintenance of biometric data or enrolment thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an intelligent method for establishing an Asian face database, comprising the following steps: selecting a data source; decoding the video; detecting faces; removing blurred pictures; and sorting and classifying the resulting clear data set. The method avoids excessive expenditure of money, materials, and manpower, and the face data set it builds is largely multi-pose and multi-background, which helps improve the generalization ability of trained models. At the same time, the sheer number of Asian movies makes a million-scale database feasible.

Description

Intelligent method for establishing an Asian face library
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to an intelligent Asian face library establishing method.
Background
In recent years, the security industry has seen a wave of enthusiasm for face recognition, and many manufacturers have launched corresponding products, making face recognition an industry hot spot. According to statistics, at least 40 enterprises exhibited their own face recognition products at the 2017 international public-safety fair. Exhibitors ranged from large security manufacturers such as Dahua and Hikvision to intelligent-hardware makers such as Hanwang and Yinchen. Meanwhile, the media have repeatedly reported the achievements of face recognition technology in academia and industry: for example, Tencent obtained a higher recognition rate on the LFW face recognition data set, breaking the record set by Google earlier that year; and Alibaba Group executive chairman Jack Ma demonstrated the combination of face recognition and payment at the CeBIT exhibition in Germany, showing that "face-swipe payment" is entering daily life. These exciting messages make clear that face recognition has moved from dream to reality.
However, the face recognition rate is mainly determined by the algorithm and the face data set. On the algorithm side, the field has converged on deep learning models, and several large companies with high recognition rates have built or published face recognition models. Unfortunately, the publicly available face databases are almost all Western data sets; owing to ethnic differences, models trained on them fit Western data better, and their recognition performance on Asian faces is low. Meanwhile, the face databases of large domestic companies are not open, so the expected performance is difficult to reach even with the same algorithm. Therefore, how to build one's own Asian face database has become a core problem.
Disclosure of Invention
The technical problem to be solved by the invention is to provide, in view of the defects of the prior art, a low-cost, low-manpower intelligent method for establishing an Asian face library.
In order to achieve the technical purpose, the technical scheme adopted by the invention is as follows:
an Asian face library intelligent establishment method comprises the following steps:
selecting a data source;
video decoding;
detecting a human face;
removing the blurred picture;
and sorting and classifying the clear data set.
In order to optimize the technical scheme, the specific measures adopted further comprise:
the data source is an asian movie.
The video decoding adopts frame extraction to reduce the time and space complexity of the video data; the face detection adopts an improved yolov3-tiny face detection algorithm.
The face detection model is designed on the basis of the yolov3-tiny design concept, with the following improvements:
because the number of target detection categories is small, the number of convolutional layers is set to eight;
in early training, the improved yolov3-tiny is first trained as a classifier, i.e., a softmax layer after the feature layers performs binary classification to obtain an initialization model;
finally, the improved yolov3-tiny is initialized with the trained classification model and large-scale face detection training is carried out.
For removing blurred pictures, a texture detection mechanism based on the Sobel operator, implemented with a fast convolution function, performs texture detection on each face.
Sorting and classifying the clear data set comprises: designing a face feature extraction model, inter-class clustering of face data, intra-class clustering of face data, inter-class merging of face data, secondary cleaning of face data, and manual naming;
the designed human face feature extraction model is improved based on a residual error network ResNet-18, and specifically comprises the following steps:
adding one block at conv4_ x and two blocks at conv5_ x;
reducing the number of filters of each layer by half;
the loss layer of the last layer is designed by triple loss;
and (3) carrying out model training by using the CASIA-Webface data set to realize extraction of the human face features.
The inter-class clustering of face data adopts K-Means clustering to separate the mixed face set of the video data, finally generating K face collection boxes, where K is 40.
The intra-class clustering of face data specifically comprises: using a ResNet_clustering algorithm to screen the main-body category of each of the K face collection boxes and to clean the error data of the previous round, where the number of intra-class clusters is determined adaptively by the algorithm.
The inter-class merging of face data specifically comprises: judging the similarity between different collection boxes via the mean feature of the samples in each box, and merging boxes reasonably according to a similarity threshold, so that faces of the same person in different collection boxes are combined.
The secondary cleaning of face data comprises the following steps:
(1) compute the mean feature within the face collection box;
(2) compute the distance between each face feature in the box and the mean feature and sort by that distance;
(3) take the face feature at the median index of the sorted list; if the number of features is even, average the two middle features;
(4) compute the distance between every face feature in the box and the median face feature, and remove face data whose distance exceeds a judgment threshold.
Manual naming assigns an ID to each collection box with the help of Baidu image recognition, so that subsequent face collection boxes can be merged effectively.
The invention has the following beneficial effects:
the method avoids excessive financial, material and labor costs, and the established face data set is mostly multi-pose and multi-background, so that the method is beneficial to improving the generalization capability of the model. Meanwhile, a million-level database is easily established based on the advantage of the number of the Asian movies.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic flow chart of an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
As shown in fig. 1 and 2, the intelligent Asian face library establishing method of the present invention includes the following steps:
s1: selecting a data source;
the data source selected by the embodiment of the invention is an Asian movie for the following reasons: (1) asia belongs to the high-yielding regions of movies. For example, China, Korea and Japan all belong to high-yield cinematography countries, so that the quantity is guaranteed; (2) the scene change in the film is frequent, the types of actors are more, and the posture change of the human face is rich, so that the quality of the film is guaranteed.
S2: video decoding;
the video decoding is mainly used for face detection, because the number of frames of pictures in the video is too large and the redundancy is too large, in order to reduce the time complexity and the space complexity, the embodiment of the invention adopts a frame extraction mode to perform video decoding, takes a movie as a unit, and performs decoding operation on movie video in a mode of extracting one frame per second, wherein about 7200 pictures can be obtained by one movie on average, and the time is about 8 minutes.
S3: detecting a human face;
the current face detection algorithm is mature, and the comparison of the SeetaFace face detection test result and the Dlib face detection test result shows that the Dlib face detection performance is in a descending trend when the resolution of an image is higher than 800 due to different detection mechanisms of the SeetaFace face detection test result and the Dlib face detection test result. The data source of the Asian face library is taken from a movie, and the resolution ratio of the Asian face library is often higher, so that SeetaFace detection is slightly superior. The experimental results were tested as follows:
setaface parameter settings: the minimum face is set to 40 × 40, and the confidence of the face is: f, scale pyramid scaling factor: 0.8f, step size of sliding window: 4, the collection work of all the faces in the picture is realized based on the parameter setting, 1080p movie data is processed on average, 20 minutes is needed, the processing speed is slow, and an improved yolov3-tiny algorithm is adopted.
In the embodiment, the face detection adopts an improved yolov3-tiny face detection algorithm to perform face detection on each extracted frame of picture, and the time spent on processing 1080p movie data is reduced to about one fourth of the original time.
S4: removing the blurred picture;
based on the attribute of the movie data, the motion and the posture in the video are always continuously changed, so that the motion blur phenomenon exists in the acquired human face. And filtering and screening the face pictures one by adopting a fuzzy judgment mechanism.
In the embodiment, blurred-picture removal uses a texture detection mechanism based on the Sobel operator, with a fast convolution function realizing the texture detection of each face. Specifically:
Before detection, each face image A is normalized to 150 × 150 so that a single judgment criterion applies. A judgment threshold Tm is set; face images scoring below it are moved into a blurred data set (kept for later use), leaving a face data set of higher definition. The horizontal and vertical edge maps are obtained by convolving A with the standard Sobel kernels:
Gx = [[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]] * A, Gy = [[+1, +2, +1], [0, 0, 0], [-1, -2, -1]] * A
where Gx and Gy are the image gray values of the horizontal and vertical edge detection maps respectively.
For each pixel, the horizontal and vertical responses are combined by:
|G| = √(Gx² + Gy²) (1)
Generally, to improve efficiency, the non-squared approximation is used:
|G| = |Gx| + |Gy| (2)
The image-edge binarization formula is:
B(x, y) = 1 if |G(x, y)| > T1, otherwise 0 (3)
where T1 = 110 is a specified threshold.
The blur value is calculated as:
FU = Σx,y B(x, y) (4)
FU is the blur value of the face image, i.e., the count of strong edge pixels; the larger the value, the sharper the image. If FU < Tm the face image is considered blurred and removed directly; otherwise it is kept.
S5: and sorting and classifying the clear data set.
In the embodiment, sorting and classifying the clear data set comprises: designing a face feature extraction model, inter-class clustering of face data, intra-class clustering of face data, inter-class merging of face data, secondary cleaning of face data, and manual naming.
Design of the face feature extraction model:
Face feature extraction is the key to data sorting, but for the purpose of building a face database the demand on recognition rate is not extreme. The deep convolutional network designed in the embodiment of the invention is therefore a residual learning network (ResNet) containing 24 convolutional layers, with triplet loss as the final loss layer. The training set is the public CASIA-WebFace data set; the model was finally tested on LFW with 95.43% accuracy, sufficient for face classification.
Specifically, the face feature extraction model is modified from the residual network ResNet-18 as follows:
one block is added at conv4_x and two blocks at conv5_x; the number of filters in each layer is halved; the loss layer adopts the triplet loss function; and the input is a 150 × 150 three-channel face image.
The kernel size of the convolutional layers in the network is 3 × 3 and the initialization method is MSRA; the first pooling layer has a 3 × 3 kernel with stride 2; the next three pooling layers have 2 × 2 kernels with stride 2 and use max pooling; the last is a global average pooling layer with a 2 × 2 kernel and stride 2; the final output feature length is 128. Model training used the CASIA-WebFace data set, finally reaching 95.43% accuracy on LFW. The loss function is as follows:
L = Σi max(0, ||f(xi^a) − f(xi^p)||² − ||f(xi^a) − f(xi^n)||² + α) (5)
where f(xi^p) and f(xi^n) are the features of the positive and negative samples respectively, and the margin hyper-parameter α mainly controls the intra-class distance and the inter-class gap.
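As a minimal sketch of the triplet loss just described, in batch form with NumPy (the function name and the margin default are assumptions, not the patent's training code):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Mean over the batch of max(0, ||a - p||^2 - ||a - n||^2 + alpha)."""
    a, p, n = (np.asarray(x, dtype=float) for x in (anchor, positive, negative))
    d_pos = np.sum((a - p) ** 2, axis=1)  # squared distance anchor -> positive
    d_neg = np.sum((a - n) ** 2, axis=1)  # squared distance anchor -> negative
    return float(np.mean(np.maximum(0.0, d_pos - d_neg + alpha)))
```

When the negative is already farther than the positive by more than the margin, the loss is zero; otherwise the violation amount is penalized.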
Inter-class clustering of face data:
The data set after blurred-image cleaning is still a mixed face collection; the key of this step is to gather identical faces together and separate different faces. The embodiment of the invention uses the dedicated deep convolutional network to extract features for all faces in the collection and applies a K-Means strategy to cluster all face features; since face clustering takes one movie as its unit, K is set to 40 (an empirical value).
Specifically, for the face picture set of one movie (about N pictures), the ResNet_face model extracts the features of the whole set with feature dimension 128, generating an (N, 128) feature matrix. K-Means clustering with K = 40 is then performed on the faces, finally generating 40 face-cluster collection boxes. The K-Means cost function is as follows:
J = Σk Σ_{fi ∈ Ck} ||fi − μk||² (6)
where fi is a face feature and μk is the feature of a cluster center.
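The K-Means cost described above can be evaluated with a few lines of NumPy (a sketch; the function name is an assumption):

```python
import numpy as np

def kmeans_cost(features, labels, centers):
    """Sum over all samples of the squared distance to their cluster center."""
    features = np.asarray(features, dtype=float)
    centers = np.asarray(centers, dtype=float)
    # centers[labels] broadcasts each sample's assigned center alongside it
    return float(np.sum((features - centers[np.asarray(labels)]) ** 2))
```

A clustering library's K-Means would minimize exactly this quantity over labels and centers.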
Intra-class clustering of face data:
K-Means clustering only achieves an overall separation of the mixed set into classes; it merely ensures that most similar individuals land in the same collection box, i.e., the 40 boxes above. However, each box still contains a large and complex amount of face data, and misassignments remain. To remove them, intra-class clustering is essential. The embodiment of the invention therefore adopts a ResNet_clustering algorithm: a ResNet network extracts the features of the faces in each collection box, and K-Means intra-class clustering is run box by box. The number of cluster centers K is not specified in advance but determined by the algorithm, and the cluster with the most samples is finally kept as the main-body category of the box.
For example: each of the 40 face collection boxes obtained from inter-class clustering is clustered again, with the cluster centers determined adaptively, so each box yields M clusters. Among the M clusters, the one with the most samples becomes the main-body category of the corresponding box. The adaptive algorithm is bisecting K-Means: before clustering, the two samples farthest apart are selected as the two initial cluster centers; the resulting clusters are then split recursively in the same way, and splitting stops when the distance between the two farthest samples within a cluster falls below a threshold.
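A minimal sketch of the bisecting K-Means procedure just described, with farthest-pair seeding, recursive splitting, and the diameter-based stop rule, might look like this (NumPy; the function name, iteration count, and degenerate-split handling are assumptions):

```python
import numpy as np

def bisecting_clusters(feats, diam_thresh):
    """Recursively bisect clusters until every cluster's diameter < diam_thresh."""
    feats = np.asarray(feats, dtype=float)
    pending = [np.arange(len(feats))]
    done = []
    while pending:
        idx = pending.pop()
        pts = feats[idx]
        if len(idx) < 2:
            done.append(idx)
            continue
        # Pairwise distances; the farthest pair defines the cluster diameter.
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        i, j = np.unravel_index(np.argmax(d), d.shape)
        if d[i, j] < diam_thresh:
            done.append(idx)          # compact enough: stop splitting
            continue
        c0, c1 = pts[i], pts[j]       # farthest pair as the two initial centers
        assign = np.zeros(len(idx), dtype=bool)
        for _ in range(10):           # a few Lloyd iterations of 2-means
            assign = (np.linalg.norm(pts - c0, axis=1)
                      > np.linalg.norm(pts - c1, axis=1))
            if assign.all() or (~assign).all():
                break
            c0 = pts[~assign].mean(axis=0)
            c1 = pts[assign].mean(axis=0)
        if assign.all() or (~assign).all():
            done.append(idx)          # degenerate split: keep as one cluster
        else:
            pending.append(idx[~assign])
            pending.append(idx[assign])
    return done
```

On well-separated data the recursion terminates with one cluster per blob; the main-body category would then be the cluster with the most samples.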
Inter-class merging of face data:
Through the above three operations, the face data set of one movie is basically formed. However, because the main characters of a movie appear frequently and with large variations in appearance, pose, and scene, the same person is easily clustered into different categories, so merging between categories is indispensable. The algorithm of this step in the embodiment of the invention is: judge the similarity between different collection boxes via the mean feature of the samples in each box, and merge boxes reasonably according to a similarity threshold.
For example: for the 40 main-body face collection boxes obtained from intra-class clustering, extract the features of the faces in each box and compute the mean features mean_feature1, mean_feature2, …, mean_feature40. Then compute the pairwise Euclidean distance between the 40 mean features and merge the two corresponding collection boxes when the distance is smaller than dis. The mean-feature formula is as follows:
mean_feature_i = (1/Ni) Σn fi^n (7)
where i indexes the collection box, fi^n is the feature of the nth sample in box i, and Ni is the total number of samples in the box.
The distance formula is as follows:
Dij = √( Σ_{d=1}^{128} (mean_feature_i[d] − mean_feature_j[d])² ) (8)
where Dij is the distance between collection box i and collection box j, and 128 is the feature dimension.
When Dij < dis, collection box i and collection box j are merged; the specified threshold dis is 0.31.
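The mean-feature merging step can be sketched as a one-pass greedy merge, a simple approximation of the pairwise comparison described above (the function name, the greedy visiting order, and the list-of-arrays representation are assumptions):

```python
import numpy as np

def merge_boxes(boxes, dis=0.31):
    """Greedily merge collection boxes whose mean features lie closer than dis."""
    boxes = [np.asarray(b, dtype=float) for b in boxes]
    means = [b.mean(axis=0) for b in boxes]   # one mean feature per box
    merged, used = [], [False] * len(boxes)
    for i in range(len(boxes)):
        if used[i]:
            continue
        group = [boxes[i]]
        for j in range(i + 1, len(boxes)):
            # Euclidean distance between the two boxes' mean features
            if not used[j] and np.linalg.norm(means[i] - means[j]) < dis:
                group.append(boxes[j])
                used[j] = True
        merged.append(np.vstack(group))
    return merged
```

Two boxes holding near-identical features collapse into one, while a distant box survives on its own.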
Secondary cleaning of face data:
To further ensure the cleanliness of the data set, a second cleaning, similar to median filtering, is applied to each collection box on the previous basis, screening samples reasonably by the distance between the median feature and each sample feature.
Specifically, the P face collection boxes obtained from inter-class merging are cleaned a second time with a median-filter-like strategy:
(1) compute the mean feature within the face collection box using formula (7);
(2) compute the distance between each face feature in the box and the mean feature (by a formula analogous to (8)) and sort by that distance;
(3) take the face feature at the median index of the sorted list; if the number of features is even, average the two middle features;
(4) compute the distance between every face feature in the box and the median face feature, and directly remove those whose distance exceeds 0.41 (an empirical value).
These four steps are carried out for every collection box.
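The four cleaning steps above can be sketched for a single collection box as follows (NumPy; the function name and the plain Euclidean distance are assumptions consistent with the distances used earlier):

```python
import numpy as np

def clean_box(feats, thresh=0.41):
    """Second cleaning of one collection box, analogous to median filtering."""
    feats = np.asarray(feats, dtype=float)
    mean = feats.mean(axis=0)                                 # step (1)
    order = np.argsort(np.linalg.norm(feats - mean, axis=1))  # step (2): sort
    n = len(order)
    if n % 2:                                 # step (3): odd count, one median
        median_feat = feats[order[n // 2]]
    else:                                     # even count: average the two
        median_feat = (feats[order[n // 2 - 1]]   # middle features
                       + feats[order[n // 2]]) / 2
    # step (4): drop samples too far from the median feature
    keep = np.linalg.norm(feats - median_feat, axis=1) <= thresh
    return feats[keep]
```

An outlier far from the cluster's median feature is removed while the dense core survives.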
Manual naming:
After all the above steps, the face-library sorting and classification for one movie is basically finished, but the IDs of the collection boxes are still placeholders and must be named manually. The strategy is: randomly extract one picture from a collection box, perform face recognition with the Baidu image-recognition tool, and name the corresponding face collection box according to the recognition result.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may be made by those skilled in the art without departing from the principle of the invention.

Claims (10)

1. An intelligent method for establishing an Asian face library, characterized by comprising the following steps:
selecting a data source;
video decoding;
detecting a human face;
removing the blurred picture;
and sorting and classifying the clear data set.
2. The intelligent Asian face library establishing method according to claim 1, wherein the data source is Asian movies.
3. The intelligent Asian face library establishing method according to claim 1, wherein the video decoding adopts frame extraction to reduce the time and space complexity of the video data, and the face detection adopts a face detection model designed from improved yolov3-tiny, specifically:
the face detection model has eight convolutional layers;
during early training, the improved yolov3-tiny is trained as a classifier, i.e., a softmax layer after the feature layers performs binary classification to obtain an initialization model;
finally, the improved yolov3-tiny is initialized with the trained initialization model and large-scale face detection training is carried out.
4. The intelligent Asian face library establishing method according to claim 1, wherein blurred-picture removal uses a texture detection mechanism based on the Sobel operator, with a fast convolution function realizing the texture detection of each face.
5. The intelligent Asian face library establishing method according to claim 1, wherein sorting and classifying the clear data set comprises: designing a face feature extraction model, inter-class clustering of face data, intra-class clustering of face data, inter-class merging of face data, secondary cleaning of face data, and manual naming;
the face feature extraction model is an improvement of the residual network ResNet-18, specifically:
one block is added at conv4_x and two blocks at conv5_x;
the number of filters in each layer is halved;
the final loss layer is designed with triplet loss;
model training is carried out with the CASIA-WebFace data set to realize extraction of the face features.
6. The intelligent Asian face library establishing method according to claim 5, wherein the inter-class clustering of face data adopts K-Means clustering to separate the mixed face set of the video data, finally generating K face collection boxes, where K is 40.
7. The intelligent Asian face library establishing method according to claim 5, wherein the intra-class clustering of face data specifically comprises: using a ResNet_clustering algorithm to screen the main-body category of each of the K face collection boxes and to clean the error data of the previous round, where the number of intra-class clusters is determined adaptively by the algorithm.
8. The intelligent Asian face library establishing method according to claim 5, wherein the inter-class merging of face data specifically comprises: judging the similarity between different collection boxes via the mean feature of the samples in each box, and merging boxes reasonably according to a similarity threshold, so that faces of the same person in different collection boxes are combined.
9. The intelligent Asian face library establishing method according to claim 5, wherein the secondary cleaning of face data comprises the following steps:
(1) compute the mean feature within the face collection box;
(2) compute the distance between each face feature in the box and the mean feature and sort by that distance;
(3) take the face feature at the median index of the sorted list; if the number of features is even, average the two middle features;
(4) compute the distance between every face feature in the box and the median face feature, and remove face data whose distance exceeds a judgment threshold.
10. The intelligent Asian face library establishing method according to claim 5, wherein the manual naming assigns an ID to each collection box via Baidu image recognition, so that subsequent face collection boxes are effectively merged.
CN201910514779.1A 2019-06-14 2019-06-14 A kind of Asia face database Intelligent Establishment method Pending CN110287835A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910514779.1A CN110287835A (en) 2019-06-14 2019-06-14 A kind of Asia face database Intelligent Establishment method
PCT/CN2020/091145 WO2020248782A1 (en) 2019-06-14 2020-05-20 Intelligent establishment method for asian face database

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910514779.1A CN110287835A (en) 2019-06-14 2019-06-14 A kind of Asia face database Intelligent Establishment method

Publications (1)

Publication Number Publication Date
CN110287835A true CN110287835A (en) 2019-09-27

Family

ID=68004830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910514779.1A Pending CN110287835A (en) 2019-06-14 2019-06-14 A kind of Asia face database Intelligent Establishment method

Country Status (2)

Country Link
CN (1) CN110287835A (en)
WO (1) WO2020248782A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938065A (en) * 2012-11-28 2013-02-20 北京旷视科技有限公司 Facial feature extraction method and face recognition method based on large-scale image data
CN109117803A (en) * 2018-08-21 2019-01-01 腾讯科技(深圳)有限公司 Clustering method, device, server and the storage medium of facial image
CN109684913A (en) * 2018-11-09 2019-04-26 长沙小钴科技有限公司 A kind of video human face mask method and system based on community discovery cluster

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6697504B2 (en) * 2000-12-15 2004-02-24 Institute For Information Industry Method of multi-level facial image recognition and system using the same
CN108921875B (en) * 2018-07-09 2021-08-17 哈尔滨工业大学(深圳) Real-time traffic flow detection and tracking method based on aerial photography data
CN109871751A (en) * 2019-01-04 2019-06-11 平安科技(深圳)有限公司 Attitude appraisal procedure, device and storage medium based on facial expression recognition
CN110287835A (en) * 2019-06-14 2019-09-27 南京云创大数据科技股份有限公司 A kind of Asia face database Intelligent Establishment method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
REDMON JOSEPH et al.: "YOLOv3: An Incremental Improvement", arXiv, 8 August 2018 (2018-08-08), pages 1-6 *
PU ZHAOBANG et al.: "MATLAB Image Processing from Beginner to Master", Xidian University Press, pages 176-177 *
HUANG YEJUE: "A Timeliness-Related Online Face Clustering Method", Computer Era *
HUANG YEJUE: "A Timeliness-Related Online Face Clustering Method", Computer Era, no. 11, 15 November 2018 (2018-11-15), pages 76-78 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020248782A1 (en) * 2019-06-14 2020-12-17 南京云创大数据科技股份有限公司 Intelligent establishment method for asian face database
CN110569827A (en) * 2019-09-28 2019-12-13 华南理工大学 Face recognition reminding system based on convolutional neural network
CN110569827B (en) * 2019-09-28 2024-01-05 华南理工大学 Face recognition reminding system based on convolutional neural network
US11495015B2 (en) 2020-03-30 2022-11-08 Altek Semiconductor Corp. Object detection device and object detection method based on neural network
CN113469321A (en) * 2020-03-30 2021-10-01 聚晶半导体股份有限公司 Object detection device and object detection method based on neural network
CN112287753A (en) * 2020-09-23 2021-01-29 武汉天宝莱信息技术有限公司 System for improving face recognition precision based on machine learning and algorithm thereof
CN112232410B (en) * 2020-10-15 2023-08-29 苏州凌图科技有限公司 Multi-region-oriented large-scale feature matching method
CN112232410A (en) * 2020-10-15 2021-01-15 浙江凌图科技有限公司 Multi-region large-scale feature-oriented matching method
CN112597862A (en) * 2020-12-16 2021-04-02 北京芯翌智能信息技术有限公司 Method and equipment for cleaning face data
CN112800840A (en) * 2020-12-28 2021-05-14 上海万雍科技股份有限公司 Face recognition management system and method
CN112800840B (en) * 2020-12-28 2022-07-01 上海万雍科技股份有限公司 Face recognition management system and method
CN112381077A (en) * 2021-01-18 2021-02-19 南京云创大数据科技股份有限公司 Method for hiding face image information
CN112381077B (en) * 2021-01-18 2021-05-11 南京云创大数据科技股份有限公司 Method for hiding face image information
CN113779290A (en) * 2021-09-01 2021-12-10 杭州视洞科技有限公司 Camera face recognition aggregation optimization method
CN114373212A (en) * 2022-01-10 2022-04-19 中国民航信息网络股份有限公司 Face recognition model construction method, face recognition method and related equipment
WO2023130613A1 (en) * 2022-01-10 2023-07-13 中国民航信息网络股份有限公司 Facial recognition model construction method, facial recognition method, and related device

Also Published As

Publication number Publication date
WO2020248782A1 (en) 2020-12-17

Similar Documents

Publication Publication Date Title
CN110287835A (en) A kind of Asia face database Intelligent Establishment method
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN107133943B (en) A kind of visible detection method of stockbridge damper defects detection
CN103366180B (en) A kind of cell image segmentation method based on automated characterization study
CN110009622B (en) Display panel appearance defect detection network and defect detection method thereof
CN112837282A (en) Small sample image defect detection method based on cloud edge cooperation and deep learning
CN113870128B (en) Digital mural image restoration method based on depth convolution countermeasure network
CN112950477A (en) High-resolution saliency target detection method based on dual-path processing
CN109191430A (en) A kind of plain color cloth defect inspection method based on Laws texture in conjunction with single classification SVM
CN113870286B (en) Foreground segmentation method based on multi-level feature and mask fusion
CN112241939B (en) Multi-scale and non-local-based light rain removal method
CN103886585A (en) Video tracking method based on rank learning
CN110852199A (en) Foreground extraction method based on double-frame coding and decoding model
CN108345835B (en) Target identification method based on compound eye imitation perception
CN110363156A (en) A kind of Facial action unit recognition methods that posture is unrelated
CN113658108A (en) Glass defect detection method based on deep learning
CN116030396A (en) Accurate segmentation method for video structured extraction
CN107133579A (en) Based on CSGF (2D)2The face identification method of PCANet convolutional networks
CN111612803B (en) Vehicle image semantic segmentation method based on image definition
CN117911437A (en) Buckwheat grain adhesion segmentation method for improving YOLOv x
CN113298857A (en) Bearing defect detection method based on neural network fusion strategy
CN103324956B (en) A kind of seat statistical method based on distributed video detection
CN108537266A (en) A kind of cloth textured fault sorting technique of depth convolutional network
CN110276260B (en) Commodity detection method based on depth camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190927