CN110796101A - Face recognition method and system of embedded platform - Google Patents

Face recognition method and system of embedded platform

Info

Publication number
CN110796101A
CN110796101A
Authority
CN
China
Prior art keywords
face
image information
face image
recognition
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911055083.3A
Other languages
Chinese (zh)
Inventor
安民洙
葛晓东
林玉娟
姜贺
梁立宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Light Speed Intelligent Equipment Co.,Ltd.
Tenghui Technology Building Intelligence (Shenzhen) Co.,Ltd.
Original Assignee
Guangdong Light Speed Intelligent Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Light Speed Intelligent Equipment Co Ltd filed Critical Guangdong Light Speed Intelligent Equipment Co Ltd
Priority to CN201911055083.3A priority Critical patent/CN110796101A/en
Publication of CN110796101A publication Critical patent/CN110796101A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses a face recognition method and system of an embedded platform. The method comprises the steps of: acquiring face image information collected by a peripheral device and preprocessing the face image information; constructing a feature fusion network and establishing a face feature recognition model; inputting the preprocessed face image information A and face image information B into the feature fusion network and extracting feature vectors from face image information A and face image information B through the trained face feature recognition model; obtaining the face feature vector F1 of face image information A and the face feature vector F2 of face image information B; and calculating the cosine distance between face feature vector F1 and face feature vector F2 and outputting the recognition result according to the cosine distance. The system is used for realizing the face recognition method. The invention can realize high-precision face recognition for embedded devices under a deep neural network.

Description

Face recognition method and system of embedded platform
[ Technical field ]
The invention relates to the technical field of face recognition, in particular to a face recognition method of an embedded platform and a system applied to the face recognition method.
[ background of the invention ]
The face recognition technology is a technology of multiple disciplines such as image processing, pattern recognition and the like, and the face image is processed and analyzed by using a computer to obtain effective characteristic information for identity recognition. Compared with other biological recognition technologies, the face recognition technology has the characteristics of non-contact and non-mandatory collection, simplicity in operation, visual result, good concealment and the like, and is more acceptable to people. The human face is a collection of patterns containing rich information, is one of main signs for mutual identification and recognition of human beings, and is one of objects of visual interest in images and videos. Compared with other human body biological characteristics such as fingerprints, irises, voice and the like, the human face recognition is more direct, and the recognition effect can be better achieved without interfering with normal behaviors of people.
Early face recognition was based on geometric features and statistical features; in practical application scenarios it is strongly affected by factors such as ambient illumination and face angle, and its limited accuracy and stability greatly restrict the occasions where face recognition can be applied. At present, with the introduction of deep learning into face recognition, the accuracy of the algorithms has improved greatly and can support most practical applications.
However, the computation load of deep learning is huge, and complex face recognition tasks currently rely mainly on the support of cloud computing. Face recognition products are developing toward lower-cost embedded ends, yet existing deep-learning face recognition algorithms are difficult to deploy on embedded devices.
[ summary of the invention ]
The invention mainly aims to provide a face recognition method of an embedded platform based on deep learning and embedded equipment.
The invention further aims to provide a face recognition system based on an embedded platform of deep learning and embedded equipment.
In order to achieve the above main object, the face recognition method of the embedded platform provided by the invention comprises: acquiring face image information collected by a peripheral device and preprocessing the face image information; constructing a feature fusion network and establishing a face feature recognition model; inputting the preprocessed face image information A and face image information B into the feature fusion network, and extracting feature vectors from face image information A and face image information B through the trained face feature recognition model; obtaining the face feature vector F1 of face image information A and the face feature vector F2 of face image information B; and calculating the cosine distance between face feature vector F1 and face feature vector F2, and outputting the recognition result according to the cosine distance.
In a further scheme, preprocessing the face image information comprises: inputting the face image information into a face detection network for face detection; detecting the area where the face is located in the face image information and the coordinates of five key points on the face, namely the left eye, right eye, nose, left mouth corner and right mouth corner; and performing affine transformation according to the coordinates of the five key points to obtain a corrected, cropped face recognition image, wherein the face detection network is an MTCNN network.
In a further aspect, the feature fusion network is a MobileNetV2 deep network.
A further scheme is that an MS-Celeb-1M data set is constructed as a face training data set, the face training data set is input into a softmax loss function for training, and a pre-training model is obtained through the ArcFace loss function for training; and training the pre-training model by the acquired Asian face data set through an ArcFace loss function, and further obtaining a face feature recognition model.
The method further comprises the steps of determining that the face image information A and the face image information B are the same person if the cosine distance is larger than a face similarity threshold, and determining that the face image information A and the face image information B are different persons if the cosine distance is smaller than the face similarity threshold.
Therefore, after a captured face picture undergoes face detection and position adjustment through affine transformation, the method can input it into the deep recognition network for recognition, so that the whole recognition system is lighter without loss of precision and has high practicability.
In addition, because the public data set of the face recognition mainly comes from European and American countries, the invention adopts Asian face data to train in order to improve the precision of the model facing Asian faces.
Therefore, the face recognition method of the invention adopts a deep network suitable for embedded equipment, and improves the model precision by utilizing Asian face data.
In order to achieve the other object, the invention further provides a face recognition system of an embedded platform, which comprises: a preprocessing module, used for acquiring face image information collected by a peripheral device and preprocessing the face image information; a model training module, used for constructing a feature fusion network and establishing a face feature recognition model; a feature extraction module, used for inputting the preprocessed face image information A and face image information B into the feature fusion network and extracting feature vectors from face image information A and face image information B through the trained face feature recognition model, and also used for obtaining the face feature vector F1 of face image information A and the face feature vector F2 of face image information B; and a recognition result output module, used for calculating the cosine distance between face feature vector F1 and face feature vector F2 and outputting the recognition result according to the cosine distance.
In a further scheme, the face image information is input into a face detection network for face detection; the area where the face is located in the face image information and the coordinates of five key points on the face, namely the left eye, right eye, nose, left mouth corner and right mouth corner, are detected; and affine transformation is performed according to the coordinates of the five key points to obtain a corrected, cropped face recognition image, wherein the face detection network is an MTCNN network.
In a further aspect, the feature fusion network is a MobileNetV2 deep network.
A further scheme is that an MS-Celeb-1M data set is constructed as a face training data set, the face training data set is input into a softmax loss function for training, and a pre-training model is obtained through the ArcFace loss function for training; and training the pre-training model by the acquired Asian face data set through an ArcFace loss function, and further obtaining a face feature recognition model.
The method further comprises the steps of determining that the face image information A and the face image information B are the same person if the cosine distance is larger than a face similarity threshold, and determining that the face image information A and the face image information B are different persons if the cosine distance is smaller than the face similarity threshold.
Therefore, the face recognition system provided by the invention can realize face recognition under a deep neural network, is a high-precision face recognition system facing embedded equipment, and a shot face picture can be input into the deep recognition network for recognition after being subjected to face detection and position adjustment through affine transformation, so that the whole recognition system is lighter in weight, has no loss of precision and has higher practicability.
In addition, because the public data set of the face recognition mainly comes from European and American countries, the invention adopts Asian face data to train in order to improve the precision of the model facing Asian faces.
Therefore, the system of the invention adopts a deep network suitable for embedded equipment, and improves the model precision by utilizing Asian face data.
[ description of the drawings ]
Fig. 1 is a flow chart of an embodiment of a face recognition method of an embedded platform according to the present invention.
Fig. 2 is a schematic block diagram of an embodiment of a face recognition system of an embedded platform according to the present invention.
[ Detailed description of the embodiments ]
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention.
The embodiment of a face recognition method of an embedded platform comprises the following steps:
referring to fig. 1, when detecting and recognizing the face of a user, in the face recognition method of this embodiment, first, step S1 is executed to acquire face image information acquired by a peripheral device, and pre-process the face image information, where in the embodiment of the present invention, the peripheral device may be a mobile phone, a camera, an access control system, or other devices with an image acquisition function.
Specifically, in the face recognition method of this embodiment, when the face image information is preprocessed, the face image information is input into a face detection network for face detection; the coordinates of five key points on the face, namely the left eye, right eye, nose, left mouth corner and right mouth corner, and the area where the face is located in the face image information are detected; and affine transformation is performed according to the coordinates of the five key points to obtain a corrected, cropped face recognition image, wherein the face detection network is an MTCNN network.
Therefore, in the preprocessing part, to improve the speed of face detection, the MTCNN network with its cascade design is used to detect the face image and obtain five landmark points, namely the left eye, right eye, nose tip, left mouth corner and right mouth corner. A fixed template is adopted for the landmark points of the target face, the affine transformation equation of the image is solved from the relation between the detected points and the target points, and the corrected, cropped face recognition image is then obtained through affine transformation. The size of the cropped face recognition image is 112×112.
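The alignment step above — five detected landmarks mapped onto a fixed target template by solving a transformation equation — can be sketched as follows. This is not code from the patent: the template coordinates are the ones commonly used in open-source ArcFace-style pipelines for a 112×112 crop, and a least-squares similarity transform stands in for the affine transformation equation.

```python
import numpy as np

# Fixed 5-point target template for a 112x112 aligned crop
# (coordinates commonly used in ArcFace-style pipelines; an assumption here).
TEMPLATE = np.array([
    [38.2946, 51.6963],   # left eye
    [73.5318, 51.5014],   # right eye
    [56.0252, 71.7366],   # nose tip
    [41.5493, 92.3655],   # left mouth corner
    [70.7299, 92.2041],   # right mouth corner
])

def estimate_similarity(src, dst):
    """Least-squares similarity transform (scale + rotation + translation)
    mapping detected landmarks src onto the template dst.
    Returns a 2x3 matrix usable with an image-warping routine."""
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    b = dst.reshape(-1)
    # For each point: x' = a*x - c*y + tx ;  y' = c*x + a*y + ty
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = src[:, 1]; A[1::2, 1] = src[:, 0];  A[1::2, 3] = 1.0
    a, c, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([[a, -c, tx], [c, a, ty]])

def warp_points(M, pts):
    """Apply the 2x3 transform M to an (n, 2) array of points."""
    return pts @ M[:, :2].T + M[:, 2]
```

In a full pipeline the returned matrix would be handed to an image-warping function (for example OpenCV's `cv2.warpAffine`) to produce the corrected 112×112 crop.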
Then, step S2 is executed to construct the feature fusion network and establish the face feature recognition model, wherein the feature fusion network is a MobileNetV2 deep network. In the face feature extraction part of this embodiment, MobileNetV2 is used as the backbone network; the network adopts depthwise separable convolutions and inverted residual modules, achieving a good balance between speed and precision. In addition, a global average pooling layer replaces the fully-connected layer of the original MobileNetV2, which reduces the number of model parameters and improves the computation speed.
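The pooling-head change described above — global average pooling in place of the fully-connected layer, ahead of the embedding — can be illustrated with a minimal numpy sketch. The convolutional backbone itself is omitted; the (1280, 4, 4) feature-map shape and the random projection weights are illustrative assumptions, while the 128-dimension embedding length is the one stated later in the description.

```python
import numpy as np

def global_average_pool(feat):
    """Collapse a (C, H, W) feature map to a C-vector by spatial averaging;
    unlike a fully-connected layer, this step adds no parameters."""
    return feat.mean(axis=(1, 2))

def embed(feat, proj):
    """Project the pooled vector to a fixed-length embedding and
    L2-normalize it, so cosine similarity becomes a plain dot product."""
    v = global_average_pool(feat) @ proj
    return v / np.linalg.norm(v)

# Illustrative shapes: MobileNetV2's last feature map has 1280 channels;
# the 4x4 spatial size is an assumed value for a small input crop.
rng = np.random.default_rng(0)
feature_map = rng.standard_normal((1280, 4, 4))
projection = rng.standard_normal((1280, 128)) * 0.01
embedding = embed(feature_map, projection)   # shape (128,), unit norm
```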
When the face feature recognition model is established, firstly, an MS-Celeb-1M data set is established as a face training data set, the face training data set is input into a softmax loss function for training, and a pre-training model is obtained through the training of the ArcFace loss function; and training the pre-training model by the acquired Asian face data set through an ArcFace loss function, and further obtaining a face feature recognition model.
Specifically, the training method of the face feature recognition model of this embodiment is as follows. First, the network is trained on the MS-Celeb-1M data set to initialize its parameters, so that it does not easily overfit a single data set. Then, it is fine-tuned on a self-collected Asian face data set, so that the model learns the distribution of that data set and achieves higher precision in Asian-face-oriented scenarios. The procedure can be divided into two stages. In the first stage, the MS-Celeb-1M data set is adopted: to make the network easy to converge, it is first trained with a softmax loss function and then trained with the ArcFace loss function, so that the features extracted by the network obtain smaller intra-class distances and larger inter-class distances. In the second stage, the self-collected Asian face data set is adopted, the model obtained in the first stage is used as the pre-training model, and training is carried out with the ArcFace loss function, so that the distribution of the model is closer to the Asian face data.
Then, step S3 is executed: the preprocessed face image information A and face image information B are input into the feature fusion network, and feature vectors are extracted from face image information A and face image information B through the trained face feature recognition model.
Then, step S4 is executed to obtain the face feature vector F1 of face image information A and the face feature vector F2 of face image information B.
Therefore, in the face feature extraction part, the deep network designed and trained by the invention extracts a feature vector from the cropped face recognition image; the feature vector has a length of 128 and contains enough information to describe the face attributes.
Then, step S5 is executed to calculate the cosine distance between face feature vector F1 and face feature vector F2, and to output the recognition result according to the cosine distance.
Specifically, if the cosine distance is greater than the face similarity threshold, it is determined that the face image information a and the face image information B are the same person, and if the cosine distance is less than the face similarity threshold, it is determined that the face image information a and the face image information B are different persons.
It can be seen that, in the feature comparison part, the cosine distance between the two feature vectors extracted from the two cropped face recognition images is calculated; if the distance is greater than the face similarity threshold (e.g., 0.6), the two are judged to be the same person, and if it is less than the face similarity threshold, they are judged to be different persons.
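The comparison rule above can be written out directly. Note that the "cosine distance" of the text behaves as cosine similarity (larger means more alike); the 0.6 threshold is the example value given above.

```python
import numpy as np

def cosine_similarity(f1, f2):
    """Cosine of the angle between two feature vectors (the text's
    'cosine distance'): 1.0 for identical directions, 0.0 for orthogonal."""
    return float(np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2)))

def same_person(f1, f2, threshold=0.6):
    """Verification decision: a similarity above the threshold means the
    two cropped face images are judged to be the same person."""
    return cosine_similarity(f1, f2) > threshold
```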
Therefore, the invention can realize face recognition under a deep neural network and provides a high-precision face recognition algorithm suitable for embedded devices: after a captured face picture undergoes face detection and position adjustment through affine transformation, it can be input into the deep recognition network for recognition, so that the whole recognition system is lighter without loss of precision and has high practicability.
In addition, because the public data set of the face recognition mainly comes from European and American countries, the invention adopts Asian face data to train in order to improve the precision of the model facing Asian faces.
Therefore, the face recognition method of the invention adopts a deep network suitable for embedded equipment, and improves the model precision by utilizing Asian face data.
The embodiment of the face recognition system of the embedded platform comprises the following steps:
as shown in fig. 2, fig. 2 is a schematic block diagram of an embodiment of a face recognition system of an embedded platform according to the present invention. The system comprises a preprocessing module 10, a model training module 20, a feature extraction module 30 and a recognition result output module 40. The face recognition system further comprises an upper computer and an embedded device end (such as a Rayleigh micro RK3399CPU), wherein the upper computer is used for transplanting a driving program and a preset face recognition program to the embedded device end, and the embedded device end is used for running the face recognition program and displaying a recognition result on a display. The face recognition program comprises the steps of obtaining face image information, preprocessing the face image information, establishing a face feature recognition model, extracting face feature vectors and outputting a face recognition result.
The preprocessing module 10 is configured to acquire face image information acquired by a peripheral device, and preprocess the face image information.
The model training module 20 is used for constructing a feature fusion network and establishing a face feature recognition model.
The feature extraction module 30 is configured to input the preprocessed face image information a and face image information B into a feature fusion network, and perform feature vector extraction on the face image information a and the face image information B through a trained face feature recognition model.
The feature extraction module 30 is further configured to obtain the face feature vector F1 of face image information A and the face feature vector F2 of face image information B.
The recognition result output module 40 is used for calculating the cosine distance between face feature vector F1 and face feature vector F2 and outputting the recognition result according to the cosine distance.
Further, the preprocessing module 10 is configured to preprocess the face image information, and includes: inputting the face image information into a face detection network for face detection, detecting the area of the face in the face image information and the coordinates of five key points including a left eye, a right eye, a nose, a left mouth corner and a right mouth corner on the face, and performing affine transformation according to the coordinates of the five key points to obtain a corrected face identification cutting image, wherein the face detection network is an MTCNN network.
Further, the feature fusion network is a MobileNetV2 deep network.
Further, the model training module 20 is configured to establish a face feature recognition model, including: an MS-Celeb-1M data set is constructed as a face training data set, the face training data set is input into a softmax loss function for training, and a pre-training model is obtained through the training of the ArcFace loss function; and training the pre-training model by the acquired Asian face data set through an ArcFace loss function, and further obtaining a face feature recognition model.
Further, the recognition result output module 40 is configured to output a recognition result according to the cosine distance, and includes: and if the cosine distance is greater than the face similarity threshold, determining that the face image information A and the face image information B are the same person, and if the cosine distance is less than the face similarity threshold, determining that the face image information A and the face image information B are different persons.
Specifically, on 3000+ positive face pairs and 3000+ negative face pairs constructed from an actual data set, the judgment accuracy reaches 97.3%, and the measured speed on the embedded device end (Rockchip RK3399 CPU) reaches about 50 ms.
Therefore, the face recognition system provided by the invention can realize face recognition under a deep neural network, is a high-precision face recognition system facing embedded equipment, and a shot face picture can be input into the deep recognition network for recognition after being subjected to face detection and position adjustment through affine transformation, so that the whole recognition system is lighter in weight, has no loss of precision and has higher practicability.
In addition, because the public data set of the face recognition mainly comes from European and American countries, the invention adopts Asian face data to train in order to improve the precision of the model facing Asian faces.
Therefore, the system of the invention adopts a deep network suitable for embedded equipment, and improves the model precision by utilizing Asian face data.
It should be noted that the above is only a preferred embodiment of the present invention, but the design concept of the present invention is not limited thereto, and any insubstantial modifications made by using the design concept also fall within the protection scope of the present invention.

Claims (10)

1. A face recognition method of an embedded platform is characterized by comprising the following steps:
acquiring face image information acquired by peripheral equipment, and preprocessing the face image information;
constructing a feature fusion network, and establishing a face feature recognition model;
inputting the preprocessed face image information A and the preprocessed face image information B into a feature fusion network, and extracting feature vectors of the face image information A and the face image information B through a trained face feature recognition model;
obtaining a face feature vector F of face image information A1And face feature vector F of face image information B2
Calculating face feature vector F1And face feature vector F2And outputting an identification result according to the cosine distance.
2. The method according to claim 1, wherein the preprocessing the face image information comprises:
inputting the face image information into a face detection network for face detection, detecting the area of the face in the face image information and the coordinates of five key points including a left eye, a right eye, a nose, a left mouth corner and a right mouth corner on the face, and performing affine transformation according to the coordinates of the five key points to obtain a corrected face recognition cutting image, wherein the face detection network is an MTCNN network.
3. The face recognition method of claim 1, wherein the constructing a feature fusion network comprises:
the feature fusion network is a MobileNet V2 deep network.
4. The face recognition method according to any one of claims 1 to 3, wherein the establishing of the face feature recognition model specifically includes:
an MS-Celeb-1M data set is constructed as a face training data set, the face training data set is input into a softmax loss function for training, and a pre-training model is obtained through the training of the ArcFace loss function; and training the pre-training model by the acquired Asian face data set through an ArcFace loss function, and further obtaining a face feature recognition model.
5. The face recognition method according to any one of claims 1 to 3, wherein outputting a recognition result according to the cosine distance comprises:
and if the cosine distance is greater than the face similarity threshold, determining that the face image information A and the face image information B are the same person, and if the cosine distance is less than the face similarity threshold, determining that the face image information A and the face image information B are different persons.
6. A face recognition system of an embedded platform, comprising:
the preprocessing module is used for acquiring face image information acquired by peripheral equipment and preprocessing the face image information;
the model training module is used for constructing a feature fusion network and establishing a human face feature recognition model;
the feature extraction module is used for inputting the preprocessed face image information A and the preprocessed face image information B into a feature fusion network and extracting feature vectors of the face image information A and the face image information B through a trained face feature recognition model;
the feature extraction module is also used for obtaining a face feature vector F of the face image information A1And face feature vector F of face image information B2
A recognition result output module for calculating the face feature vector F1And face feature vector F2And outputting an identification result according to the cosine distance.
7. The face recognition system of claim 6, wherein the preprocessing module is configured to preprocess the face image information and comprises:
inputting the face image information into a face detection network for face detection, detecting the area of the face in the face image information and the coordinates of five key points including a left eye, a right eye, a nose, a left mouth corner and a right mouth corner on the face, and performing affine transformation according to the coordinates of the five key points to obtain a corrected face recognition cutting image, wherein the face detection network is an MTCNN network.
8. The face recognition system of claim 6, wherein the model training module is configured to construct a feature fusion network, and comprises:
the feature fusion network is a MobileNet V2 deep network.
9. The face recognition system of any one of claims 6 to 8, wherein the model training module is configured to build a face feature recognition model, and comprises:
an MS-Celeb-1M data set is constructed as a face training data set, the face training data set is input into a softmax loss function for training, and a pre-training model is obtained through the training of the ArcFace loss function; and training the pre-training model by the acquired Asian face data set through an ArcFace loss function, and further obtaining a face feature recognition model.
10. The face recognition system according to any one of claims 6 to 8, wherein the recognition result output module is configured to output a recognition result according to the cosine distance, and includes:
if the cosine distance is greater than the face similarity threshold, the face image information A and the face image information B are determined to be the same person, and if the cosine distance is less than the face similarity threshold, the face image information A and the face image information B are determined to be different persons.
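The decision rule in claim 10 reduces to comparing a cosine score against a threshold; since a larger score must mean "more alike" for the rule to make sense, the claim's "cosine distance" behaves as a cosine similarity. A minimal sketch, with the threshold value as a placeholder assumption (the patent does not fix one):

```python
import numpy as np

def cosine_similarity(f1, f2):
    """Cosine similarity between two face feature vectors."""
    f1, f2 = np.asarray(f1, dtype=np.float64), np.asarray(f2, dtype=np.float64)
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2)))

def verify(f1, f2, threshold=0.5):
    """Same person iff the score exceeds the face similarity threshold.
    threshold=0.5 is an illustrative value, not taken from the patent."""
    return cosine_similarity(f1, f2) > threshold
```

In practice the threshold would be chosen on a validation set to trade off false accepts against false rejects for the target deployment.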
CN201911055083.3A 2019-10-31 2019-10-31 Face recognition method and system of embedded platform Pending CN110796101A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911055083.3A CN110796101A (en) 2019-10-31 2019-10-31 Face recognition method and system of embedded platform

Publications (1)

Publication Number Publication Date
CN110796101A true CN110796101A (en) 2020-02-14

Family

ID=69440682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911055083.3A Pending CN110796101A (en) 2019-10-31 2019-10-31 Face recognition method and system of embedded platform

Country Status (1)

Country Link
CN (1) CN110796101A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875602A (*) 2018-05-31 2018-11-23 珠海亿智电子科技有限公司 Face recognition method based on deep learning in a surveillance environment
CN109948568A (*) 2019-03-26 2019-06-28 东华大学 Embedded face recognition system based on ARM microprocessor and deep learning
CN110197146A (en) * 2019-05-23 2019-09-03 招商局金融科技有限公司 Facial image analysis method, electronic device and storage medium based on deep learning

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242097A (en) * 2020-02-27 2020-06-05 腾讯科技(深圳)有限公司 Face recognition method and device, computer readable medium and electronic equipment
CN111242097B (en) * 2020-02-27 2023-04-18 腾讯科技(深圳)有限公司 Face recognition method and device, computer readable medium and electronic equipment
CN111339983A (en) * 2020-03-05 2020-06-26 四川长虹电器股份有限公司 Method for fine-tuning face recognition model
CN111402143A (en) * 2020-06-03 2020-07-10 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN112149571A (en) * 2020-09-24 2020-12-29 深圳龙岗智能视听研究院 Face recognition method based on neural network affine transformation
CN112651389A (en) * 2021-01-20 2021-04-13 北京中科虹霸科技有限公司 Method and device for training, correcting and identifying correction model of non-orthoptic iris image
CN112651389B (en) * 2021-01-20 2023-11-14 北京中科虹霸科技有限公司 Correction model training, correction and recognition method and device for non-emmetropic iris image
CN115311705A (en) * 2022-07-06 2022-11-08 南京邮电大学 Face cloud recognition system based on deep learning
CN115311705B (en) * 2022-07-06 2023-08-15 南京邮电大学 Face cloud recognition system based on deep learning
CN117710700A (en) * 2024-02-05 2024-03-15 厦门她趣信息技术有限公司 Similar image detection method and system based on deep learning

Similar Documents

Publication Publication Date Title
CN110796101A (en) Face recognition method and system of embedded platform
CN108921100B (en) Face recognition method and system based on visible light image and infrared image fusion
Islam et al. Real time hand gesture recognition using different algorithms based on American sign language
Ahmed et al. Vision based hand gesture recognition using dynamic time warping for Indian sign language
CN105335722B (en) Detection system and method based on depth image information
JP4241763B2 (en) Person recognition apparatus and method
CN102938065B (en) Face feature extraction method and face identification method based on large-scale image data
CN112766159A (en) Cross-database micro-expression identification method based on multi-feature fusion
CN105740781B (en) Three-dimensional human face living body detection method and device
CN105740778B (en) Improved three-dimensional human face in-vivo detection method and device
CN104850825A (en) Facial image face score calculating method based on convolutional neural network
KR101937323B1 (en) System for generating signcryption of wireless mobile communication
CN111428689B (en) Face image feature extraction method based on multi-pool information fusion
CN108171223A (en) A kind of face identification method and system based on multi-model multichannel
Ekbote et al. Indian sign language recognition using ANN and SVM classifiers
Rao et al. Sign Language Recognition System Simulated for Video Captured with Smart Phone Front Camera.
Chansri et al. Hand gesture recognition for Thai sign language in complex background using fusion of depth and color video
KR102005150B1 (en) Facial expression recognition system and method using machine learning
CN107911643A (en) Show the method and apparatus of scene special effect in a kind of video communication
CN111079465A (en) Emotional state comprehensive judgment method based on three-dimensional imaging analysis
CN104573628A (en) Three-dimensional face recognition method
CN116386118B (en) Drama matching cosmetic system and method based on human image recognition
Agrawal et al. A Tutor for the hearing impaired (developed using Automatic Gesture Recognition)
JP2013218605A (en) Image recognition device, image recognition method, and program
CN106406507B (en) Image processing method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210831

Address after: 701, 7 / F, No. 60, Chuangxin Third Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, 519000

Applicant after: Guangdong Light Speed Intelligent Equipment Co.,Ltd.

Applicant after: Tenghui Technology Building Intelligence (Shenzhen) Co.,Ltd.

Address before: 701, 7 / F, No. 60, Chuangxin Third Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, 519000

Applicant before: Guangdong Light Speed Intelligent Equipment Co.,Ltd.