CN111666866A - Cross-platform off-line multi-thread face recognition method based on OpenCV - Google Patents

Cross-platform off-line multi-thread face recognition method based on OpenCV

Info

Publication number
CN111666866A
Authority
CN
China
Prior art keywords
face
data
network
opencv
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010490322.4A
Other languages
Chinese (zh)
Inventor
刘强
吕少鹏
邱益亮
吴坤雄
洪华峰
吴敏
江雨
林猛
江晓岚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Fufu Information Technology Co Ltd
Original Assignee
China Telecom Fufu Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Fufu Information Technology Co Ltd filed Critical China Telecom Fufu Information Technology Co Ltd
Priority to CN202010490322.4A priority Critical patent/CN111666866A/en
Publication of CN111666866A publication Critical patent/CN111666866A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Abstract

The invention discloses a cross-platform offline multithreaded face recognition method based on OpenCV, which comprises the following steps: (1) establishing a database: pre-writing face feature data into a database to form a face feature set; (2) preprocessing: acquiring image data and converting it into an image of preset format and specification, set as data A; (3) face detection: classifying, framing, and locating the facial features of the face in the image of data A, set as data B; (4) feature extraction: calling a ResNet program to perform feature extraction on data B, forming an OpenCV data matrix object cv::Mat, set as data C; (5) calling a face comparison module program of the ResNet network model, comparing the face feature set in the database with data C through the face comparison module program, returning a similarity result, and extracting the comparison result data with the maximum similarity that is greater than a preset value; (6) outputting the recognition result. The scheme is reliable and efficient to implement.

Description

Cross-platform off-line multi-thread face recognition method based on OpenCV
Technical Field
The invention relates to the technical field of face recognition, in particular to a cross-platform offline multi-thread face recognition method based on OpenCV.
Background
With the popularization of face recognition technology, more and more projects use face recognition, and face recognition devices are increasingly diverse (Windows, Android, Linux, and the like). To meet the needs of different devices using the algorithm, a cross-platform offline face recognition algorithm that can be rapidly adapted to various devices is urgently needed. Current implementations face two difficulties:
1. how to quickly adapt to various platforms with a fast and simple compilation method;
2. how to realize local offline recognition using only the computing resources of a single CPU, without depending on other equipment.
The following problems also exist:
Existing face recognition hardware is varied. To meet performance requirements, the face recognition algorithm needs to be implemented in native code; to bring a product online quickly, the algorithm is often deployed on a server. However, the server-side approach has the following defects:
1. Reliability: especially when the server side is on the Internet, network fluctuation or interruption can make the service unavailable;
2. Rental cost: an online algorithm requires substantial server support; rather than being dedicated equipment, it is usually provided by renting a third-party algorithm, and in projects of a one-off (buyout) nature the recurring annual rental fees make costs difficult to control;
3. Some offline algorithms already exist, but they generally target a single platform and lack optimization; for projects with limited hardware budgets, hardware selection is very difficult.
Disclosure of Invention
In view of the state of the prior art, the invention aims to provide an OpenCV-based cross-platform offline multithreaded face recognition method with high precision and fast response, which uses multithreading to effectively overcome the slow feature extraction of ResNet and fully exploits the performance of a multi-core CPU (Central Processing Unit) for optimized face recognition.
In order to achieve the technical purpose, the technical scheme adopted by the invention is as follows:
a cross-platform off-line multithreading face recognition method based on OpenCV comprises the following steps:
(1) establishing a database: pre-writing the face feature data into a database to form a face feature set;
(2) preprocessing: acquiring image data, converting it into an image of preset format and specification, set as data A;
(3) face detection: classifying, framing, and locating the facial features of the face in the image of data A, set as data B;
(4) feature extraction: calling a ResNet program to perform feature extraction on data B, forming an OpenCV data matrix object cv::Mat, set as data C;
(5) calling a face comparison module program adopting a ResNet network model, comparing the face feature set in the database with the data C processed in step (4) through the face comparison module program, returning a similarity result, and extracting the comparison result data with the maximum similarity that is greater than a preset value;
(6) outputting the recognition result.
As a possible implementation manner, further, in step (1), the facial feature data pre-written into the database is facial feature data with label information, and the label information at least includes personal information corresponding to the facial feature data.
As a possible implementation manner, further, in step (2), the image data includes at least a picture or a video stream; when the image data is a video stream, it is transcoded into a preset specified format, a picture frame is then taken from the video stream and converted into RGB format, and finally a scaled picture frame of preset specification is generated and written into the shared memory.
As a preferred embodiment, the format of the picture frame in the video stream is converted by calling OpenCV's cv::cvtColor interface.
As a preferred embodiment, the format into which the video stream is transcoded is YUV or NV12.
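For illustration, the following is a minimal C++ sketch of this preprocessing step, assuming OpenCV 4. The conversion code cv::COLOR_YUV2RGB_NV12, the 640 x 480 target size, and the function name are illustrative assumptions; the patent does not fix a particular target resolution.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Convert one decoded NV12 frame to RGB and scale it to the preset
// specification. The input must hold the Y plane followed by the
// interleaved UV plane, i.e. a (height * 3 / 2) x width CV_8UC1 Mat.
cv::Mat preprocessFrame(const cv::Mat& nv12) {
    cv::Mat rgb;
    cv::cvtColor(nv12, rgb, cv::COLOR_YUV2RGB_NV12);

    cv::Mat scaled;                              // becomes data A
    cv::resize(rgb, scaled, cv::Size(640, 480)); // assumed preset specification
    return scaled;  // the caller writes this into the shared memory
}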
As a preferred embodiment, in step (3), the MTCNN algorithm is used to classify, frame, and locate the facial features of the face in the picture, wherein the MTCNN algorithm comprises a Proposal Network subnet algorithm, a Refine Network subnet algorithm, and an Output Network subnet algorithm, and the specific processing includes: the picture is processed by the Proposal Network, Refine Network, and Output Network subnet algorithms, with the result obtained by the Proposal Network subnet algorithm input into the Refine Network subnet algorithm and the result obtained by the Refine Network subnet algorithm input into the Output Network subnet algorithm for further processing, finally obtaining a scaled picture frame of preset specification and feature data.
As a preferred embodiment, the Proposal Network subnet algorithm is used to generate candidate boxes on the picture;
the Refine Network subnet algorithm is used to remove non-face boxes from the output of the Proposal Network subnet algorithm;
the Output Network subnet algorithm is used for regression of the landmark positions, adjusting the picture of the output result to a 48 x 48 specification; the output contains the 4 coordinate values, the score, and the key point information of a number of candidate boxes.
As a preferred embodiment, in step (5), the preset value of the similarity is 60%.
As one parallel processing scheme, preferably, a separate processing thread is called to handle step (2), outputting data A to the shared memory, and a separate processing thread is called to sequentially handle steps (3), (4), (5), and (6); the two threads run in parallel.
As another parallel processing scheme, preferably, a separate processing thread is called to handle step (2), outputting data A to the shared memory; a separate processing thread is called to handle step (3), outputting data B to the shared memory; and a separate processing thread is called to sequentially handle steps (4), (5), and (6); all three threads run in parallel.
The algorithm is written in native code (C/C++), which cannot be compiled once and run everywhere as Java can; therefore the computer vision library OpenCV, which supports cross-platform cross compilation, is used as the underlying support.
The principle of the offline face recognition process is as follows: an MTCNN (Multi-Task Cascaded Convolutional Networks) model based on deep learning is used for face detection, a ResNet (Deep Residual Network) model based on deep learning is used for face feature comparison, and OpenCV's dnn (Deep Neural Networks) extension module is used for model loading and operation.
The key points of the invention are how to realize face detection with MTCNN on the OpenCV framework, how to perform feature extraction on the detection result with ResNet, and how to perform 1:N comparison against the existing library; the comparison result values are sorted and the maximum is taken as the matching result.
At present most CPUs are multi-core; multithreading is used to fully utilize the computing power of the multi-core CPU and achieve higher efficiency.
The general implementation idea is as follows:
The face recognition process comprises face detection, face feature extraction, and face comparison.
1. Face Detection:
An original picture is obtained, usually from a video device such as a camera; the decoded video stream frames are usually in a YUV format (NV12, etc.), and the picture frame must be converted into RGB format by calling OpenCV's cv::cvtColor interface.
The cascaded structure of the MTCNN algorithm mainly comprises three subnetworks: the Proposal Network (P-Net), the Refine Network (R-Net), and the Output Network (O-Net), wherein,
P-Net is mainly used to generate candidate boxes (bounding boxes). During training, 3 branches are placed at the top of the network, used for face classification, face box regression, and face key point localization respectively;
R-Net is mainly used to remove the large number of non-face boxes. The input of this step is the bounding boxes generated by the preceding P-Net, each resized to 24 x 24;
O-Net is similar to R-Net, except that this step adds regression of the landmark positions. The input is resized to 48 x 48, and the output contains the 4 coordinate values of P bounding boxes, their scores, and key point information.
The data finally obtained are the face coordinate frames (bounding boxes) and the face landmark points, both of float type.
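For illustration, a minimal C++ sketch of this cascade wired through OpenCV's dnn module follows. The model file names, the normalization constants, and the FaceBox struct are assumptions, and the P-Net image pyramid and the non-maximum suppression between stages are elided; this is a sketch of the cascade structure, not a complete MTCNN implementation.

#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

struct FaceBox {
    cv::Rect box;                        // face coordinate frame (bounding box)
    float score = 0.f;                   // face classification score
    std::vector<cv::Point2f> landmarks;  // key points regressed by O-Net
};

// Common MTCNN input normalization: (pixel - 127.5) / 128.
static cv::Mat asBlob(const cv::Mat& img, int side) {
    return cv::dnn::blobFromImage(img, 1.0 / 128.0, cv::Size(side, side),
                                  cv::Scalar(127.5, 127.5, 127.5));
}

std::vector<FaceBox> detectFaces(const cv::Mat& rgb) {
    static cv::dnn::Net pnet = cv::dnn::readNetFromCaffe("pnet.prototxt", "pnet.caffemodel");
    static cv::dnn::Net rnet = cv::dnn::readNetFromCaffe("rnet.prototxt", "rnet.caffemodel");
    static cv::dnn::Net onet = cv::dnn::readNetFromCaffe("onet.prototxt", "onet.caffemodel");

    std::vector<FaceBox> candidates;
    // Stage 1 (P-Net): propose candidate boxes over an image pyramid (elided).
    // Stage 2 (R-Net): feed each candidate resized to 24x24, drop non-faces (elided).
    // Stage 3 (O-Net): refine surviving boxes at 48x48 and regress the landmarks.
    for (FaceBox& c : candidates) {
        cv::Mat crop;
        cv::resize(rgb(c.box), crop, cv::Size(48, 48));
        onet.setInput(asBlob(crop, 48));
        cv::Mat out = onet.forward();
        // ... decode out into c.box, c.score and c.landmarks (all float values)
    }
    return candidates;
}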
2. Face Feature Extraction
ResNet is called to perform feature extraction on the face detection result, finally forming an OpenCV data matrix object cv::Mat.
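For illustration, a minimal C++ sketch of this step via OpenCV's dnn extension module, which the description names for model loading and operation. The model file name, the 112 x 112 input size, and the normalization are assumptions, since the patent does not specify the exact ResNet variant.

#include <opencv2/dnn.hpp>

// Run the detected, cropped face through a ResNet model and return its
// feature descriptor as a cv::Mat (data C in the steps above).
cv::Mat extractFeature(const cv::Mat& alignedFace) {
    static cv::dnn::Net resnet = cv::dnn::readNet("face_resnet.onnx"); // assumed file
    cv::Mat blob = cv::dnn::blobFromImage(alignedFace, 1.0 / 255.0,
                                          cv::Size(112, 112), cv::Scalar(),
                                          /*swapRB=*/true);
    resnet.setInput(blob);
    return resnet.forward().clone(); // 1 x N float feature vector
}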
3. Face Comparison
The face comparison module adopts the ResNet network model to compare the face feature set with the face feature data from step 2; ResNet returns a similarity result, and the maximum value greater than 0.6 is taken as the comparison result.
Using these three face recognition steps, the face recognition service is divided into two modules:
1. face collection
Face collection is the process of executing face detection and face feature extraction and serializing the extracted feature results into the database. Feature extraction generates an OpenCV cv::Mat object, which is not a traditional two-dimensional matrix object; the cv::FileStorage class is used for serialization, encoding the data into Base64 text for storage, and deserialization is the same process in reverse.
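For illustration, a minimal C++ sketch of this serialization, using cv::FileStorage's in-memory mode with Base64-encoded data as the description outlines. The node name "feature" and the YAML container format are assumptions.

#include <opencv2/core.hpp>
#include <string>

// Serialize a feature cv::Mat into a YAML string whose raw data is
// Base64-encoded, ready to be stored as text in the database.
std::string serializeFeature(const cv::Mat& feature) {
    cv::FileStorage fs("feature.yml",
                       cv::FileStorage::WRITE | cv::FileStorage::MEMORY |
                       cv::FileStorage::BASE64);
    fs << "feature" << feature;
    return fs.releaseAndGetString();
}

// Deserialization is the same process in reverse.
cv::Mat deserializeFeature(const std::string& text) {
    cv::FileStorage fs(text, cv::FileStorage::READ | cv::FileStorage::MEMORY);
    cv::Mat feature;
    fs["feature"] >> feature;
    return feature;
}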
2. Face comparison
Face comparison is divided into two modes, 1:1 and 1:N. In 1:1 comparison, two extracted feature results are compared to obtain double-type data in the range (-∞, 1); a single result greater than 0.6 is considered a match.
1:N identification loops one extracted feature over all the feature data in the library, taking the maximum value greater than 0.6.
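For illustration, a minimal C++ sketch of this 1:N loop. Cosine similarity is used here as one common measure for comparing ResNet descriptors; the patent states only that ResNet returns a similarity result, so the exact metric is an assumption.

#include <opencv2/core.hpp>
#include <vector>

struct MatchResult { int index = -1; double score = 0.0; };

// Compare one probe feature against every feature in the library and
// keep the best score above the preset value (0.6).
MatchResult matchOneToN(const cv::Mat& probe,
                        const std::vector<cv::Mat>& gallery,
                        double threshold = 0.6) {
    MatchResult best;
    for (size_t i = 0; i < gallery.size(); ++i) {
        double s = probe.dot(gallery[i]) /
                   (cv::norm(probe) * cv::norm(gallery[i]) + 1e-12);
        if (s > threshold && s > best.score) {
            best.index = static_cast<int>(i);
            best.score = s;
        }
    }
    return best; // index == -1 means no match above the preset value
}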
Multi-thread implementation:
To fully utilize multi-core resources, the face recognition steps can be divided among different threads. According to the usage scenario, the method is designed with a half-separation mode and a full-separation mode, scheduling different threads to handle one or more of the steps.
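For illustration, a minimal C++ sketch of the half-separation mode: one thread runs preprocessing and pushes frames into a shared buffer, and a second thread consumes them and runs detection, extraction, comparison, and output in sequence. The queue stands in for the shared memory described above; all names are illustrative, and frame capture, the recognition calls, and shutdown handling are elided.

#include <opencv2/core.hpp>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::queue<cv::Mat> sharedFrames;  // stands in for the shared memory
std::mutex mtx;
std::condition_variable frameReady;

void preprocessLoop() {            // thread 1: step (2)
    for (;;) {
        cv::Mat frame;             // grab + convert + scale (elided), yields data A
        {
            std::lock_guard<std::mutex> lock(mtx);
            sharedFrames.push(frame);
        }
        frameReady.notify_one();
    }
}

void recognizeLoop() {             // thread 2: steps (3) through (6)
    for (;;) {
        std::unique_lock<std::mutex> lock(mtx);
        frameReady.wait(lock, [] { return !sharedFrames.empty(); });
        cv::Mat frame = sharedFrames.front();
        sharedFrames.pop();
        lock.unlock();
        // detect -> extract features -> compare 1:N -> output result (elided)
    }
}

int main() {
    std::thread t1(preprocessLoop), t2(recognizeLoop);
    t1.join();
    t2.join();
}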
By adopting the above technical scheme, compared with the prior art, the invention has the following beneficial effects: the scheme provides an OpenCV-based cross-platform offline multithreaded face recognition method. Using OpenCV as the base library makes cross-platform cross compilation convenient and fast; using MTCNN for face detection brings higher accuracy; and multithreading effectively solves the slow feature extraction of ResNet, fully utilizing the performance of the multi-core CPU to optimize the complete offline face recognition algorithm. Two multithreading schemes are provided, to be selected according to the actual conditions of the application environment.
Drawings
The invention will be further explained with reference to the drawings and the detailed description below:
FIG. 1 is a schematic process flow diagram of an MTCNN algorithm in accordance with aspects of the present invention;
FIG. 2 is a schematic flow chart of a process for carrying out embodiment 1 of the present invention;
FIG. 3 is a schematic flow chart of a process for carrying out embodiment 2 of the present invention.
Detailed Description
Example 1
As shown in fig. 2, the cross-platform offline multithreading face recognition method based on OpenCV in the embodiment includes the following steps:
(1) establishing a database: pre-writing face feature data into a database to form a face feature set, wherein the face feature data pre-written into the database is face feature data with label information, and the label information at least comprises personal information corresponding to the face feature data;
(2) preprocessing: image data is acquired and converted into an image of preset format and specification, set as data A. The image data includes at least a picture or a video stream; when the image data is a video stream, it is transcoded into a preset specified format, a picture frame is then taken from the video stream and converted into RGB format, and finally a scaled picture frame of preset specification is generated and written into the shared memory. Preferably, the format of the picture frame in the video stream is converted by calling OpenCV's cv::cvtColor interface, and the format into which the video stream is transcoded is YUV or NV12;
(3) face detection: with reference to fig. 1, this step uses the MTCNN algorithm to classify, frame, and locate the facial features of the face in the image, wherein the MTCNN algorithm comprises a Proposal Network subnet algorithm, a Refine Network subnet algorithm, and an Output Network subnet algorithm. The specific processing includes: the picture is processed by the Proposal Network, Refine Network, and Output Network subnet algorithms, with the result obtained by the Proposal Network subnet algorithm input into the Refine Network subnet algorithm and the result obtained by the Refine Network subnet algorithm input into the Output Network subnet algorithm for further processing, finally obtaining a scaled picture frame of preset specification and feature data. As a preferred embodiment, the Proposal Network subnet algorithm is used to generate candidate boxes (bounding boxes) on the picture;
the Refine Network subnet algorithm is used to remove non-face boxes from the output of the Proposal Network subnet algorithm;
the Output Network subnet algorithm is used for regression of the landmark positions, adjusting the picture of the output result to a 48 x 48 specification; the output contains the 4 coordinate values, the score, and the key point information of a number of candidate boxes. The data finally obtained in this step are the face coordinate frames (bounding boxes) and the face landmark points, both of float type; the detection result is set as data B;
while this step is processed, the currently detected face image can be output in real time for UI display;
(4) feature extraction: calling a ResNet program to perform feature extraction on data B, forming an OpenCV data matrix object cv::Mat, set as data C;
(5) calling a face comparison module program adopting a ResNet network model, comparing the face feature set in the database with the data C processed in step (4) through the program, returning a similarity result, and extracting the comparison result data with the maximum similarity that is greater than a preset value (the preset similarity value is 60%);
(6) outputting the recognition result, i.e., UI display of the result.
In this embodiment, a separate processing thread is called to handle step (2) and output data A to the shared memory, and a separate processing thread is called to sequentially handle steps (3), (4), (5), and (6); that is, two threads handle the corresponding tasks, and both run in parallel.
Example 2
As shown in fig. 3, the cross-platform offline multithreading face recognition method based on OpenCV in the embodiment includes the following steps:
(1) establishing a database: pre-writing face feature data into a database to form a face feature set, wherein the face feature data pre-written into the database is face feature data with label information, and the label information at least comprises personal information corresponding to the face feature data;
(2) preprocessing: image data is acquired and converted into an image of preset format and specification, set as data A. The image data includes at least a picture or a video stream; when the image data is a video stream, it is transcoded into a preset specified format, a picture frame is then taken from the video stream and converted into RGB format, and finally a scaled picture frame of preset specification is generated and written into the shared memory. Preferably, the format of the picture frame in the video stream is converted by calling OpenCV's cv::cvtColor interface, and the format into which the video stream is transcoded is YUV or NV12;
(3) face detection: with reference to fig. 1, this step uses the MTCNN algorithm to classify, frame, and locate the facial features of the face in the image, wherein the MTCNN algorithm comprises a Proposal Network subnet algorithm, a Refine Network subnet algorithm, and an Output Network subnet algorithm. The specific processing includes: the picture is processed by the Proposal Network, Refine Network, and Output Network subnet algorithms, with the result obtained by the Proposal Network subnet algorithm input into the Refine Network subnet algorithm and the result obtained by the Refine Network subnet algorithm input into the Output Network subnet algorithm for further processing, finally obtaining a scaled picture frame of preset specification and feature data. As a preferred embodiment, the Proposal Network subnet algorithm is used to generate candidate boxes (bounding boxes) on the picture;
the Refine Network subnet algorithm is used to remove non-face boxes from the output of the Proposal Network subnet algorithm;
the Output Network subnet algorithm is used for regression of the landmark positions, adjusting the picture of the output result to a 48 x 48 specification; the output contains the 4 coordinate values, the score, and the key point information of a number of candidate boxes. The data finally obtained in this step are the face coordinate frames (bounding boxes) and the face landmark points, both of float type; the detection result is set as data B;
while this step is processed, the currently detected face image can be output in real time for UI display;
(4) feature extraction: calling a ResNet program to perform feature extraction on data B, forming an OpenCV data matrix object cv::Mat, set as data C;
(5) calling a face comparison module program adopting a ResNet network model, comparing the face feature set in the database with the data C processed in step (4) through the program, returning a similarity result, and extracting the comparison result data with the maximum similarity that is greater than a preset value (the preset similarity value is 60%);
(6) outputting the recognition result, i.e., UI display of the result.
In this embodiment, a separate processing thread is called to handle step (2), outputting data A to the shared memory; a separate processing thread is called to handle step (3), outputting data B to the shared memory; and a separate processing thread is called to sequentially handle steps (4), (5), and (6); that is, three threads are used, and all three run in parallel.
The foregoing is directed to embodiments of the present invention; equivalents, modifications, substitutions, and variations that will occur to those skilled in the art fall within the scope and spirit of the appended claims.

Claims (10)

1. A cross-platform offline multithreading face recognition method based on OpenCV, characterized in that it comprises the following steps:
(1) establishing a database: pre-writing the face feature data into a database to form a face feature set;
(2) preprocessing: acquiring image data, converting it into an image of preset format and specification, set as data A;
(3) face detection: classifying, framing, and locating the facial features of the face in the image of data A, set as data B;
(4) feature extraction: calling a ResNet program to perform feature extraction on data B, forming an OpenCV data matrix object cv::Mat, set as data C;
(5) calling a face comparison module program adopting a ResNet network model, comparing the face feature set in the database with the data C processed in step (4) through the face comparison module program, returning a similarity result, and extracting the comparison result data with the maximum similarity that is greater than a preset value;
(6) outputting the recognition result.
2. The OpenCV-based cross-platform offline multithreading face recognition method according to claim 1, characterized in that: in step (1), the face feature data pre-written into the database is face feature data with label information, and the label information at least comprises personal information corresponding to the face feature data.
3. The OpenCV-based cross-platform offline multithreading face recognition method according to claim 1, characterized in that: in step (2), the image data includes at least a picture or a video stream; when the image data is a video stream, it is transcoded into a preset specified format, then a picture frame in the video stream is called and converted into RGB format, and finally a scaled picture frame of preset specification is generated and written into the shared memory.
4. The OpenCV-based cross-platform offline multithreading face recognition method according to claim 3, wherein: the format of the picture frame in the video stream is converted by calling the cv::cvtColor interface of OpenCV.
5. The OpenCV-based cross-platform offline multithreading face recognition method according to claim 3, wherein: the format into which the video stream is transcoded is YUV or NV12.
6. The OpenCV-based cross-platform offline multithreading face recognition method according to claim 3, wherein: in step (3), the MTCNN algorithm is used to classify, frame, and locate the facial features in the picture, wherein the MTCNN algorithm comprises a Proposal Network subnet algorithm, a Refine Network subnet algorithm, and an Output Network subnet algorithm, and the specific processing comprises: the picture is processed by the Proposal Network, Refine Network, and Output Network subnet algorithms, with the result obtained by the Proposal Network subnet algorithm input into the Refine Network subnet algorithm and the result obtained by the Refine Network subnet algorithm input into the Output Network subnet algorithm for further processing, finally obtaining a scaled picture frame of preset specification and feature data.
7. The OpenCV-based cross-platform offline multithreading face recognition method according to claim 6, wherein: the Proposal Network subnet algorithm is used to generate candidate boxes on the picture;
the Refine Network subnet algorithm is used to remove non-face boxes from the output of the Proposal Network subnet algorithm;
the Output Network subnet algorithm is used for regression of the landmark positions, adjusting the picture of the output result to a 48 x 48 specification; the output contains the 4 coordinate values, the score, and the key point information of a number of candidate boxes.
8. The OpenCV-based cross-platform offline multithreading face recognition method according to claim 7, wherein: in the step (5), the preset value of the similarity is 60%.
9. The OpenCV-based cross-platform offline multithreading face recognition method according to one of claims 1 to 8, wherein: a separate processing thread is called to handle step (2) and output data A to the shared memory, and a separate processing thread is called to sequentially handle steps (3), (4), (5), and (6); the two threads run in parallel.
10. The OpenCV-based cross-platform offline multithreading face recognition method according to one of claims 1 to 8, wherein: a separate processing thread is called to handle step (2) and output data A to the shared memory, a separate processing thread is called to handle step (3) and output data B to the shared memory, and a separate processing thread is called to sequentially handle steps (4), (5), and (6); all three threads run in parallel.
CN202010490322.4A 2020-06-02 2020-06-02 Cross-platform off-line multi-thread face recognition method based on OpenCV Pending CN111666866A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010490322.4A CN111666866A (en) 2020-06-02 2020-06-02 Cross-platform off-line multi-thread face recognition method based on OpenCV

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010490322.4A CN111666866A (en) 2020-06-02 2020-06-02 Cross-platform off-line multi-thread face recognition method based on OpenCV

Publications (1)

Publication Number Publication Date
CN111666866A true CN111666866A (en) 2020-09-15

Family

ID=72385455

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010490322.4A Pending CN111666866A (en) 2020-06-02 2020-06-02 Cross-platform off-line multi-thread face recognition method based on OpenCV

Country Status (1)

Country Link
CN (1) CN111666866A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102226908A (en) * 2011-05-30 2011-10-26 苏州两江科技有限公司 Face discrimination method based on MPEG-7
CN102880729A (en) * 2012-11-02 2013-01-16 深圳市宜搜科技发展有限公司 Figure image retrieval method and device based on human face detection and recognition
WO2015037973A1 (en) * 2013-09-12 2015-03-19 Data Calibre Sdn Bhd A face identification method
CN109447053A (en) * 2019-01-09 2019-03-08 江苏星云网格信息技术有限公司 A kind of face identification method based on dual limitation attention neural network model
CN109993061A (en) * 2019-03-01 2019-07-09 珠海亿智电子科技有限公司 A kind of human face detection and tracing method, system and terminal device
CN110399844A (en) * 2019-07-29 2019-11-01 南京图玩智能科技有限公司 It is a kind of to be identified and method for tracing and system applied to cross-platform face key point

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朱晨阳 (Zhu Chenyang) et al., "Research on a Face Auto-Tracking Camera Robot System Based on YOLO3" (《基于YOLO3的人脸自动跟踪摄像机器人系统研究》), Video Engineering (《电视技术》), pages 1-7 *

Similar Documents

Publication Publication Date Title
Yuan et al. VSSA-NET: Vertical spatial sequence attention network for traffic sign detection
CN111860506B (en) Method and device for recognizing characters
WO2021203863A1 (en) Artificial intelligence-based object detection method and apparatus, device, and storage medium
Vaidya et al. Handwritten character recognition using deep-learning
Yaseen et al. Cloud-based scalable object detection and classification in video streams
CN111753727A (en) Method, device, equipment and readable storage medium for extracting structured information
Arkin et al. A survey: object detection methods from CNN to transformer
CN113704531A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
EP3816858A2 (en) Character recognition method and apparatus, electronic device and computer readable storage medium
CN111507354B (en) Information extraction method, device, equipment and storage medium
JP2022091123A (en) Form information extracting method, apparatus, electronic device and storage medium
CN103824075A (en) Image recognition system and method
CN111160387B (en) Graph model based on multi-view dictionary learning
CN116052193A (en) RPA interface dynamic form picking and matching method and system
JP2020009442A (en) Systems, methods, and programs for real-time end-to-end capturing of ink strokes from video
CN112560854A (en) Method, apparatus, device and storage medium for processing image
CN111414889A (en) Financial statement identification method and device based on character identification
CN111666866A (en) Cross-platform off-line multi-thread face recognition method based on OpenCV
CN115601586A (en) Label information acquisition method and device, electronic equipment and computer storage medium
Tong et al. Robust facial expression recognition based on local tri-directional coding pattern
Zhang et al. Research and implementation of database operation recognition based on YOLO v5 algorithm
CN110427920B (en) Real-time pedestrian analysis method oriented to monitoring environment
CN113657230B (en) Method for training news video recognition model, method for detecting video and device thereof
Yu et al. RWYI: Reading What You Are Interested in with a Learning-Based Text Interactive System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination