CN113869210A - Face recognition following method and intelligent device adopting same - Google Patents
- Publication number
- CN113869210A (application number CN202111144329.1A)
- Authority
- CN
- China
- Prior art keywords
- face
- face recognition
- frame
- value
- following method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00—Image analysis › G06T7/20—Analysis of motion › G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/20—Special algorithmic details › G06T2207/20084—Artificial neural networks [ANN]
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/30—Subject of image; Context of image processing › G06T2207/30196—Human being; Person › G06T2207/30201—Face
Abstract
The invention discloses a face recognition following method and an intelligent device using the method. The method comprises: performing face recognition on a face in the field of view, then tracking the recognized face. The principle of the invention is that a person whose face has been recognized is very likely to remain present in the following frames, so following can be achieved by tracking the face without performing the computationally expensive recognition work on every frame.
Description
Technical Field
The invention relates to a face recognition following method and to an intelligent device equipped with the method.
Background
Face recognition is a biometric technology that identifies a person based on facial feature information. It refers to a family of related techniques, commonly also called portrait recognition or facial recognition, in which a camera or video camera collects images or video streams containing faces, the faces in the images are automatically detected and tracked, and recognition is then performed on the detected faces.
For intelligent devices, especially intelligent robots, the purpose of face recognition is not only identification but also following the person after identification. Most existing face recognition algorithms use Dlib to perform real-time face recognition, that is, recognition is run on every frame of video. Although this approach guarantees accuracy, the overhead of repeated recognition is extremely large and causes the system to stutter.
Disclosure of Invention
In view of the above, the present invention provides a face recognition following method and an intelligent device using the method, which replace real-time per-frame face recognition with a recognize-then-track scheme, thereby greatly reducing system overhead and improving system fluency.
To solve the above technical problem, the present invention provides a face recognition following method comprising: performing face recognition on a face in the field of view; then tracking the recognized face. The principle of the invention is that a person whose face has been recognized is very likely to remain present in the following frames, so following can be achieved by tracking the face without performing the computationally expensive recognition work on every frame.
As an improvement, face recognition is performed on the initial frame and tracking is performed on subsequent frames. In the following activity for a given face, only the first frame undergoes face recognition to determine identity; all subsequent frames perform tracking only, reducing system overhead to the greatest extent.
As an improvement, the face recognition comprises: calculating a 128D feature value for each photo face from several pictures of the same face, and calculating the mean of the 128D feature values of all the photo faces as a reference value; then calculating the 128D feature value of a face picture acquired in real time as a comparison value and comparing it with the reference value; if the difference is smaller than a threshold, the person is judged to be the same person, and if it is larger than the threshold, a stranger.
Preferably, calculating the 128D feature value of each photo face from several pictures of the same face comprises: locating the face in the collected video, drawing a face frame centered on the face, and saving the image inside the face frame; then calculating the 128D feature value of the face in each image.
As an improvement, the 128D feature value of each photo face is calculated by a Dlib ResNet-50 deep residual neural network.
Preferably, comparing the comparison value with the reference value comprises calculating the Euclidean distance between the comparison value and the reference value.
As an improvement, tracking according to the face recognition result comprises: setting and registering a feature frame for every face in each frame of the video, and calculating the centroid of each feature frame; then calculating the Euclidean distances between centroids in adjacent frames in turn, and judging the two centroids with the shortest Euclidean distance between adjacent frames as representing the same face.
As an improvement, when a new face appears during tracking, a feature frame is set and registered for it and the feature frame's centroid is calculated; from that frame onward, the Euclidean distances between centroids in adjacent frames are calculated in turn, and the two centroids with the shortest Euclidean distance between adjacent frames are judged to represent the same face; a face whose centroid is far from every centroid in the previous frame is judged to be a new face, which is recognized again and then tracked.
As an improvement, when any face has been absent for more than a threshold number of frames during tracking, the face is deregistered.
The invention also provides intelligent equipment carrying the face recognition following method.
The advantage of the invention is that the face recognition following method with the above steps replaces existing real-time recognition with a recognize-and-track scheme, removing the recognition step that consumes the most resources and greatly increasing system fluency with essentially no loss of accuracy.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
So that those skilled in the art may better understand the technical solutions of the present invention, the invention is described in further detail below with reference to specific embodiments.
As shown in fig. 1, the present invention provides a face recognition following method, which comprises the following steps:
S1, performing face recognition on the face in the field of view; face recognition is performed on the initial frame.
S2, tracking the face in the following frames after face recognition.
For example, suppose two people are present in frame N and are identified by traversal comparison as Zhang San and Li Si. Then, in the subsequent frames of the video stream (N+1, N+2, and so on), if two people are present it can be assumed with high probability that they are still Zhang San and Li Si, and that only their positions in the frame have changed (this is also the precondition for applying object tracking). Therefore, only the correspondence between the two people in frame N and the two people in frame N+1 needs to be determined; there is no need to repeat the feature descriptor extraction and traversal comparison for frame N+1, which would consume a large amount of system resources.
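The recognize-once-then-track idea above can be sketched as follows. This is a minimal illustration with assumed names and coordinates, not the patent's implementation: faces are reduced to centroid points, and the expensive recognizer is assumed to have already labeled the faces in frame N.

```python
import math

def nearest(point, candidates):
    # Shortest Euclidean distance decides the frame-to-frame correspondence.
    return min(candidates, key=lambda c: math.dist(point, c))

def follow(recognized, later_frames):
    # recognized: {name: (x, y)} from the one-off recognition on frame N.
    # later_frames: face positions in frames N+1, N+2, ... as [(x, y), ...].
    identities = {pos: name for name, pos in recognized.items()}
    for frame in later_frames:
        prev_positions = list(identities)
        # Each face in the new frame inherits the identity of the nearest
        # face in the previous frame; no recognition is re-run.
        identities = {pos: identities[nearest(pos, prev_positions)]
                      for pos in frame}
    return identities

final = follow({"Zhang San": (10, 10), "Li Si": (100, 100)},
               [[(12, 11), (98, 103)], [(15, 13), (95, 105)]])
```

Here `follow`, `nearest`, and the coordinates are hypothetical; the point is only that identity is assigned once and then carried forward by cheap distance comparisons.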
The face recognition in step S1 specifically includes the following steps:
S11, calculating a 128D feature value for each photo face from several pictures of the same face, and calculating the mean of the 128D feature values of all the photo faces as a reference value;
S12, calculating the 128D feature value of the face picture collected in real time as a comparison value and comparing it with the reference value: if the difference is smaller than a threshold, the same person is judged; if it is larger than the threshold, a stranger. Specifically, the Euclidean distance between the comparison value and the reference value is calculated.
Step S11 specifically includes:
S111, locating the face in the collected video with the Dlib algorithm, drawing a face frame centered on the face (preferably 480 × 640 in this embodiment), and saving the image inside the face frame;
S112, calculating the 128D feature value of the face in each image. In this embodiment, the 128D feature value of each photo face is calculated by a Dlib ResNet-50 deep residual neural network.
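Steps S11-S12 can be sketched as follows. This is a hypothetical numpy-only illustration: the synthetic random vectors stand in for real Dlib 128D encodings, and the 0.6 threshold is an assumption (it is a commonly used default for Dlib face encodings, but the patent does not fix a value).

```python
import numpy as np

def reference_value(encodings):
    # S11: average the 128D feature values of several photos of one face.
    return np.mean(np.asarray(encodings), axis=0)

def is_same_person(reference, comparison, threshold=0.6):
    # S12: Euclidean distance between the comparison value and the
    # reference value; below the threshold means the same person,
    # above it a stranger.
    return float(np.linalg.norm(np.asarray(comparison) - reference)) < threshold

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)   # stand-in for a known person's encoding
# Several "photos" of the same face: the encoding plus small per-photo noise.
photos = [enrolled + rng.normal(scale=0.01, size=128) for _ in range(5)]
ref = reference_value(photos)
```

A live encoding close to `ref` is accepted as the same person; an unrelated random vector lands far away and is judged a stranger.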
The step S2 of tracking according to the face recognition result specifically includes:
S21, setting and registering a feature frame for every face in each frame of the video, and calculating the centroid of each feature frame;
S22, calculating the Euclidean distances between centroids in adjacent frames in turn, and judging the two centroids with the shortest Euclidean distance between adjacent frames as representing the same face.
If the two faces Zhang San and Li Si are recognized in frame N, both are registered in the system for continued tracking in frame N+1: a feature frame is set for each face and its centroid is calculated, giving two centroids P11 and P12 that represent Zhang San and Li Si respectively. Likewise, a feature frame is set for each face in frame N+2 and the centroids P21 and P22 are calculated. The distances from P11 to P21 and P22, and from P12 to P21 and P22, are then calculated. If the distance from P11 to P21 is smaller than the distance from P11 to P22, P21 is judged to be Zhang San; in the same way, if the distance from P12 to P22 is smaller than the distance from P12 to P21, P22 is judged to be Li Si. The same procedure applies to frame N+n.
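The pairwise-distance decision in this example can be written out directly. The coordinates for P11, P12, P21, and P22 below are assumed purely for illustration:

```python
import math

def match_centroids(prev, curr):
    # prev: identified centroids from frame N+1, e.g. {"Zhang San": P11, ...}
    # curr: centroids P21, P22, ... found in frame N+2.
    # Each known face claims the current centroid at the shortest
    # Euclidean distance, carrying its identity forward.
    return {name: min(curr, key=lambda c: math.dist(p, c))
            for name, p in prev.items()}

p11, p12 = (50, 60), (200, 80)    # frame N+1 centroids (assumed coordinates)
p21, p22 = (54, 62), (196, 83)    # frame N+2 centroids
matches = match_centroids({"Zhang San": p11, "Li Si": p12}, [p21, p22])
```

Because P21 lies closer to P11 than to P12, it is assigned to Zhang San, and symmetrically P22 to Li Si, exactly as in the prose example.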
When a new face appears during tracking, a feature frame is set and registered for it and the feature frame's centroid is calculated; from that frame onward, the Euclidean distances between centroids in adjacent frames are calculated in turn, and the two centroids with the shortest Euclidean distance between adjacent frames are judged to represent the same face. A face whose centroid is far from every centroid in the previous frame is judged to be a new face, which is recognized again and then tracked. For example, if Wang Wu enters the field of view at frame N+3, three centroids P31, P32 and P33 appear. The distances from P21 and P22 to each of P31, P32 and P33 are then calculated; the centroid far from both P21 and P22 corresponds to the new face, which must undergo face recognition again and is then tracked continuously by the method above.
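The new-face rule can be sketched as follows. The distance threshold and all coordinates are assumptions for illustration; the patent does not fix a numeric "far" threshold.

```python
import math

def find_new_faces(prev_centroids, curr_centroids, far_threshold=50.0):
    # A current centroid far from every previous centroid (threshold
    # assumed here) is treated as a newly appeared face that must be
    # sent back through face recognition.
    return [c for c in curr_centroids
            if all(math.dist(c, p) > far_threshold for p in prev_centroids)]

p21, p22 = (54, 62), (196, 83)                     # known faces in frame N+2
p31, p32, p33 = (57, 64), (193, 86), (400, 90)     # frame N+3; Wang Wu enters
new_faces = find_new_faces([p21, p22], [p31, p32, p33])
# new_faces would then be recognized (step S1) and tracked thereafter
```

P31 and P32 sit close to P21 and P22 and keep their identities; P33 is far from both known centroids, so it is flagged as the new face.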
In addition, when any face has been absent for more than a threshold number of frames during tracking, the face is deregistered. For example, if the centroid P52 representing Li Si at frame N+5 has disappeared from view for more than a set number of frames (which depends on the application scenario; the set number may be 1 to n frames), Li Si is removed from the system and is no longer tracked.
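The deregistration rule can be sketched with a small registry. The `max_missing` parameter is an assumed stand-in for the patent's application-dependent frame threshold:

```python
class FaceRegistry:
    # Tracks how many consecutive frames each registered face has been
    # unseen; past the threshold the face is deregistered.
    def __init__(self, max_missing):
        self.max_missing = max_missing
        self.missing = {}            # name -> consecutive frames unseen

    def register(self, name):
        self.missing[name] = 0

    def update(self, names_seen_this_frame):
        for name in list(self.missing):
            if name in names_seen_this_frame:
                self.missing[name] = 0       # seen again: reset the counter
            else:
                self.missing[name] += 1
                if self.missing[name] > self.max_missing:
                    del self.missing[name]   # deregister, stop tracking

    def tracked(self):
        return set(self.missing)

reg = FaceRegistry(max_missing=2)
reg.register("Zhang San")
reg.register("Li Si")
for _ in range(3):                  # Li Si leaves the view for 3 frames
    reg.update({"Zhang San"})
```

After three consecutive absent frames Li Si exceeds the threshold of 2 and drops out of the registry, while Zhang San remains tracked.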
The invention also provides an intelligent device equipped with the above face recognition following method; by means of the face following method, the device can perform functions such as following transport and escort.
The above is only a preferred embodiment of the present invention, and it should be noted that the above preferred embodiment should not be considered as limiting the present invention, and the protection scope of the present invention should be subject to the scope defined by the claims. It will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the spirit and scope of the invention, and these modifications and adaptations should be considered within the scope of the invention.
Claims (10)
1. A face recognition following method is characterized by comprising the following steps:
performing face recognition on a face in the field of view;
and tracking the face after face recognition.
2. The face recognition following method according to claim 1, wherein:
the face recognition is carried out in the initial frame, and the tracking is carried out in the subsequent frame.
3. The face recognition following method according to claim 1, wherein the face recognition comprises:
calculating a 128D feature value for each photo face from a plurality of pictures of the same face, and calculating the mean of the 128D feature values of all the photo faces as a reference value;
and calculating the 128D feature value of a face picture acquired in real time as a comparison value, comparing the comparison value with the reference value, judging the same person if the difference is smaller than a threshold, and judging a stranger if the difference is larger than the threshold.
4. The face recognition following method according to claim 3, wherein the calculating the 128D feature value of each photo face from a plurality of pictures of the same face comprises:
positioning a face in the collected video, drawing a face frame by taking the face as a center, and storing an image in the face frame;
the 128D feature values of the faces in each image are calculated.
5. The face recognition following method according to claim 3, wherein the 128D feature value of each photo face is calculated by a Dlib ResNet-50 deep residual neural network.
6. The face recognition following method according to claim 3, wherein comparing the comparison value with the reference value comprises calculating the Euclidean distance between the comparison value and the reference value.
7. The face recognition following method according to claim 1, wherein the tracking according to the face recognition result comprises:
setting and registering a feature frame for every face in each frame of the video, and calculating the centroid of each feature frame;
and calculating the Euclidean distances between centroids in adjacent frames in turn, and judging the two centroids with the shortest Euclidean distance between adjacent frames as representing the same face.
8. The face recognition following method according to claim 7, wherein: when a new face appears during tracking, a feature frame is set and registered for the new face and its centroid is calculated; from that frame onward, the Euclidean distances between centroids in adjacent frames are calculated in turn, and the two centroids with the shortest Euclidean distance between adjacent frames are judged to represent the same face; and a face whose centroid is far from every centroid in the previous frame is judged to be a new face, which is recognized again and then tracked.
9. The face recognition following method according to claim 7, wherein: when any face has been absent for more than a threshold number of frames during tracking, the face is deregistered.
10. An intelligent device equipped with the face recognition following method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111144329.1A CN113869210A (en) | 2021-09-28 | 2021-09-28 | Face recognition following method and intelligent device adopting same |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113869210A true CN113869210A (en) | 2021-12-31 |
Family
ID=78992002
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111144329.1A Pending CN113869210A (en) | 2021-09-28 | 2021-09-28 | Face recognition following method and intelligent device adopting same |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113869210A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |