CN114898443A - Face data acquisition method and device - Google Patents

Face data acquisition method and device

Info

Publication number
CN114898443A
CN114898443A (application CN202210630438.2A)
Authority
CN
China
Prior art keywords
face
information
filtering
image
face data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210630438.2A
Other languages
Chinese (zh)
Inventor
黄业桃
樊雨茂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Madv Technology Co ltd
Original Assignee
Beijing Madv Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Madv Technology Co ltd
Priority to CN202210630438.2A
Publication of CN114898443A
Legal status: Pending

Classifications

    • G06V 40/168 — Human faces: Feature extraction; Face representation
    • G06V 40/172 — Human faces: Classification, e.g. identification
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; Connectivity analysis
    • G06V 10/763 — Clustering: non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V 10/764 — Pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/82 — Pattern recognition or machine learning using neural networks
    • G06N 3/045 — Neural networks: Combinations of networks
    • G06N 3/08 — Neural networks: Learning methods

Abstract

The invention discloses a face data acquisition method and device. A cloud server acquires image or video information to be processed and carries out the following steps. Step S1: detect faces in the image or video information to be processed and extract the detected face information; the face information comprises face position coordinates and face key-point coordinates. Step S2: filter the extracted face information according to preset filtering conditions. Step S3: perform face feature extraction and face quality scoring on the filtered face information, and store the corresponding face data. Step S4: cluster the face data according to the extracted face features, and select the optimal image of each class of face data according to the face quality score. The invention can capture and recognize the faces of check-in guests without requiring their active cooperation, can filter out unrelated passers-by, and improves the accuracy and convenience of guest identity verification.

Description

Face data acquisition method and device
Technical Field
The invention relates to the technical field of image processing and face recognition, in particular to a face data acquisition method and device.
Background
In recent years, with the rise of the sharing economy, more and more travelers choose economical and practical homestays when going out, and many owners of idle properties rent them out as short-term lodgings.
Because homestay check-in, unlike a hotel, does not strictly verify guest identity, the person who booked online and the person who actually checks in are often not the same. This gives lawbreakers an opening and leaves loopholes in the public-security management of personnel flow.
At present, the main identity-verification methods for homestay guests on the market are: (1) manual verification, where the landlord verifies the guest on site at check-in, which wastes time and labor; (2) face verification on a mobile terminal, where the guest scans his or her face on a mobile phone before check-in; this cannot completely solve the problem of the person and the ID not matching, since lawbreakers can still check in using other people's information; (3) face-recognition access control on the door itself, which is expensive and usually requires modifying the existing door.
Disclosure of Invention
The invention aims to provide a face data acquisition method and device that can detect and recognize both check-in guests and passers-by, filter out unrelated persons, and improve the accuracy and convenience of guest identity verification.
In order to solve the technical problem, the invention provides a face data acquisition method, wherein a cloud server acquires image or video information to be processed, and performs the following steps:
step S1, detecting human faces based on the image or video information to be processed, and extracting the detected face information; the face information comprises face position coordinate information and face key point coordinate information;
step S2, filtering the extracted face information according to preset filtering conditions;
step S3, performing face feature extraction and face quality scoring on the filtered face information, and storing the corresponding face data;
and step S4, clustering the face data according to the extracted face features, and obtaining the optimal image of each class of face data according to the face quality score.
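The four steps above can be sketched as a single pipeline. The following is a minimal illustration only, not the patented implementation; the detect, keep, encode_and_score and cluster callables are placeholders standing in for the concrete models described later in the embodiments:

```python
from typing import Any, Callable, Dict, List

def acquire_face_data(frames: List[Any],
                      detect: Callable[[Any], List[Dict]],
                      keep: Callable[[Dict], bool],
                      encode_and_score: Callable[[Dict], Dict],
                      cluster: Callable[[List[Dict]], List[List[Dict]]]) -> List[Dict]:
    """Steps S1-S4: detect faces, filter them, encode and score the
    survivors, then keep the highest-scoring face of each cluster."""
    face_list: List[Dict] = []
    for frame in frames:
        for face in detect(frame):                    # step S1: face detection
            if not keep(face):                        # step S2: filtering conditions
                continue
            face_list.append(encode_and_score(face))  # step S3: features + quality score
    # step S4: cluster by face feature, pick the best image per class
    return [max(group, key=lambda f: f["score"]) for group in cluster(face_list)]
```

Any detector, filter set, scorer and clustering routine with these shapes can be plugged in.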
Preferably, the step S2, filtering the extracted face information according to a preset filtering condition, includes:
step S21, calculating the pixel area of the face image through the face position coordinate information, calculating the ratio of the face pixels to the pixels of the whole picture, and filtering out the corresponding face information if the ratio is smaller than a preset ratio threshold;
step S22, calculating the face yaw angle through the face key point coordinate information, and filtering out the corresponding face information if the yaw angle exceeds a preset range;
step S23, detecting whether a specified target appears in the picture of the face information by adopting a target detection model, and filtering out the corresponding face information if it does; wherein the specified target comprises a food-delivery helmet and/or uniform.
Preferably, the step S3, performing face feature extraction and face quality scoring on the filtered face information, and storing corresponding face data, includes:
calculating face feature codes of the face information by adopting a face recognition model;
calculating face quality scores of the face information by adopting a quality evaluation model;
and storing the face feature codes, the face quality scores and the images as corresponding face data.
Preferably, in step S4, the clustering the face data according to the extracted face features, and obtaining an optimal image of each type of face data according to the face quality score includes:
clustering the face data according to the face feature codes;
and taking the image of the face data with the highest face quality score in each type of face data as the optimal image of the type of face data.
Preferably, before the cloud server obtains the image or video information to be processed, the method further includes step S0: when a sensor at the door senses that a person appears or passes by, a camera is triggered to start recording a video, and recorded video information is uploaded to the cloud server, or frames of the recorded video are extracted at fixed time intervals, and the extracted image information is uploaded to the cloud server.
The invention also provides a face data acquisition device, which comprises an intelligent doorbell and a cloud server, wherein,
the intelligent doorbell further comprises: the device comprises a sensor, a camera and a first communication module;
the sensor is used for triggering the camera to start recording videos when sensing that a person appears or the person passes by;
the first communication module is used for uploading the video information recorded by the camera to the cloud server;
the cloud server further comprises: a second communication module, a face detection module, a filtering processing module and a face recognition module,
the second communication module is used for receiving the video information uploaded by the intelligent doorbell;
the face detection module is used for detecting the face and extracting the detected face information; the face information comprises face position coordinate information and face key point coordinate information;
the filtering processing module is used for filtering the extracted face information according to a preset filtering condition and then sending the face information to the face recognition module;
the face recognition module is used for carrying out face feature extraction and face quality grading on the filtered face information and storing corresponding face data; and clustering the face data according to the extracted face features, and obtaining the optimal image of each type of face data according to the face quality score.
Preferably, the filtering processing module filters the extracted face information according to the following method:
calculating the pixel area of the face image according to the face position coordinate information, calculating the ratio of the face pixels to the pixels of the whole picture, and filtering out the corresponding face information if the ratio is smaller than a preset ratio threshold;
calculating the face yaw angle according to the face key point coordinate information, and filtering out the corresponding face information if the yaw angle exceeds a preset range;
detecting whether a specified target appears in the picture of the face information by adopting a target detection model, and filtering out the corresponding face information if it does; wherein the specified target comprises a food-delivery helmet and/or uniform.
Preferably, the face recognition module is configured to perform face feature extraction and face quality scoring on the filtered face information according to the following modes, and store corresponding face data:
calculating the face feature code of the face information by adopting a face recognition model;
calculating face quality scores of the face information by adopting a quality evaluation model;
and storing the face feature codes, the face quality scores and the images as corresponding face data.
Preferably, the face recognition module is configured to cluster the face data according to the following method, and obtain an optimal image of each type of face data according to the face quality score:
clustering the face data according to the face feature codes;
and taking the image of the face data with the highest face quality score in each type of face data as the optimal image of the type of face data.
The technical scheme of the invention has the following beneficial effects:
1. The face data acquisition method and device provided by the invention address the problems of existing homestay check-in by detecting and recognizing both guests and passers-by and filtering out unrelated persons, which improves the accuracy of guest identity verification and greatly improves the safety of monitoring and managing transient personnel;
2. In the face data acquisition method and device provided by the invention, the intelligent doorbell captures faces without the user noticing. The whole capture process requires no active cooperation, unlike traditional face-recognition access control, which avoids a cumbersome entry-authentication procedure, improves the convenience of identity verification, and gives a better user experience;
3. In the face data acquisition device provided by the invention, the intelligent doorbell only needs a basic camera for shooting and recording plus a communication module; face detection, filtering, recognition and optimal-image selection are all performed by the cloud server. One intelligent doorbell costs only two to three hundred yuan, far less than the thousands of yuan for access-control equipment with built-in face recognition, which greatly reduces costs for many landlords. Moreover, installing the intelligent doorbell requires no modification of the existing door: it is mounted with 3M adhesive tape in a simple peel-and-stick manner, and the doorbell is battery-powered, requiring no external cable.
Drawings
Fig. 1 is a schematic diagram of main processing steps of a cloud server of a face data acquisition method according to the present invention;
fig. 2 is a schematic diagram of a main processing flow of a cloud server in the face data acquisition method according to the first embodiment of the present invention;
fig. 3 is a schematic composition diagram of a face data acquisition apparatus according to a second embodiment of the present invention.
Detailed Description
The face data acquisition method and device of the invention use an intelligent doorbell as the personnel monitoring and capture device. In a local-plus-cloud cooperative mode, face images and/or videos of persons at the door are captured automatically, unrelated passers-by and delivery personnel are filtered out automatically, and the selected face data can then be sent to the landlord for identity verification, or compared against a public-security face database, improving the safety and convenience of homestay check-in.
As shown in fig. 1, the method for acquiring face data provided by the present invention includes: the cloud server acquires image or video information to be processed, and performs the following steps:
step S1, detecting human faces based on the image or video information to be processed, and extracting the detected face information; the face information comprises face position coordinate information and face key point coordinate information;
step S2, filtering the extracted face information according to preset filtering conditions;
step S3, performing face feature extraction and face quality scoring on the filtered face information, and storing the corresponding face data;
and step S4, clustering the face data according to the extracted face features, and obtaining the optimal image of each class of face data according to the face quality score.
Further, in the step S2, the filtering of the extracted face information according to a preset filtering condition may specifically include at least one of the following manners or any combination thereof:
The first filtering mode: calculating the pixel area of the face image according to the face position coordinate information, calculating the ratio of the face pixels to the pixels of the whole picture, and filtering out the corresponding face information if the ratio is smaller than a preset ratio threshold;
The second filtering mode: calculating the face yaw angle according to the face key point coordinate information, and filtering out the corresponding face information if the yaw angle exceeds a preset range;
The third filtering mode: detecting whether a specified target appears in the picture of the face information by adopting a target detection model, and filtering out the corresponding face information if it does; wherein the specified target comprises a food-delivery helmet and/or uniform.
Further, in step S3, the face feature extraction and the face quality score are performed on the filtered face information, and corresponding face data are stored, including:
calculating the face feature code of the face information by adopting a face recognition model;
calculating face quality scores of the face information by adopting a quality evaluation model;
and storing the face feature codes, the face quality scores and the images as corresponding face data.
Further, in step S4, clustering the face data according to the extracted face features, and obtaining an optimal image of each type of face data according to the face quality score, includes:
clustering the face data according to the face feature codes;
and taking the image of the face data with the highest face quality score in each type of face data as the optimal image of the type of face data.
Further, before the cloud server obtains the image or video information to be processed, the method further includes step S0: when a sensor at the door senses that a person appears or passes by, a camera is triggered to start recording a video, and recorded video information is uploaded to the cloud server, or frames of the recorded video are extracted at fixed time intervals, and the extracted image information is uploaded to the cloud server.
The embodiments of the present invention will be described in further detail with reference to the drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
Example one
As shown in fig. 2, in the face data acquisition method according to the first embodiment, when a person appears at the door, a PIR (passive infrared) sensor on the intelligent doorbell is triggered, the doorbell is woken to start recording a video, and the video is automatically uploaded to the cloud server; the cloud server then performs the following main steps:
step 100, the cloud server starts when receiving the uploaded video;
step 101, extracting frames from the uploaded video at fixed intervals (for example, 1 s), and sequentially performing the subsequent processing on each extracted image;
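Sampling one frame per fixed interval reduces to an index computation over the clip. The helper below is an illustrative sketch, not part of the patent; the fps and clip length are assumed inputs:

```python
def frame_indices(total_frames: int, fps: float, interval_s: float = 1.0) -> list:
    """Indices of the frames sampled every `interval_s` seconds from a clip
    of `total_frames` frames recorded at `fps` frames per second."""
    step = max(1, round(fps * interval_s))  # frames between two samples
    return list(range(0, total_frames, step))
```

For a 4 s clip at 25 fps, this yields the indices 0, 25, 50 and 75, one image per second.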
102, detecting a human face based on the extracted image, judging whether a human face appears in the image, if so, outputting human face information, and carrying out the next step 103; if not, returning;
Preferably, the face information in the first embodiment specifically includes the position coordinates of the face and the coordinates of five face key points (the two eyes, the nose tip, and the two mouth corners).
Preferably, in the first embodiment, an MTCNN (Multi-task Cascaded Convolutional Networks) model is used to perform face detection on the image and determine whether a face appears in it; if so, the model outputs the face position coordinates and the five key point coordinates.
The MTCNN model is a classic face detection model that detects face position coordinates and key point coordinates in a picture through three cascaded networks.
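The detector output described above (one bounding box plus five key points per face) can be held in a small record type. The field and key names below are illustrative, not prescribed by the patent:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class FaceInfo:
    """One detected face: position box and five named key points
    (left_eye, right_eye, nose, mouth_left, mouth_right)."""
    box: Tuple[float, float, float, float]      # (x1, y1, x2, y2) in frame pixels
    keypoints: Dict[str, Tuple[float, float]]   # key point name -> (x, y)

    @property
    def width(self) -> float:
        return self.box[2] - self.box[0]

    @property
    def height(self) -> float:
        return self.box[3] - self.box[1]
```

The later filtering steps only need the box (for the area ratio) and the key points (for the yaw estimate), so this record is sufficient to carry a face through the pipeline.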
Step 103, calculating the ratio of the face picture, judging whether the ratio meets a preset ratio threshold value, and if so, continuing to execute step 104; otherwise, returning;
In step 103, whether the predetermined ratio threshold is met is judged as follows: the number of face pixels (width multiplied by height) is obtained from the face position coordinates of step 102, then the face picture ratio X of the face pixels to the whole-image pixels (1920 × 1080) is calculated, and whether X ≥ 0.04 is judged; if so, execution continues; otherwise, the process returns;
Preferably, in this step, persons a certain distance away from the door are filtered out by calculating the face picture ratio according to the predetermined first filtering mode. The ratio threshold 0.04 is the minimum ratio measured when a tester stands 1 m in front of the intelligent doorbell in an actual scene: X ≥ 0.04 indicates that the person is standing within 1 m of the doorbell, while X < 0.04 indicates that the person is merely passing by or has not come up to the door, so the corresponding face information can be filtered out. In other embodiments of the invention, the face picture ratio threshold may be adjusted according to the actual parameters of different doorbells.
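The first filtering mode can be sketched as follows. The 1920 × 1080 frame size and the 0.04 threshold come from the embodiment above; the function names are illustrative:

```python
def face_area_ratio(box, frame_w: int = 1920, frame_h: int = 1080) -> float:
    """Ratio X of face pixels (box width x height) to whole-frame pixels."""
    x1, y1, x2, y2 = box
    return ((x2 - x1) * (y2 - y1)) / (frame_w * frame_h)

def passes_ratio_filter(box, threshold: float = 0.04,
                        frame_w: int = 1920, frame_h: int = 1080) -> bool:
    """Keep only faces with X >= threshold, i.e. persons within ~1 m of the doorbell."""
    return face_area_ratio(box, frame_w, frame_h) >= threshold
```

A 384 × 216 face box in a 1920 × 1080 frame gives X = 0.04 exactly and is kept; a smaller box is filtered out as a passer-by.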
Step 104, performing face yaw detection based on the face key point coordinates, if the yaw angle is in a preset range, continuing to execute the next step 105, otherwise, returning;
through the aforementioned step 103, information on persons standing within 1m in front of the door can be obtained. In this step, further filtering is performed by calculating the face deviation angle according to a predetermined second filtering manner, considering that the passing person still meets the above condition.
Preferably, in the first embodiment of the present invention, the yaw angle yaw is selected from yaw (yaw angle), pitch (pitch), roll (roll), and the like for calculating the face attitude angle, in consideration of the fact that other passing unrelated persons may enter the range of 1m in front of the door in practical application, but in this case, the passing unrelated persons often appear as a side face, therefore, in the first embodiment, the face yaw (face attitude yaw) of the face is calculated through the face key point coordinates obtained in step 102, and if the yaw is within plus or minus 45 degrees (the predetermined deviation angle range may be adjusted according to practical situations), step 105 is performed, otherwise, the process returns.
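The patent does not specify how yaw is computed from the five key points. One common lightweight heuristic estimates it from the horizontal offset of the nose relative to the midpoint of the eyes, normalized by the inter-eye distance; the sketch below is such an approximation under that assumption, not the patented computation:

```python
import math

def estimate_yaw_deg(left_eye, right_eye, nose) -> float:
    """Rough yaw estimate in degrees: 0 for a frontal face, large magnitude
    when the nose sits far to one side of the eye midpoint (a side face)."""
    mid_x = (left_eye[0] + right_eye[0]) / 2.0
    eye_dist = math.hypot(right_eye[0] - left_eye[0], right_eye[1] - left_eye[1])
    if eye_dist == 0:
        return 90.0  # degenerate key points: treat as fully turned away
    offset = (nose[0] - mid_x) / eye_dist  # normalized horizontal deviation
    return max(-90.0, min(90.0, offset * 90.0))

def passes_yaw_filter(left_eye, right_eye, nose, limit_deg: float = 45.0) -> bool:
    """Keep only faces whose estimated yaw is within +/- limit_deg."""
    return abs(estimate_yaw_deg(left_eye, right_eye, nose)) <= limit_deg
```

A production system would typically fit a 3D head-pose model to the key points instead; this heuristic only illustrates the +/- 45 degree filtering decision.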
Step 105, judging whether the person is a delivery person; if not, continuing to the next step 106; otherwise, returning;
Through the foregoing step 104, information on persons standing within 1 m in front of the door and facing the door has been obtained. Considering that in practical applications a delivery person may also meet these conditions, in this step whether the person is a delivery person is judged according to the predetermined third filtering mode.
Specifically, the first embodiment adopts a target detection model (for example, YOLOv5) to detect whether a specified target appears in the picture. In the invention, a large number of data samples of food-delivery helmets and uniforms are collected in advance; after training, the model can judge whether the person in the picture wears a delivery helmet or uniform. If a delivery helmet and/or uniform is detected, the corresponding face information is filtered out and the process returns; if not, the subsequent steps are performed.
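Given the detector's output, the third filtering mode reduces to a label check. The class names and confidence threshold below are assumptions for illustration; they depend on how the delivery-gear detector was trained:

```python
# Hypothetical class names for a detector trained on delivery-gear samples.
DELIVERY_LABELS = {"delivery_helmet", "delivery_uniform"}

def is_delivery_person(detections, labels=DELIVERY_LABELS,
                       min_conf: float = 0.5) -> bool:
    """True if the object detector reported a delivery helmet or uniform with
    sufficient confidence. `detections` is a list of (label, confidence)
    pairs, e.g. parsed from YOLOv5 output."""
    return any(lbl in labels and conf >= min_conf for lbl, conf in detections)
```

A face whose frame yields is_delivery_person(...) == True is filtered out; all other faces continue to step 106.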
Step 106: performing face recognition on the filtered face information to obtain face data, and continuing to the next step 107;
Specifically, after the foregoing steps 102 to 105 it has been roughly established that the person appearing in front of the door may enter the room. In step 106, a face feature code (embedding) feature is calculated by the face feature extraction model ArcFace, and a face quality score is then calculated by the quality evaluation model SDD-FIQA.
The face feature code, the face quality score, and the original frame image corresponding to the face are then stored as face data in a cache list face_list.
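Caching one record into face_list can be sketched as follows; the dictionary field names are illustrative, and the embedding and score are assumed to come from the feature-extraction and quality models described above:

```python
def cache_face(face_list: list, embedding, quality_score: float, frame_image) -> list:
    """Append one face record (feature embedding, quality score, source frame)
    to the cache list used later for clustering and best-image selection."""
    face_list.append({
        "embedding": embedding,   # feature code from the face recognition model
        "score": quality_score,   # score from the quality evaluation model
        "frame": frame_image,     # original frame this face was detected in
    })
    return face_list
```

Each record keeps everything the clustering step needs, so the original video never has to be re-read.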
Step 107: judging whether the current frame is the last frame, if so, continuing to the next step 108, otherwise, returning to the step 101;
step 108, carrying out cluster analysis on the face data in the cache list;
Through the face detection and filtering of steps 102 to 106, a cache list face_list holding all face information that meets the filtering conditions is obtained. At this point face_list may contain multiple faces of multiple persons, and in the end only one clearest face is needed per person, so the faces are clustered according to the face features to distinguish different persons.
Preferably, step 108 adopts the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) clustering method, so that the faces of each person are grouped into one class. In brief, clustering means grouping data with similar characteristics within a data set into one class; by performing cluster analysis on the face features, the invention distinguishes different persons.
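A minimal pure-Python sketch of density-based grouping over cosine distance is shown below. It is a simplified stand-in for DBSCAN (it expands clusters by transitive closure without a min-samples rule), and the eps value is an assumption; a production system would use a full DBSCAN implementation such as scikit-learn's:

```python
def cosine_dist(a, b) -> float:
    """1 - cosine similarity between two non-zero embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return 1.0 - dot / (na * nb)

def cluster_faces(embeddings, eps: float = 0.4) -> list:
    """Assign a cluster label to each embedding: vectors reachable through a
    chain of neighbors within distance eps share a label (one label per person)."""
    n = len(embeddings)
    labels = [-1] * n
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        labels[i] = cluster
        stack = [i]
        while stack:                      # expand the cluster transitively
            j = stack.pop()
            for k in range(n):
                if labels[k] == -1 and cosine_dist(embeddings[j], embeddings[k]) <= eps:
                    labels[k] = cluster
                    stack.append(k)
        cluster += 1
    return labels
```

Two near-identical embeddings land in the same cluster, while an orthogonal one is assigned a new label.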
Step 109, obtaining an optimal image of each type of face information;
specifically, in step 109, after performing cluster analysis on the face data, an original frame image corresponding to a face with the highest score in each type of face data is obtained.
And step 110, ending.
By adopting the above face data acquisition method, the clearest image of each person entering and leaving the homestay can be acquired and then sent to the landlord for identity verification, or compared against a public-security face database for authentication.
Example two
As shown in fig. 3, the face data acquiring apparatus of the second embodiment mainly includes: an intelligent doorbell and a cloud server,
wherein, intelligent doorbell further includes: the device comprises a sensor, a camera and a first communication module;
the sensor is used for triggering the camera to start recording videos when sensing that a person appears or the person passes by;
the first communication module is used for uploading the video information recorded by the camera to the cloud server.
Wherein, the cloud server further includes: a second communication module, a face detection module, a filtering processing module and a face recognition module,
the second communication module is used for receiving the video information uploaded by the first communication module;
the face detection module is used for detecting the face and extracting the detected face information; the face information comprises face position coordinate information and face key point coordinate information;
the filtering processing module is used for filtering the extracted face information according to a preset filtering condition and then sending the face information to the face recognition module;
the face recognition module is used for carrying out face feature extraction and face quality grading on the filtered face information and storing corresponding face data; and clustering the face data according to the extracted face features, and obtaining the optimal image of each type of face data according to the face quality score.
Further, the filtering processing module filters the extracted face information according to the following modes:
calculating the pixel area of the face image according to the face position coordinate information, calculating the ratio of the face pixels to the pixels of the whole picture, and filtering out the corresponding face information if the ratio is smaller than a preset ratio threshold;
calculating the face yaw angle according to the face key point coordinate information, and filtering out the corresponding face information if the yaw angle exceeds a preset range;
detecting whether a specified target appears in the picture of the face information by adopting a target detection model, and filtering out the corresponding face information if it does; wherein the specified target comprises a food-delivery helmet and/or uniform.
Further, the face recognition module is configured to perform face feature extraction and face quality scoring on the filtered face information according to the following modes, and store corresponding face data:
calculating the face feature code of the face information by adopting a face recognition model;
calculating face quality scores of the face information by adopting a quality evaluation model;
and storing the face feature codes, the face quality scores and the images as corresponding face data.
Further, the face recognition module is configured to cluster the face data according to the following manner, and obtain an optimal image of each type of face data according to the face quality score:
clustering the face data according to the face feature codes;
and taking the image of the face data with the highest face quality score in each type of face data as the optimal image of the type of face data.
The embodiments of the present invention have been presented for purposes of illustration and description, and are not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (9)

1. A face data acquisition method, characterized in that a cloud server acquires image or video information to be processed and performs the following steps:
step S1, detecting human faces in the image or video information to be processed, and extracting the detected face information; the face information comprises face position coordinate information and face key point coordinate information;
step S2, filtering the extracted face information according to preset filtering conditions;
step S3, performing face feature extraction and face quality scoring on the filtered face information, and storing the corresponding face data;
and step S4, clustering the face data according to the extracted face features, and obtaining the optimal image of each type of face data according to the face quality scores.
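Steps S1 to S4 above can be sketched as a single orchestration in which every stage is injected, since claim 1 fixes the flow but not the concrete detector, filter, encoder or clustering algorithm; the record fields and callback shapes below are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class FaceRecord:
    feature: Any   # face feature code (step S3)
    score: float   # face quality score (step S3)
    image: Any     # associated face image

def acquire_faces(frames,
                  detect: Callable,    # S1: frame -> list of face infos
                  keep: Callable,      # S2: face info -> bool (filter)
                  encode: Callable,    # S3: face info -> (feature, score, image)
                  cluster: Callable):  # S4: records -> list of record groups
    """End-to-end sketch of steps S1-S4 of claim 1."""
    records: List[FaceRecord] = []
    for frame in frames:
        for face in detect(frame):        # S1: detect and extract face info
            if not keep(face):            # S2: apply preset filtering conditions
                continue
            feature, score, image = encode(face)  # S3: feature code + quality score
            records.append(FaceRecord(feature, score, image))
    clusters = cluster(records)           # S4: cluster by extracted features
    # S4 continued: the optimal image is the highest-scoring member per cluster.
    return [max(c, key=lambda r: r.score).image for c in clusters]
```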
2. The face data acquisition method according to claim 1,
wherein the step S2 of filtering the extracted face information according to preset filtering conditions comprises:
step S21, calculating the pixel area of the face image from the face position coordinate information, calculating the ratio of the face image's pixel area to that of the whole picture, and filtering out the corresponding face information if the ratio is greater than a preset ratio threshold;
step S22, calculating a face yaw angle from the face key point coordinate information, and filtering out the corresponding face information if the face yaw angle exceeds a preset range;
step S23, detecting, with a target detection model, whether a specified target appears in the picture of the face information, and filtering out the corresponding face information if it does; wherein the specified target comprises a take-away (food-delivery) helmet and/or clothing.
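The three filter conditions of steps S21-S23 can be sketched as one predicate. The thresholds, the `face` dictionary layout, and the injected delivery-gear detector are all assumptions; the claim specifies only the comparisons, not the values or the model:

```python
def pass_filters(face, frame_w, frame_h,
                 max_area_ratio=0.5, max_yaw_deg=30.0,
                 detect_delivery_gear=lambda image: False):
    """Returns True if the face survives the S21-S23 filters.
    `face` is assumed to carry 'box' = (x1, y1, x2, y2), a precomputed
    'yaw' in degrees, and an 'image' crop for the target detector."""
    x1, y1, x2, y2 = face["box"]
    # S21: face area as a fraction of the whole picture.
    area_ratio = ((x2 - x1) * (y2 - y1)) / float(frame_w * frame_h)
    if area_ratio > max_area_ratio:
        return False
    # S22: pose filter on the yaw angle.
    if abs(face["yaw"]) > max_yaw_deg:
        return False
    # S23: drop faces whose picture shows a take-away helmet and/or clothing.
    if detect_delivery_gear(face["image"]):
        return False
    return True
```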
3. The face data acquisition method according to claim 1 or 2,
wherein the step S3 of performing face feature extraction and face quality scoring on the filtered face information and storing the corresponding face data comprises:
calculating the face feature code of the face information using a face recognition model;
calculating the face quality score of the face information using a quality evaluation model;
and storing the face feature code, the face quality score and the image as the corresponding face data.
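The claim says the feature code, quality score and image are stored together but leaves the storage unspecified; one plausible sketch persists each record in an SQLite table (the schema is an assumption, not part of the claim):

```python
import pickle
import sqlite3

def store_face(conn, feature, score, image_bytes):
    """Persist one face record (feature code, quality score, image)
    as in step S3; schema is illustrative only."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS faces "
        "(id INTEGER PRIMARY KEY, feature BLOB, score REAL, image BLOB)")
    conn.execute(
        "INSERT INTO faces (feature, score, image) VALUES (?, ?, ?)",
        (pickle.dumps(feature), score, image_bytes))
    conn.commit()

def load_faces(conn):
    """Read back all stored (feature, score, image) records."""
    rows = conn.execute("SELECT feature, score, image FROM faces").fetchall()
    return [(pickle.loads(f), s, img) for f, s, img in rows]
```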
4. The face data acquisition method according to claim 3,
wherein the step S4 of clustering the face data according to the extracted face features and obtaining the optimal image of each type of face data according to the face quality scores comprises:
clustering the face data according to the face feature codes;
and taking the image of the face data with the highest face quality score in each type of face data as the optimal image of the type of face data.
5. The face data acquisition method according to claim 1,
before the cloud server acquires the image or video information to be processed, the method further comprises step S0: when a sensor at the door senses that a person appears or passes by, a camera is triggered to start recording video, and the recorded video information is uploaded to the cloud server; alternatively, frames are extracted from the recorded video at fixed time intervals and the extracted image information is uploaded to the cloud server.
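The fixed-interval frame extraction of step S0 reduces to sampling one frame every N decoded frames. A minimal sketch over a generic frame iterable follows; in practice the frames would come from a video decoder (for example OpenCV's `cv2.VideoCapture`), and the interval value is an assumption:

```python
def sample_frames(frames, fps, interval_s=1.0):
    """Yield one frame every `interval_s` seconds from a stream decoded
    at `fps` frames per second -- the fixed-interval extraction of S0.
    `frames` can be any iterable of decoded frames."""
    step = max(1, round(fps * interval_s))
    for i, frame in enumerate(frames):
        if i % step == 0:
            yield frame
```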
6. A face data acquisition device, characterized by comprising an intelligent doorbell and a cloud server, wherein,
the intelligent doorbell further comprises: the device comprises a sensor, a camera and a first communication module;
the sensor is used for triggering the camera to start recording video when sensing that a person appears or passes by;
the first communication module is used for uploading the video information recorded by the camera to the cloud server;
the cloud server further comprises: a second communication module, a face detection module, a filtering processing module and a face recognition module,
the second communication module is used for receiving the video information uploaded by the intelligent doorbell;
the face detection module is used for detecting the face and extracting the detected face information; the face information comprises face position coordinate information and face key point coordinate information;
the filtering processing module is used for filtering the extracted face information according to a preset filtering condition and then sending the face information to the face recognition module;
the face recognition module is used for performing face feature extraction and face quality scoring on the filtered face information and storing the corresponding face data; and for clustering the face data according to the extracted face features and obtaining the optimal image of each type of face data according to the face quality scores.
7. The face data acquisition apparatus according to claim 6,
the filtering processing module filters the extracted face information in the following manner:
calculating the pixel area of the face image from the face position coordinate information, calculating the ratio of the face image's pixel area to that of the whole picture, and filtering out the corresponding face information if the ratio is greater than a preset ratio threshold;
calculating a face yaw angle from the face key point coordinate information, and filtering out the corresponding face information if the face yaw angle exceeds a preset range;
detecting, with a target detection model, whether a specified target appears in the picture of the face information, and filtering out the corresponding face information if it does; wherein the specified target comprises a take-away (food-delivery) helmet and/or clothing.
8. The face data acquisition apparatus according to claim 7,
the face recognition module is configured to perform face feature extraction and face quality scoring on the filtered face information in the following manner, and to store the corresponding face data:
calculating the face feature code of the face information using a face recognition model;
calculating the face quality score of the face information using a quality evaluation model;
and storing the face feature code, the face quality score and the image as the corresponding face data.
9. The face data acquisition apparatus according to claim 8,
the face recognition module is configured to cluster the face data in the following manner, and to obtain the optimal image of each type of face data according to the face quality scores:
clustering the face data according to the face feature codes;
and taking the image of the face data with the highest face quality score in each type of face data as the optimal image of the type of face data.
CN202210630438.2A 2022-06-06 2022-06-06 Face data acquisition method and device Pending CN114898443A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210630438.2A CN114898443A (en) 2022-06-06 2022-06-06 Face data acquisition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210630438.2A CN114898443A (en) 2022-06-06 2022-06-06 Face data acquisition method and device

Publications (1)

Publication Number Publication Date
CN114898443A true CN114898443A (en) 2022-08-12

Family

ID=82728537

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210630438.2A Pending CN114898443A (en) 2022-06-06 2022-06-06 Face data acquisition method and device

Country Status (1)

Country Link
CN (1) CN114898443A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115410306A (en) * 2022-09-01 2022-11-29 南京北新智能科技有限公司 Non-inductive attendance access control algorithm based on face recognition
CN116580828A (en) * 2023-05-16 2023-08-11 深圳弗瑞奇科技有限公司 Visual monitoring method for full-automatic induction identification of cat health
CN116580828B (en) * 2023-05-16 2024-04-02 深圳弗瑞奇科技有限公司 Visual monitoring method for full-automatic induction identification of cat health

Similar Documents

Publication Publication Date Title
CN110491004B (en) Resident community personnel safety management system and method
CN109377616B (en) Access control system based on two-dimensional face recognition
CN109299683B (en) Security protection evaluation system based on face recognition and behavior big data
JP6905850B2 (en) Image processing system, imaging device, learning model creation method, information processing device
CN109389719B (en) Community door access control system and door opening method
CN109658554B (en) Intelligent residential district security protection system based on big data
CN206515931U (en) A kind of face identification system
CN114898443A (en) Face data acquisition method and device
CN112991585B (en) Access personnel management method and computer readable storage medium
CN101404107A (en) Internet bar monitoring and warning system based on human face recognition technology
CN108108711B (en) Face control method, electronic device and storage medium
CN111767823A (en) Sleeping post detection method, device, system and storage medium
CN111800617A (en) Intelligent security system based on Internet of things
CN110942580A (en) Intelligent building visitor management method and system and storage medium
CN112634561A (en) Safety alarm method and system based on image recognition
CN108376237A (en) A kind of house visiting management system and management method based on 3D identifications
CN110956768A (en) Automatic anti-theft device of intelligence house
JP2002304651A (en) Device and method for managing entering/leaving room, program for executing the same method and recording medium with the same execution program recorded thereon
CN110717428A (en) Identity recognition method, device, system, medium and equipment fusing multiple features
CN113869115A (en) Method and system for processing face image
KR20200059643A (en) ATM security system based on image analyses and the method thereof
CN108197614A (en) A kind of examination hall monitor camera and system based on face recognition technology
CN111091047B (en) Living body detection method and device, server and face recognition equipment
CN112601054B (en) Pickup picture acquisition method and device, storage medium and electronic equipment
CN112070943B (en) Access control management system based on active RFID technology and face recognition technology

Legal Events

Date Code Title Description
PB01 Publication