CN115909468B - Facial feature occlusion detection method, storage medium and system - Google Patents

Facial feature occlusion detection method, storage medium and system

Info

Publication number
CN115909468B
Authority
CN
China
Prior art keywords
face
facial
image
skin
occlusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310023150.3A
Other languages
Chinese (zh)
Other versions
CN115909468A (en)
Inventor
王先来
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Bairui Network Technology Co ltd
Original Assignee
Guangzhou Bairui Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Bairui Network Technology Co ltd filed Critical Guangzhou Bairui Network Technology Co ltd
Priority to CN202310023150.3A priority Critical patent/CN115909468B/en
Publication of CN115909468A publication Critical patent/CN115909468A/en
Application granted granted Critical
Publication of CN115909468B publication Critical patent/CN115909468B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a facial feature occlusion detection method, a storage medium and a system. The detection method comprises the following steps: A. performing face recognition on a face image and extracting the region images corresponding to the eyes, nose and mouth; B. matching the extracted eye, nose and mouth region images one by one against the standard region feature images of the corresponding facial features, and judging a facial feature to be occluded if matching fails; if a facial feature is judged to be occluded, performing on the face image a palm occlusion type identification step C, a mask/scarf occlusion type identification step D and/or a sunglasses/hat occlusion type identification step E to identify the occlusion type. When the detection method detects that a facial feature is occluded, it can further identify the specific occlusion type and prompt the person transacting the business to make a targeted correction.

Description

Facial feature occlusion detection method, storage medium and system
Technical Field
The present invention relates to the field of image recognition technologies, and in particular, to a method, a storage medium, and a system for detecting facial feature occlusion.
Background
With the development of the Internet, many kinds of business can now be transacted online rather than only in person, which greatly saves time for the people transacting them and is convenient and fast. Some financial services offered by banks or securities brokers have strict compliance requirements, and to ensure compliance such services are usually transacted over a recorded audio-video call. During such a call, occlusion of the face must be avoided; otherwise the face recognition and authentication procedures at certain nodes of the business process cannot be passed and the compliance requirements are not met. Therefore, during video recording, the face of the person transacting the business needs to be checked for occlusion, and when occlusion is detected the person should be prompted to correct it so that the face remains unobstructed.
Face occlusion detection methods currently on the market can only detect which region of the face is occluded; they cannot detect what kind of object is occluding the face, i.e. they cannot identify the occlusion type, which makes it difficult to prompt the person transacting the business to make a targeted correction.
Disclosure of Invention
One of the technical problems to be solved by the invention is to provide a facial feature occlusion detection method and a computer-readable storage medium storing a computer program that implements the method.
To solve this problem, the invention provides a facial feature occlusion detection method comprising the following steps:
A. performing face recognition on the face image and extracting the region images corresponding to the eyes, nose and mouth;
B. matching the extracted eye, nose and mouth region images one by one against the standard region feature images of the corresponding facial features, and judging a facial feature to be occluded if matching fails;
if a facial feature is judged to be occluded, performing a palm occlusion type identification step C and/or a mask/scarf occlusion type identification step D on the face image, wherein:
C. the palm occlusion type identification step is as follows:
C1. extracting the contour of a suspected palm occlusion region from the face image;
C2. extracting suspected finger-gap line segments within the contour of the suspected palm occlusion region;
C3. calculating the length of each suspected finger-gap line segment, and judging the occlusion region to be a palm occlusion region if the lengths satisfy a first preset condition;
D. the mask/scarf occlusion type identification step is as follows:
D1. predicting an initial facial skin region from the face image;
D2. randomly selecting a plurality of local skin patches in the upper half of the initial facial skin region, and determining a threshold range of facial skin HSV color values from the HSV color values of these patches;
D3. extracting the facial skin region from the face image according to the threshold range of facial skin HSV color values, and judging the face image to be occluded by a mask/scarf if the facial skin region satisfies a second preset condition.
Further, if a facial feature is judged to be occluded, a sunglasses/hat occlusion type identification step E is also performed on the face image:
E1. determining the facial skin region in the face image;
E2. if the region images corresponding to the eyes are not within the facial skin region, judging the face image to be occluded by sunglasses/a hat.
Further, in step D1, the initial facial skin region is extracted from the face image according to a preset range of skin HSV color values.
Further, in step D2, outlier HSV values among the HSV color values of the local skin patches are removed using the interquartile range method.
Further, in step C2, the inclination angle of each suspected finger-gap line segment is calculated, [0°, 180°] is divided evenly into n angle intervals, the number of suspected finger-gap line segments whose inclination angles fall into each interval is counted, the interval containing the largest number is taken as the threshold interval, and only the suspected finger-gap line segments whose inclination angles lie within the threshold interval are retained.
Further, in step C3, the first preset condition is: the sum of the lengths of the suspected finger-gap line segments is greater than a preset length threshold.
Further, in step D3, the second preset condition is: the area of the facial skin region is smaller than a preset area threshold.
Further, step B specifically comprises binarizing the extracted eye, nose and mouth region images one by one according to a preset binarization threshold to obtain corresponding grayscale feature images, and then matching the grayscale feature images of the eyes, nose and mouth against the standard region feature images of the corresponding facial features;
the binarization Threshold is calculated according to the following formula:
Threshold = Sum / Amount * α
in the formula, sum is the total of the regional imagesThe gray-scale value of the gray-scale value,
Figure 558084DEST_PATH_IMAGE001
g is a gray value in the range of 0,255],
Figure 253508DEST_PATH_IMAGE002
The number of pixel points with the gray value of g;
Figure 823030DEST_PATH_IMAGE003
is the total number of pixels of the regional image,
Figure 98153DEST_PATH_IMAGE004
the method comprises the steps of carrying out a first treatment on the surface of the Alpha is a percentage constant, and the value range is (1% -100%).
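As an informal illustration only (not part of the patent text), the threshold can be computed from an 8-bit grayscale region image as follows; the array name region and the example value of α are assumptions:

import numpy as np

def binarization_threshold(region: np.ndarray, alpha: float = 0.5) -> float:
    """Compute Threshold = Sum / Amount * alpha for an 8-bit grayscale region image."""
    # Sum: total gray value, i.e. the sum of g * N_g over g = 0..255.
    gray_sum = float(region.sum())
    # Amount: total number of pixels in the region image.
    amount = region.size
    return gray_sum / amount * alpha

# Usage sketch: binarize an extracted eye/nose/mouth region with the computed threshold.
# t = binarization_threshold(gray_region, alpha=0.5)
# binary = (gray_region > t).astype(np.uint8) * 255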
Further, the method comprises step F: outputting a corresponding correction prompt according to the detected occlusion type.
The invention also provides a method for detecting facial feature occlusion during an audio-video call, comprising the following steps:
S1. acquiring video stream data of the audio-video call in real time;
S2. extracting the k-th frame image from the video stream data and performing facial feature occlusion detection on it using the facial feature occlusion detection method described above to obtain a facial feature occlusion detection result;
S3. extracting the (k+1)-th frame image from the video stream data and judging whether it differs from the k-th frame image:
S4. if there is no difference, not performing facial feature occlusion detection on the (k+1)-th frame image, and using the detection result of the k-th frame image as the detection result of the (k+1)-th frame image;
S5. if there is a difference, performing facial feature occlusion detection on the (k+1)-th frame image using the facial feature occlusion detection method described above to obtain a facial feature occlusion detection result.
The invention also provides a computer-readable storage medium storing an executable computer program which, when executed, implements the facial feature occlusion detection method described above or the method for detecting facial feature occlusion during an audio-video call described above.
The invention also provides a facial feature occlusion detection system comprising a camera device and a background server communicatively connected to the camera device, the background server comprising a processor and the computer-readable storage medium described above; the processor of the background server can execute the computer program in the storage medium to implement the facial feature occlusion detection method described above.
When the facial feature occlusion detection method provided by the invention detects that a facial feature is occluded, the specific occlusion type can be further identified through the palm occlusion type identification step and/or the mask/scarf occlusion type identification step, so that the person transacting the business can be prompted to make a targeted correction.
Drawings
Fig. 1 is a flowchart of the facial feature occlusion detection method provided by the invention.
Fig. 2 is a flowchart of the method for detecting facial feature occlusion during an audio-video call.
Fig. 3 is a schematic view of a face image with the contour of a suspected palm occlusion region outlined according to the invention.
Fig. 4 is a schematic view of the contour of a suspected palm occlusion region with suspected finger-gap line segments marked according to the invention.
Detailed Description
The invention will be described in detail with reference to specific examples.
This embodiment provides a facial feature occlusion detection system comprising a camera device and a background server communicatively connected to the camera device. The background server includes a processor and a computer-readable storage medium. The processor of the background server can execute the computer program in the storage medium to implement the method for detecting facial feature occlusion during an audio-video call shown in Fig. 2. When face occlusion is detected during the audio-video call, the method can identify the occlusion type and prompt the person transacting the business to make a targeted correction so that the face is not occluded. The process of executing the method on the background server is described below using a securities account-opening service as an example, with the user's mobile phone as the camera device of the detection system and the broker's background server as the background server of the detection system.
The user initiates an account-opening request through the broker's app installed on the mobile phone, then establishes an audio-video call with the computer terminal of a broker customer-service agent according to the operation prompts, and the agent guides the user through the relevant operations of the business process. During the audio-video call the user's mobile phone, acting as the camera device, transmits video stream data to the broker's background server in real time. After obtaining the video stream data of the call between the user and the customer-service agent, the broker's background server extracts images from the video stream frame by frame and performs facial feature occlusion detection on them, specifically:
The broker's background server performs facial feature occlusion detection on the first frame image (i.e. the face image) using the facial feature occlusion detection method shown in Fig. 1. First, face recognition is performed on the frame image with a convolutional neural network (CNN) to obtain the coordinate information of the eyes, nose and mouth; then the region images corresponding to the eyes, nose and mouth are extracted with a region-of-interest (ROI) extraction algorithm according to that coordinate information; feature extraction is then performed on these region images. In this embodiment a binarization method is used for feature extraction, and the binarization threshold Threshold is calculated according to the following preset formula:
Threshold = Sum / Amount × α
where Sum is the total gray value of the region image, Sum = Σ_{g=0}^{255} (g × N_g); g is a gray value in the range [0, 255]; N_g is the number of pixels whose gray value is g; Amount is the total number of pixels in the region image, Amount = Σ_{g=0}^{255} N_g; α is a percentage constant whose value range is 1%–100%.
After the binarization threshold is calculated, the broker's background server binarizes the extracted eye, nose and mouth region images one by one according to the binarization threshold to obtain the corresponding grayscale feature images, and then matches the grayscale feature images of the eyes, nose and mouth against the standard region feature images of the corresponding facial features. Taking the right eye region image as an example, the corresponding facial feature is the right eye, so the currently extracted right-eye grayscale feature image is matched against the standard region feature image of the right eye in the open-eye state and against the standard feature image of the right eye in the closed-eye state. If it matches either of them, matching succeeds and the right eye is judged not to be occluded; if it matches neither, the right eye is judged to be occluded. The matching detection of the grayscale feature images of the left eye, nose and mouth follows the same process as for the right eye region image and is not repeated.
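The patent does not name a specific matching algorithm; one possible sketch (an assumption for illustration, using normalized template matching from OpenCV) of the per-feature match test is:

import cv2
import numpy as np

def is_feature_occluded(gray_feature, standard_templates, match_threshold=0.6):
    """Return True if the feature image fails to match every standard region feature image.

    gray_feature       - binarized grayscale feature image of one facial feature
    standard_templates - standard region feature images (e.g. right eye open / closed)
    match_threshold    - assumed similarity threshold; the patent does not specify one
    """
    for template in standard_templates:
        # Resize so the two images can be correlated directly.
        resized = cv2.resize(gray_feature, (template.shape[1], template.shape[0]))
        score = cv2.matchTemplate(resized, template, cv2.TM_CCOEFF_NORMED)[0][0]
        if score >= match_threshold:
            return False   # matched one standard image: the feature is not occluded
    return True            # failed against all standard images: judged occluded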
Because different facial features can be occluded by different kinds of objects, when a particular facial feature is occluded only the occlusion type identification steps relevant to that feature need to be performed; for example, if the eyes are occluded, only the palm occlusion type identification step and the sunglasses/hat occlusion type identification step need to be performed, not the mask/scarf step. This embodiment therefore establishes in advance the correspondence between each occlusion type and the occlusion situation of each facial feature, specifically: (1) palm occlusion type — occlusion of any facial feature; (2) sunglasses/hat occlusion type — occlusion of both eyes; (3) mask/scarf occlusion type — occlusion of the mouth, or occlusion of the mouth and nose. A sketch of this correspondence is given below.
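A minimal sketch of this pre-established correspondence (the set-based representation and feature names are assumptions for illustration):

def candidate_occlusion_types(occluded):
    """occluded is a subset of {"left_eye", "right_eye", "nose", "mouth"}."""
    steps = []
    if occluded:                                   # (1) palm: any occluded facial feature
        steps.append("palm")
    if {"left_eye", "right_eye"} <= occluded:      # (2) sunglasses/hat: both eyes occluded
        steps.append("sunglasses/hat")
    if "mouth" in occluded:                        # (3) mask/scarf: mouth, or mouth and nose
        steps.append("mask/scarf")
    return steps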
Taking the case in which both the left eye and the right eye are judged to be occluded as an example, the occlusion types corresponding to eye occlusion are the palm occlusion type and the sunglasses/hat occlusion type, so the broker's background server performs the palm occlusion type identification step and the sunglasses/hat occlusion type identification step on the frame image.
The palm occlusion type identification step is performed as follows:
as shown in fig. 3, the embodiment first extracts a contour of a suspected palm shielding region from the first frame image, and then extracts a suspected finger suture segment from the contour of the suspected palm shielding region (see fig. 4). The palm shelters from usually having many fingers, can have the gap between the finger, so the regional profile of palm shelter from can have many finger seam line segments to contained angle (hereafter called inclination) between each finger seam line segment and the image base is usually little, in order to reduce misjudgement probability, this embodiment utilizes the inclination of each finger seam line segment to differ little characteristic to the suspected finger seam line segment screening: calculating the inclination angles of all the suspected finger seam line segments, uniformly dividing the [0 DEG, 180 DEG ] into 9 angle sections, counting the number of the suspected finger seam line segments with the inclination angles falling into each angle section, taking the angle section with the largest number as a threshold section, for example, taking the inclination angle of most suspected finger seam line segments as the threshold section, and taking the [20 DEG, 40 DEG ] as the threshold section, wherein only the suspected finger seam line segments with the inclination angles within the threshold section [20 DEG, 40 DEG ] are reserved. After the suspected finger seam segments are screened out, the lengths of the screened suspected finger seam segments are calculated according to the coordinates of the vertexes at the two ends of each suspected finger seam segment through a two-point distance formula, then the lengths of the suspected finger seam segments are accumulated, if the sum of the lengths of the suspected finger seam segments is larger than a preset length threshold value, namely, a first preset condition is met, the shielding area is judged to be a palm shielding area, namely, the shielding type of the left eye of the human face in the first frame image is judged to be palm shielding. The preset length threshold is set to be 1/2 of the face width of the currently detected face image.
If the occlusion type of the left eye in the first frame image is identified as palm occlusion, the sunglasses/hat occlusion type identification step is not performed, and a corresponding correction prompt, such as "your eyes are occluded, please remove your palm", is output according to the detected palm occlusion type. If the occlusion type of the left eye is not palm occlusion, the sunglasses/hat occlusion type identification step is performed as follows:
according to the embodiment, firstly, a facial skin region is extracted from a first frame of facial image according to a preset range of facial skin HSV color values (the preset range is determined according to HSV color values of common skin colors and covers the facial skin HSV color values of the common skin colors), if a region image corresponding to eyes is not in the extracted facial skin region, the facial image is judged to be blocked by a sunglasses/hat, and then a correction prompt' eyes are blocked, and the sunglasses/hat are removed.
As a variant embodiment, the order of the palm occlusion type identification step and the sunglasses/hat occlusion type identification step may be swapped, i.e. it is first identified whether the occlusion is sunglasses/hat occlusion, and if not, whether it is palm occlusion.
If the eyes of the face in the first frame image are not occluded but the nose and mouth are occluded, the corresponding occlusion types are palm occlusion and mask/scarf occlusion, so the broker's background server performs the palm occlusion type identification step and the mask/scarf occlusion type identification step on the frame image. The palm occlusion type identification step is performed as described above and is not repeated here. Assuming the occlusion type of the nose and mouth is not palm occlusion, the broker's background server performs the mask/scarf occlusion type identification step as follows:
according to the embodiment, firstly, according to the preset range of the HSV color value of the facial skin, the facial skin initial area is extracted from the first frame image, namely, the facial skin initial area is obtained according to the first frame image prediction. Because the preset range of the HSV color values of the face skin covers HSV color values of various skin colors, the initial area of the face skin extracted from the current image possibly contains a non-skin area, and because eyes are not blocked, meaning that the skin at the upper half of the face is likely to be completely exposed, the HSV color value of the skin at the upper half of the face can be taken as an actual skin HSV color value in the image, in order to reduce errors, the embodiment randomly selects a plurality of local skin areas at the upper half of the initial area of the face skin, then adopts a quarter-bit distance method to reject abnormal HSV values in the HSV color values of the local skin areas, then determines the threshold range of the HSV color values of the local skin areas according to the HSV color values of the residual local skin areas after being rejected, accordingly, extracts the face skin area of the face from the face image, if the face area is smaller than the threshold of the preset area, namely, meets the second preset condition, then judges that the face image exists/is blocked, and then outputs a correction prompt that the nose mask/the mask is blocked. In the embodiment, the preset area threshold is set to be 2/3 of the area of the face area in the image, the face area refers to the full face area comprising the shielding part, and face recognition prediction is carried out through a convolutional neural network.
The above is the process of performing facial feature occlusion detection on the first frame image. After obtaining the detection result of the first frame image, the broker's background server extracts the second frame image from the video stream data and judges whether it differs from the first frame image. If there is no difference, meaning the user's pose has not changed, facial feature occlusion detection is not performed on the second frame image and the detection result of the first frame image is used directly as the detection result of the second frame image. This reduces the occlusion detection frequency, lowers power consumption and improves overall detection efficiency. If there is a difference, facial feature occlusion detection is performed on the second frame image according to the detection flow applied to the first frame image, and the detection result of the second frame image is obtained. After obtaining the facial feature occlusion detection result of the second frame image, the broker's background server extracts the third frame image from the video stream data and judges whether it differs from the second frame image. If there is no difference, the detection result of the second frame image is used directly as the result of the third frame image; if there is a difference, detection is performed on the third frame image according to the same flow. The fourth frame, fifth frame, ..., k-th frame images are processed in the same way.
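The patent does not state how the difference between consecutive frames is measured; one minimal sketch of the skip logic in steps S3–S5, using mean absolute grayscale difference against an assumed tolerance, is:

import cv2
import numpy as np

def frames_differ(prev_bgr, curr_bgr, tol=2.0):
    """Assumed difference test: mean absolute grayscale difference above tol."""
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
    return float(np.abs(curr - prev).mean()) > tol

def detect_stream(frames, detect_occlusion):
    """Run full facial feature occlusion detection only on frames that differ from their predecessor."""
    results, prev_frame, prev_result = [], None, None
    for frame in frames:
        if prev_frame is None or frames_differ(prev_frame, frame):
            prev_result = detect_occlusion(frame)    # steps S2 / S5: run the full detection
        results.append(prev_result)                  # step S4: reuse the previous frame's result
        prev_frame = frame
    return results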
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention and not to limit its scope. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solution of the present invention without departing from its spirit and scope.

Claims (9)

1. A facial feature occlusion detection method, comprising the following steps:
A. performing face recognition on the face image and extracting the region images corresponding to the eyes, nose and mouth;
B. matching the extracted eye, nose and mouth region images one by one against the standard region feature images of the corresponding facial features, and judging a facial feature to be occluded if matching fails;
characterized in that, if a facial feature is judged to be occluded, a palm occlusion type identification step C and a mask/scarf occlusion type identification step D are performed on the face image, wherein:
C. the palm occlusion type identification step is as follows:
C1. extracting the contour of a suspected palm occlusion region from the face image;
C2. extracting suspected finger-gap line segments within the contour of the suspected palm occlusion region;
C3. calculating the length of each suspected finger-gap line segment, and judging the occlusion region to be a palm occlusion region if the lengths satisfy a first preset condition;
D. the mask/scarf occlusion type identification step is as follows:
D1. predicting an initial facial skin region from the face image;
D2. randomly selecting a plurality of local skin patches in the upper half of the initial facial skin region, and determining a threshold range of facial skin HSV color values from the HSV color values of these patches;
D3. extracting the facial skin region from the face image according to the threshold range of facial skin HSV color values, and judging the face image to be occluded by a mask/scarf if the facial skin region satisfies a second preset condition;
in step C2, the inclination angle of each suspected finger-gap line segment is calculated, [0°, 180°] is divided evenly into n angle intervals, the number of suspected finger-gap line segments whose inclination angles fall into each interval is counted, the interval containing the largest number is taken as the threshold interval, and only the suspected finger-gap line segments whose inclination angles lie within the threshold interval are retained.
2. The facial feature occlusion detection method according to claim 1, characterized in that, if a facial feature is judged to be occluded, a sunglasses/hat occlusion type identification step E is also performed on the face image:
E1. determining the facial skin region in the face image;
E2. if the region images corresponding to the eyes are not within the facial skin region, judging the face image to be occluded by sunglasses/a hat.
3. The facial feature occlusion detection method according to claim 1, characterized in that in step D1 the initial facial skin region is extracted from the face image according to a preset range of skin HSV color values.
4. The facial feature occlusion detection method according to claim 1, characterized in that in step D2 outlier HSV values among the HSV color values of the local skin patches are removed using the interquartile range method.
5. The facial feature occlusion detection method according to claim 1, characterized in that:
in step C3, the first preset condition is: the sum of the lengths of the suspected finger-gap line segments is greater than a preset length threshold; and/or
in step D3, the second preset condition is: the area of the facial skin region is smaller than a preset area threshold.
6. The facial feature occlusion detection method according to claim 1, characterized in that step B specifically comprises binarizing the extracted eye, nose and mouth region images one by one according to a preset binarization threshold to obtain corresponding grayscale feature images, and then matching the grayscale feature images of the eyes, nose and mouth against the standard region feature images of the corresponding facial features;
the binarization Threshold is calculated according to the following formula:
Threshold = Sum / Amount * α
where Sum is the total gray value of the area image,
Figure QLYQS_1
g is a gray value in the range of 0,255],/>
Figure QLYQS_2
The number of pixel points with the gray value of g; />
Figure QLYQS_3
Is the total number of pixels of the regional image,
Figure QLYQS_4
the method comprises the steps of carrying out a first treatment on the surface of the Alpha is a percentage constant and the value range is 1% -100%.
7. The facial feature occlusion detection method according to any one of claims 1 to 6, further comprising step F: outputting a corresponding correction prompt according to the detected occlusion type.
8. A computer-readable storage medium storing an executable computer program, characterized in that the computer program, when executed, implements the facial feature occlusion detection method according to any one of claims 1 to 7.
9. A facial feature occlusion detection system comprising a camera device and a background server communicatively connected to the camera device, characterized in that the background server comprises a processor and the computer-readable storage medium according to claim 8, the processor of the background server being able to execute the computer program in the storage medium to implement the facial feature occlusion detection method according to any one of claims 1 to 7.
CN202310023150.3A 2023-01-09 2023-01-09 Facial feature occlusion detection method, storage medium and system Active CN115909468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310023150.3A CN115909468B (en) 2023-01-09 2023-01-09 Facial feature occlusion detection method, storage medium and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310023150.3A CN115909468B (en) 2023-01-09 2023-01-09 Facial feature occlusion detection method, storage medium and system

Publications (2)

Publication Number Publication Date
CN115909468A CN115909468A (en) 2023-04-04
CN115909468B true CN115909468B (en) 2023-06-06

Family

ID=86481987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310023150.3A Active CN115909468B (en) 2023-01-09 2023-01-09 Facial feature occlusion detection method, storage medium and system

Country Status (1)

Country Link
CN (1) CN115909468B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104364733A (en) * 2012-06-01 2015-02-18 夏普株式会社 Position-of-interest detection device, position-of-interest detection method, and position-of-interest detection program
CN107016370A (en) * 2017-04-10 2017-08-04 电子科技大学 One kind is based on the enhanced partial occlusion face identification method of data

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108319953B (en) * 2017-07-27 2019-07-16 腾讯科技(深圳)有限公司 Occlusion detection method and device, electronic equipment and the storage medium of target object
CN109711297A (en) * 2018-12-14 2019-05-03 深圳壹账通智能科技有限公司 Risk Identification Method, device, computer equipment and storage medium based on facial picture
CN111428581B (en) * 2020-03-05 2023-11-21 平安科技(深圳)有限公司 Face shielding detection method and system
CN111753783A (en) * 2020-06-30 2020-10-09 北京小米松果电子有限公司 Finger occlusion image detection method, device and medium
CN112287823A (en) * 2020-10-28 2021-01-29 怀化学院 Facial mask identification method based on video monitoring
CN213958135U (en) * 2020-12-22 2021-08-13 广州传晟智能科技有限公司 Wear gauze mask face identification temperature measurement entrance guard's equipment
CN113705466B (en) * 2021-08-30 2024-02-09 浙江中正智能科技有限公司 Face five sense organ shielding detection method for shielding scene, especially under high imitation shielding

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104364733A (en) * 2012-06-01 2015-02-18 夏普株式会社 Position-of-interest detection device, position-of-interest detection method, and position-of-interest detection program
CN107016370A (en) * 2017-04-10 2017-08-04 电子科技大学 One kind is based on the enhanced partial occlusion face identification method of data

Also Published As

Publication number Publication date
CN115909468A (en) 2023-04-04

Similar Documents

Publication Publication Date Title
CN108520219B (en) Multi-scale rapid face detection method based on convolutional neural network feature fusion
KR101808467B1 (en) Feature extraction and matching and template update for biometric authentication
KR101159830B1 (en) Red eye false positive filtering using face location and orientation
US9922238B2 (en) Apparatuses, systems, and methods for confirming identity
US7970180B2 (en) Method, apparatus, and program for processing red eyes
KR100374708B1 (en) Non-contact type human iris recognition method by correction of rotated iris image
CN110751025A (en) Business handling method, device, equipment and medium based on face recognition
US11804071B2 (en) Method for selecting images in video of faces in the wild
KR20040059313A (en) Method of extracting teeth area from teeth image and personal identification method and apparatus using teeth image
CN111898413A (en) Face recognition method, face recognition device, electronic equipment and medium
US20140079296A1 (en) Biometric identification via retina scanning
CN110619300A (en) Correction method for simultaneous recognition of multiple faces
CN110612530A (en) Method for selecting a frame for use in face processing
CN112396050B (en) Image processing method, device and storage medium
CN111898610B (en) Card unfilled corner detection method, device, computer equipment and storage medium
CN110532992A (en) A kind of face identification method based on visible light and near-infrared
Gasparini et al. Automatic red-eye removal for digital photography
CN115909468B (en) Face five sense organs shielding detection method, storage medium and system
US20230103555A1 (en) Information processing apparatus, information processing method, and program
KR102518061B1 (en) Method and apparatus for checking whether mask is worn through facial contour estimation
JP2005084979A (en) Face authentication system, method and program
CN112381042A (en) Method for extracting palm vein features from palm vein image and palm vein identification method
CN111325058A (en) Driving behavior detection method, device and system and storage medium
NL2024816B1 (en) Detection method for detecting an occlusion of an eye region in an image
CN115810214B (en) AI-based face recognition verification management method, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant