CN115909468A - Human face facial feature occlusion detection method, storage medium and system - Google Patents

Human face facial feature occlusion detection method, storage medium and system

Info

Publication number
CN115909468A
CN115909468A (application CN202310023150.3A)
Authority
CN
China
Prior art keywords
occlusion
face
area
human face
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310023150.3A
Other languages
Chinese (zh)
Other versions
CN115909468B (en)
Inventor
王先来
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Bairui Network Technology Co ltd
Original Assignee
Guangzhou Bairui Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Bairui Network Technology Co ltd filed Critical Guangzhou Bairui Network Technology Co ltd
Priority to CN202310023150.3A priority Critical patent/CN115909468B/en
Publication of CN115909468A publication Critical patent/CN115909468A/en
Application granted granted Critical
Publication of CN115909468B publication Critical patent/CN115909468B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a human face facial feature occlusion detection method, a storage medium and a system. The detection method comprises the following steps: A. performing face recognition on a face image and extracting the region images corresponding to the eyes, nose and mouth; B. matching each extracted region image of the eyes, nose and mouth against the standard region feature map of the corresponding facial feature type, and if the matching fails, judging that the facial feature type is occluded; and if a facial feature is judged to be occluded, executing a palm occlusion type identification step, a mask/scarf occlusion type identification step and/or a sunglasses/hat occlusion type identification step on the face image to identify the occlusion type. When an occluded facial feature is detected, the method can further identify the specific occlusion type, making it convenient to prompt the person transacting business to correct it in a targeted manner.

Description

Human face facial feature occlusion detection method, storage medium and system
Technical Field
The invention relates to the technical field of image recognition, and in particular to a human face facial feature occlusion detection method, storage medium and system.
Background
With the development of the Internet, many services are now transacted online rather than only on site, which greatly saves time for the person transacting business and is convenient and fast. Some financial services of banks or securities brokers have strict compliance requirements, and to ensure compliance such services generally must be transacted over a recorded audio/video call. During such a call, face occlusion must be avoided; otherwise the face recognition authentication steps at certain nodes of the business process cannot be passed and the compliance requirement cannot be met. Therefore, during video recording, the face occlusion status of the person transacting business needs to be detected, and when occlusion is detected the person should be prompted to correct it so that the face remains unobstructed.
Existing face occlusion detection methods on the market can only detect which area of the face is occluded; they cannot detect what object is occluding the face, i.e. the occlusion type, which makes it inconvenient to prompt the person transacting business to correct the occlusion in a targeted manner.
Disclosure of Invention
One of the technical problems to be solved by the present invention is to provide a human face facial feature occlusion detection method, and a computer-readable storage medium storing a computer program implementing the method, that can detect the type of face occlusion and thus conveniently prompt the person transacting business to correct it in a targeted manner.
In order to solve the technical problem, the invention provides a human face facial feature occlusion detection method, which comprises the following steps:
A. carrying out face recognition on the face image and respectively extracting area images corresponding to eyes, a nose and a mouth;
B. matching each extracted region image of the eyes, nose and mouth against the standard region feature map of the corresponding facial feature type, and if the matching fails, judging that the facial feature type is occluded;
if a facial feature type is judged to be occluded, executing step C (palm occlusion type identification) and/or step D (mask/scarf occlusion type identification) on the face image, wherein:
C. the palm occlusion type identification step is as follows:
C1. extracting a suspected palm occlusion area contour from the face image;
C2. extracting suspected finger-seam line segments within the suspected palm occlusion area contour;
C3. calculating the length of each suspected finger-seam line segment, and if the lengths satisfy a first preset condition, judging that the occluded area is a palm occlusion area;
D. the mask/scarf occlusion type identification step is as follows:
D1. predicting an initial face skin region from the face image;
D2. randomly selecting several local skin areas in the upper half of the initial face skin region, and determining a threshold range of face skin HSV color values from the HSV color values of those local skin areas;
D3. extracting the face skin region from the face image according to the threshold range of face skin HSV color values, and if the face skin region satisfies a second preset condition, judging that the face image has mask/scarf occlusion.
Further, if a facial feature type is judged to be occluded, a sunglasses/hat occlusion type identification step E is also performed on the face image:
E1. determining the face skin region in said face image;
E2. if the region image corresponding to the eyes is not within the face skin region, judging that the face image has sunglasses/hat occlusion.
Further, in step D1, specifically, the initial face skin region is extracted from the face image according to a preset range of skin HSV color values.
Further, in step D2, abnormal HSV values among the HSV color values of the local skin areas are removed by the interquartile range (IQR) method.
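The interquartile-range screening of step D2 can be sketched as follows. This is a minimal illustration assuming the standard 1.5×IQR rule applied per HSV channel; the function name and the multiplier k are assumptions, as the patent does not specify them.

```python
import numpy as np

def remove_hsv_outliers(values, k=1.5):
    """Drop values outside [Q1 - k*IQR, Q3 + k*IQR] (interquartile-range rule)."""
    v = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(v, [25, 75])  # first and third quartiles
    iqr = q3 - q1                        # interquartile range
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return v[(v >= lo) & (v <= hi)]

# Applied independently to the H, S and V samples of the selected local
# skin areas; the extremes of what survives give the threshold range.
```

For example, `remove_hsv_outliers([10, 11, 12, 13, 100])` keeps the first four values and drops the outlier 100.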
Further, in step C2, the inclination angle of each suspected finger-seam line segment is calculated, the range [0°, 180°] is divided into n angle intervals, the number of suspected finger-seam line segments whose inclination angle falls into each interval is counted, the interval with the largest count is taken as the threshold interval, and only the suspected finger-seam line segments whose inclination angle lies in the threshold interval are retained.
Further, in step C3, the first preset condition is: the sum of the lengths of the suspected finger-seam line segments is greater than a preset length threshold.
Further, in step D3, the second preset condition is: the area of the face skin region is smaller than a preset area threshold.
Further, in step B, specifically, the extracted eye, nose and mouth region images are binarized one by one according to a preset binarization threshold to obtain corresponding grayscale feature maps, and the grayscale feature maps of the eyes, nose and mouth are then matched against the standard region feature maps of the corresponding facial feature types;
wherein the binarization Threshold is calculated by the following formula:

Threshold = (Sum / Amount) × α

where Sum = Σ g·N(g), summed over gray values g = 0, …, 255, is the total gray value of the region image; g is a gray value in the range [0, 255] and N(g) is the number of pixels with gray value g; Amount = Σ N(g) is the total number of pixels in the region image; and α is a percentage constant in the range (1%, 100%].
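Since Amount is the pixel count, the formula reduces to α times the mean gray level of the region image. A minimal NumPy sketch follows; the function names and the choice of which side of the threshold counts as foreground are illustrative assumptions, not from the patent.

```python
import numpy as np

def binarization_threshold(gray_img, alpha=0.5):
    """Threshold = (Sum / Amount) * alpha, i.e. alpha times the mean gray level."""
    sum_gray = int(gray_img.sum())  # Sum    = Σ g · N(g), total gray value
    amount = gray_img.size          # Amount = total number of pixels
    return sum_gray / amount * alpha

def binarize(gray_img, alpha=0.5):
    """Binarize a grayscale region image with the computed threshold."""
    t = binarization_threshold(gray_img, alpha)
    return (gray_img >= t).astype(np.uint8) * 255  # foreground side assumed
```

With `alpha=0.5`, a region image of mean gray level 100 is thresholded at 50.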
Further, a step F is included: outputting a corresponding correction prompt according to the detected occlusion type.
The invention also provides a method for detecting facial feature occlusion during an audio/video call, which comprises the following steps:
S1. acquiring video stream data of the audio/video call in real time;
S2. extracting the kth frame image from the video stream data and performing facial feature occlusion detection on it with the above detection method to obtain a detection result;
S3. extracting the (k+1)th frame image from the video stream data and judging whether it differs from the kth frame image;
S4. if there is no difference, not performing facial feature occlusion detection on the (k+1)th frame image, and taking the detection result of the kth frame image as the detection result of the (k+1)th frame image;
S5. if there is a difference, performing facial feature occlusion detection on the (k+1)th frame image with the above detection method to obtain its detection result.
The present invention also provides a computer-readable storage medium storing an executable computer program which, when executed, implements the above human face facial feature occlusion detection method or the above method for detecting facial feature occlusion during an audio/video call.
The invention also provides a human face facial features occlusion detection system, which comprises a camera device and a background server in communication connection with the camera device, wherein the background server comprises a processor and the computer readable storage medium, and the processor of the background server can execute a computer program in the storage medium so as to realize the human face facial features occlusion detection method.
When the human face facial feature occlusion detection method provided by the invention detects that a facial feature is occluded, it can further identify the specific occlusion type through the palm occlusion type identification step and/or the mask/scarf occlusion type identification step, so that the person transacting business can be prompted to correct the specific occlusion in a targeted manner.
Drawings
FIG. 1 is a flow chart of a human face facial feature occlusion detection method provided by the invention.
Fig. 2 is a flowchart of a method for detecting facial features occlusion in an audio/video call process according to the present invention.
Fig. 3 is a schematic diagram of a face image with a suspected palm occlusion area outlined according to the present invention.
FIG. 4 is a schematic diagram of a suspected palm-covered area marked with a suspected finger line segment according to the present invention.
Detailed Description
The invention is described in detail below with reference to specific embodiments.
This embodiment provides a human face facial feature occlusion detection system comprising a camera device and a backend server in communication connection with it. The backend server includes a processor and a computer-readable storage medium. The processor of the backend server can execute the computer program in the storage medium, thereby implementing the method for detecting facial feature occlusion during an audio/video call shown in fig. 2. When face occlusion is detected during the call, the method can identify the occlusion type and specifically prompt the person transacting business to correct it, ensuring the face remains unobstructed. The process is explained below taking a securities broker account-opening service as an example, with the user's mobile phone as the camera device of the detection system and the securities broker's backend server as the backend server.
The user initiates an account-opening request through the securities broker's app installed on the mobile phone and establishes an audio/video call with the computer terminal of the broker's customer service agent according to the operation prompts; the agent then guides the user through the transaction process. During the call, the user's mobile phone, acting as the camera device, transmits video stream data to the broker's backend server in real time. After obtaining the video stream data of the call between the user and the agent, the backend server extracts images from the video stream frame by frame and performs facial feature occlusion detection, specifically:
For the extracted first frame image (i.e., the face image), the backend server performs facial feature occlusion detection using the method shown in fig. 1. First, face recognition is performed on the frame via a CNN (convolutional neural network) to obtain the coordinates of the eyes, nose and mouth; then, region images corresponding to the eyes, nose and mouth are extracted via an ROI (region of interest) extraction algorithm according to those coordinates; feature extraction is then performed on those region images by binarization, with the binarization Threshold calculated by the preset formula:
Threshold = (Sum / Amount) × α

where Sum = Σ g·N(g), summed over gray values g = 0, …, 255, is the total gray value of the region image; g is a gray value in the range [0, 255] and N(g) is the number of pixels with gray value g; Amount = Σ N(g) is the total number of pixels in the region image; and α is a percentage constant in the range (1%, 100%].
After the binarization threshold is calculated, the backend server binarizes the extracted eye, nose and mouth region images one by one to obtain the corresponding grayscale feature maps, and then matches each against the standard region feature map of the corresponding facial feature type. Taking the right-eye region image as an example (its facial feature type is the right eye), the currently extracted right-eye grayscale feature map is matched against the standard region feature map of the right eye in the open-eye state and against that in the closed-eye state. If it matches either, the match succeeds and the right eye is judged not occluded; if it matches neither, the right eye is judged occluded. The matching of the left-eye, nose and mouth region images follows the same process and is not repeated.
Because different facial feature types correspond to different occlusion types, when a given facial feature type is occluded only the identification steps for its associated occlusion types need to be executed. For example, if the eyes are occluded, only the palm and sunglasses/hat occlusion type identification steps are needed, not the mask/scarf step. This embodiment therefore establishes in advance a correspondence between each occlusion type and the occlusion situations of the facial feature types, specifically: (1) palm occlusion type — occlusion of any facial feature type; (2) sunglasses/hat occlusion type — occlusion of both eyes; (3) mask/scarf occlusion type — occlusion of the mouth, or of the mouth and nose.
Taking the case where both the left eye and the right eye are judged occluded as an example, the occlusion types corresponding to eye occlusion include palm occlusion and sunglasses/hat occlusion, so the backend server executes the palm occlusion type identification step and then the sunglasses/hat occlusion type identification step on the frame image.
The palm occlusion type identification step proceeds as follows:
as shown in fig. 3, in the present embodiment, a suspected palm-occluded area contour is extracted from the first frame image, and then a suspected finger seam line segment is extracted from the suspected palm-occluded area contour (see fig. 4). The palm shelters from and has many fingers usually, can have the gap between the finger, so the palm shelters from regional profile and can have many finger seam line segments to contained angle (hereinafter referred to as inclination) between each finger seam line segment and the image base usually differs very much, for reducing the erroneous judgement probability, the characteristic that the inclination of this embodiment utilization each finger seam line segment is not big differs screens the suspected finger seam line segment: calculating the inclination angle of each suspected finger seam line segment, dividing [0 degrees and 180 degrees ] into 9 angle intervals, counting the number of the suspected finger seam line segments with the inclination angles falling into each angle interval, taking the angle interval with the largest number as a threshold interval, for example, taking [20 degrees and 40 degrees ] as the threshold interval when the inclination angles of most suspected finger seam line segments are positioned in the angle interval [20 degrees and 40 degrees ], and only keeping the suspected finger seam line segments with the inclination angles positioned in the threshold interval [20 degrees and 40 degrees ]. 
After the suspected finger-seam segments are screened, the length of each retained segment is calculated from the coordinates of its two endpoints using the two-point distance formula, and the lengths are summed. If the sum is greater than a preset length threshold, i.e. the first preset condition is met, the occluded area is judged to be a palm occlusion area — that is, the occlusion type of the left eye of the face in the first frame image is palm occlusion. In this embodiment, the preset length threshold is set to 1/2 of the face width in the currently detected face image.
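The length test (the first preset condition) then reduces to summing distances over the retained segments; `math.dist` implements the two-point distance formula. The function name and argument shapes are illustrative.

```python
import math

def is_palm_occlusion(segments, face_width):
    """First preset condition: the summed length of the retained finger-seam
    segments exceeds half the face width of the detected face."""
    total = sum(math.dist(p, q) for p, q in segments)
    return total > face_width / 2
```

For instance, segments of lengths 5 and 10 (total 15) satisfy the condition for a face width of 20 (threshold 10) but not for a face width of 40 (threshold 20).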
If the occlusion type of the left eye in the first frame image is identified as palm occlusion, the sunglasses/hat occlusion type identification step is no longer executed; instead, a corresponding correction prompt such as "eyes are occluded, please move your palm" is output according to the detected palm occlusion type. If the occlusion type of the left eye is not palm occlusion, the sunglasses/hat occlusion type identification step is executed as follows:
according to the embodiment, firstly, according to a preset range of human face skin HSV color values (the preset range is determined according to the HSV color values of common skin colors and covers the human face skin HSV color values of common skin colors), a human face skin area is extracted from a first frame of human face image, if an area image corresponding to eyes is not in the extracted human face skin area, the fact that the human face image is shielded by sunglasses or a hat is judged, and then a correction prompt that the eyes are shielded and the sunglasses or the hat is required to be taken off is output.
As a variant embodiment, the palm occlusion type identification step and the sunglasses/hat occlusion type identification step can be executed in the reverse order: sunglasses/hat occlusion is identified first, and if it is not present, palm occlusion is identified.
If in the first frame image the eyes are not occluded but the nose and mouth are, the corresponding occlusion types include palm occlusion and mask/scarf occlusion, so the backend server executes the palm occlusion type identification step and then the mask/scarf occlusion type identification step on the frame image. The palm occlusion type identification step proceeds as described above and is not repeated here. Assuming the occlusion of the nose and mouth is not palm occlusion, the backend server performs the mask/scarf occlusion type identification step as follows:
in this embodiment, firstly, according to a preset range of the color value of the face skin HSV, a face skin initial region is extracted from a first frame image, that is, the face skin initial region is obtained according to the prediction of the first frame image. Because the preset range of the HSV color values of the face skin covers HSV color values of various skin colors, the original region of the face skin extracted from the current image may include a non-skin region, because eyes are not blocked, the skin at the upper half part of the face is likely to be completely naked, the HSV color values of the skin at the upper half part of the face can be used as actual HSV color values of the skin in the image, in order to reduce errors, a plurality of local skin regions are randomly selected at the upper half part of the original region of the face skin, abnormal HSV values in the local skin regions are eliminated by adopting a four-bit-division method, then the threshold range of the HSV color values of the skin in the image is determined according to the HSV color values of the local skin regions left after elimination, the face skin region is extracted from the face image accordingly, if the area of the face skin region is smaller than a preset area threshold, namely a second preset condition is met, the face image is judged to have a mask/block, and then a correction prompt that the nose and the mouth are blocked is output, and the mask/scarf is removed. In the embodiment, the preset area threshold is set to be 2/3 of the area of a face area in the image, the face area refers to a whole face area including a shielding part, and the face area is obtained through face recognition prediction through a convolutional neural network.
After obtaining the facial feature occlusion detection result for the first frame image, the backend server extracts the second frame image from the video stream data and judges whether it differs from the first frame image. If there is no difference, meaning the user's posture has not changed, facial feature occlusion detection is not executed on the second frame image and the detection result of the first frame is used directly as the result for the second frame. This reduces the occlusion detection frequency, lowers power consumption and improves overall detection efficiency. If there is a difference, facial feature occlusion detection is performed on the second frame image following the same process as for the first frame. The backend server then extracts the third frame image and judges whether it differs from the second frame image; again, if there is no difference the result for the second frame is reused as the result for the third frame, and if there is a difference detection is performed on the third frame image following the same process.
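The frame-skipping loop can be sketched as below. The patent does not say how the "difference" between frames is measured; a simple per-pixel difference ratio is assumed here, and all names are illustrative.

```python
import numpy as np

def frames_differ(prev, curr, pixel_tol=10, ratio=0.01):
    """Two gray frames differ when more than `ratio` of the pixels change
    by more than `pixel_tol` gray levels (simple frame differencing)."""
    changed = np.abs(prev.astype(int) - curr.astype(int)) > pixel_tol
    return changed.mean() > ratio

def detect_stream(frames, detect):
    """Run the full occlusion detection only on frames that differ from the
    previous one; otherwise reuse the previous frame's result."""
    results = []
    for i, frame in enumerate(frames):
        if i == 0 or frames_differ(frames[i - 1], frame):
            results.append(detect(frame))
        else:
            results.append(results[-1])
    return results
```

With three frames where only the third differs from its predecessor, the detector runs twice and the middle frame reuses the first frame's result.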
Similarly, the fourth, fifth, …, kth frame images are processed in turn, and so on.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit its scope of protection. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions can be made to the technical solution without departing from its spirit and scope.

Claims (10)

1. A human face five sense organs shielding detection method comprises the following steps:
A. carrying out face recognition on the face image and respectively extracting area images corresponding to eyes, a nose and a mouth;
B. matching and detecting the extracted region images of the eyes, the nose and the mouth one by one with the corresponding standard region characteristic graphs of the types of the five sense organs, and if the matching fails, judging that the types of the five sense organs are blocked;
the method is characterized in that if the type of the five sense organs is judged to be shielded, a step of recognizing the palm shielding type and/or a step of recognizing the mask/scarf shielding type are/is executed on the face image, wherein the steps are as follows:
C. the palm occlusion type identification step is as follows:
C1. extracting a suspected palm occlusion area contour from the face image;
C2. extracting suspected finger-seam line segments within the suspected palm occlusion area contour;
C3. calculating the length of each suspected finger-seam line segment, and if the lengths satisfy a first preset condition, judging that the occluded area is a palm occlusion area;
D. the mask/scarf occlusion type identification step is as follows:
D1. predicting an initial face skin region from the face image;
D2. randomly selecting several local skin areas in the upper half of the initial face skin region, and determining a threshold range of face skin HSV color values from the HSV color values of those local skin areas;
D3. extracting the face skin region from the face image according to the threshold range of face skin HSV color values, and if the face skin region satisfies a second preset condition, judging that the face image has mask/scarf occlusion.
2. The method for detecting facial feature occlusion according to claim 1, wherein, if a facial feature type is judged to be occluded, a sunglasses/hat occlusion type identification step E is further performed on the face image:
E1. determining a face skin region in said face image;
E2. if the region image corresponding to the eyes is not within the face skin region, judging that the face image has sunglasses/hat occlusion.
3. The method according to claim 1, wherein in step D1, a face skin initial region is extracted from the face image according to a preset range of HSV color values.
4. The method for detecting facial feature occlusion according to claim 1, wherein in step D2, abnormal HSV values among the HSV color values of the local skin areas are removed by the interquartile range method.
5. The method for detecting facial feature occlusion according to claim 1, wherein in step C2, the inclination angle of each suspected finger-seam line segment is calculated, [0°, 180°] is divided into n angle intervals, the number of suspected finger-seam line segments whose inclination angle falls into each interval is counted, the interval with the largest count is taken as the threshold interval, and only the suspected finger-seam line segments whose inclination angle lies in the threshold interval are retained.
6. The human face facial feature occlusion detection method of claim 1, characterized by:
in step C3, the first preset condition is: the sum of the lengths of the suspected finger seam line segments is greater than a preset length threshold; and/or
In step D3, the second preset condition is: the area of the human face skin area is smaller than a preset area threshold value.
7. The method for detecting facial feature occlusion according to claim 1, wherein in step B, specifically, the extracted eye, nose and mouth region images are binarized one by one according to a preset binarization threshold to obtain corresponding grayscale feature maps, and the grayscale feature maps of the eyes, nose and mouth are then matched against the standard region feature maps of the corresponding facial feature types;
the binarization Threshold is calculated according to the following formula:

Threshold = Sum / Amount * α

where Sum = Σ_{g=0}^{255} (g × N_g) is the total gray value of the region image, g is a gray value in the range [0, 255], N_g is the number of pixels with gray value g, Amount is the total number of pixel points of the region image, and α is a percentage constant in the range (1%, 100%].
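The formula above is α times the mean gray level of the region image, which can be computed directly; the value α = 0.8 below is illustrative only, as the patent leaves α as a tunable percentage constant.

```python
import numpy as np

def binarization_threshold(region: np.ndarray, alpha: float = 0.8) -> float:
    """Claim 7's adaptive threshold: Threshold = Sum / Amount * alpha,
    i.e. alpha times the mean gray value of the region image."""
    total = float(region.sum())   # Sum: total gray value
    amount = region.size          # Amount: total number of pixels
    return total / amount * alpha

def binarize(region: np.ndarray, alpha: float = 0.8) -> np.ndarray:
    """Binarize a grayscale region image with the adaptive threshold."""
    t = binarization_threshold(region, alpha)
    return (region >= t).astype(np.uint8) * 255
```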
8. The human face facial feature occlusion detection method according to any one of claims 1 to 7, further comprising a step F of outputting a corresponding correction prompt according to the detected occlusion type.
9. A computer-readable storage medium having stored thereon an executable computer program which, when executed, implements the human face facial feature occlusion detection method according to any one of claims 1 to 8.
10. A human face facial feature occlusion detection system, comprising a camera device and a background server in communication connection with the camera device, wherein the background server comprises a processor and the computer-readable storage medium according to claim 9, and the processor of the background server executes the computer program in the storage medium to implement the human face facial feature occlusion detection method according to any one of claims 1 to 8.
CN202310023150.3A 2023-01-09 2023-01-09 Face five sense organs shielding detection method, storage medium and system Active CN115909468B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310023150.3A CN115909468B (en) 2023-01-09 2023-01-09 Face five sense organs shielding detection method, storage medium and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310023150.3A CN115909468B (en) 2023-01-09 2023-01-09 Face five sense organs shielding detection method, storage medium and system

Publications (2)

Publication Number Publication Date
CN115909468A true CN115909468A (en) 2023-04-04
CN115909468B CN115909468B (en) 2023-06-06

Family

ID=86481987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310023150.3A Active CN115909468B (en) 2023-01-09 2023-01-09 Face five sense organs shielding detection method, storage medium and system

Country Status (1)

Country Link
CN (1) CN115909468B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104364733A (en) * 2012-06-01 2015-02-18 夏普株式会社 Position-of-interest detection device, position-of-interest detection method, and position-of-interest detection program
CN107016370A (en) * 2017-04-10 2017-08-04 电子科技大学 One kind is based on the enhanced partial occlusion face identification method of data
CN108319953A (en) * 2017-07-27 2018-07-24 腾讯科技(深圳)有限公司 Occlusion detection method and device, electronic equipment and the storage medium of target object
CN109711297A (en) * 2018-12-14 2019-05-03 深圳壹账通智能科技有限公司 Risk Identification Method, device, computer equipment and storage medium based on facial picture
CN111428581A (en) * 2020-03-05 2020-07-17 平安科技(深圳)有限公司 Face shielding detection method and system
CN112287823A (en) * 2020-10-28 2021-01-29 怀化学院 Facial mask identification method based on video monitoring
CN213958135U (en) * 2020-12-22 2021-08-13 广州传晟智能科技有限公司 Face recognition and temperature measurement access control device for mask wearers
CN113705466A (en) * 2021-08-30 2021-11-26 浙江中正智能科技有限公司 Human face facial feature occlusion detection method used for occlusion scene, especially under high-imitation occlusion
US20210406532A1 (en) * 2020-06-30 2021-12-30 Beijing Xiaomi Pinecone Electronics Co., Ltd. Method and apparatus for detecting finger occlusion image, and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
奚琰 等: "基于对比学习的细粒度遮挡人脸表情识别", 《计算机系统应用》 *
林宛杨 等: "基于机器视觉的人脸口罩佩戴检测装置设计", 《应用技术学报》 *

Also Published As

Publication number Publication date
CN115909468B (en) 2023-06-06

Similar Documents

Publication Publication Date Title
US10789465B2 (en) Feature extraction and matching for biometric authentication
CN112052781A (en) Feature extraction model training method, face recognition device, face recognition equipment and medium
WO2003081532A1 (en) Method and apparatus for the automatic detection of facial features
US8290277B2 (en) Method and apparatus for setting a lip region for lip reading
JP2007305030A (en) Red-eye processing method, device and program
CN110619300A (en) Correction method for simultaneous recognition of multiple faces
CN111898610B (en) Card unfilled corner detection method, device, computer equipment and storage medium
WO2017064838A1 (en) Facial detection device, facial detection system provided with same, and facial detection method
CN114067431A (en) Image processing method, image processing device, computer equipment and storage medium
CN110443184A (en) ID card information extracting method, device and computer storage medium
CN113239739A (en) Method and device for identifying wearing article
CN111325058A (en) Driving behavior detection method, device and system and storage medium
CN115909468B (en) Face five sense organs shielding detection method, storage medium and system
CN111163332A (en) Video pornography detection method, terminal and medium
CN112907206B (en) Business auditing method, device and equipment based on video object identification
CN116228644A (en) Image detection method, electronic device and storage medium
CN112418189B (en) Face recognition method, device and equipment for wearing mask and storage medium
CN113947795A (en) Mask wearing detection method, device, equipment and storage medium
KR101818955B1 (en) An apparatus for recognizing finger vein by using moving average filtering and virtual core point detection and the method thereof
US20230103555A1 (en) Information processing apparatus, information processing method, and program
NL2024816B1 (en) Detection method for detecting an occlusion of an eye region in an image
CN111832439B (en) Multi-face rapid identification method and processing terminal
CN114708592B (en) Seal security level judging method, device, equipment and computer readable storage medium
CN117115884A (en) Face missing detection method and device
Yahya-Zoubir et al. Edge and Texture Analysis for Face Spoofing Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant