CN112434562B - Mask wearing state detection method, mask wearing state detection equipment, electronic device and storage medium - Google Patents


Info

Publication number
CN112434562B
Authority
CN
China
Prior art keywords
mask
region
boundary
detection
wearing state
Prior art date
Legal status
Active
Application number
CN202011209638.8A
Other languages
Chinese (zh)
Other versions
CN112434562A (en)
Inventor
俞依杰
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202011209638.8A
Publication of CN112434562A
Application granted
Publication of CN112434562B


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a mask wearing state detection method, detection equipment, an electronic device and a storage medium. The detection method comprises: acquiring a face image to be detected and, when a mask is present in the face image, acquiring key points in the face image; obtaining a region of interest in the face image according to the key points and obtaining a detection mark of the mask; and judging the mask wearing state of the pedestrian corresponding to the face image according to the positional relationship between the region of interest and the detection mark of the mask. The application solves the problem that judging from mask texture features whether a pedestrian wears a mask in a standard manner keeps labor cost high, and improves the detection accuracy of the mask wearing state while reducing the detection cost.

Description

Mask wearing state detection method, mask wearing state detection equipment, electronic device and storage medium
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a method, an apparatus, an electronic device, and a storage medium for detecting a wearing state of a mask.
Background
In epidemic prevention and control, having pedestrians wear masks is crucial to reducing transmission. Staff therefore check mask wearing at places with heavy foot traffic such as residential communities, supermarkets and stations, and if a pedestrian wears the mask in a nonstandard way, for example with the mask covering only the mouth or hanging on the chin, the staff prompt the pedestrian to put it on again. However, this inspection method requires a great deal of labor and may miss cases when foot traffic is heavy.
In the related art, the mask wearing state of a pedestrian is judged by image recognition through a camera. Specifically, after a face is detected in the monitoring picture, texture features of a target area are extracted from the face region; if they include mask texture features, the user is considered to be wearing a mask. The area exhibiting mask texture features is then taken as mask coverage information, and whether the mask is worn in a standard manner is judged from the ratio of that area to the face area. However, this method requires a mask texture database to be built manually in advance as the comparison basis, which itself requires a great deal of labor.
At present, no effective solution has been proposed to the problem that, in the related art, judging from mask texture features whether a pedestrian wears a mask in a standard manner keeps labor cost high.
Disclosure of Invention
The embodiments of the application provide a mask wearing state detection method, detection equipment, an electronic device and a storage medium, so as to at least solve the problem in the related art that judging from mask texture features whether a pedestrian wears a mask in a standard manner keeps labor cost high.
In a first aspect, an embodiment of the present application provides a method for detecting a wearing state of a mask, including:
Acquiring a face image to be detected, and acquiring key points in the face image under the condition that a mask exists in the face image;
obtaining a region of interest in the face image according to the key points, and obtaining a detection mark of the mask, wherein the region of interest comprises at least one region among the facial features and/or the facial contour;
and judging the mask wearing state of the pedestrian corresponding to the face image according to the position relation between the region of interest and the detection mark of the mask.
In some embodiments, the region of interest includes a nose region, a mouth region, and a chin region, and the method of acquiring the region of interest includes:
acquiring eye region key points, nose region key points, mouth region key points and chin region key points from key points of the face image;
the nose region is obtained according to the eye region key point and the nose region key point, the mouth region is obtained according to the nose region key point and the mouth region key point, and the chin region is obtained according to the mouth region key point and the chin region key point.
In some embodiments, determining the mask wearing state of the pedestrian corresponding to the face image according to the positional relationship between the region of interest and the detection mark of the mask includes:
When the detection mark is positioned in the chin area, judging that the mask only covers the chin area, wherein the wearing state of the mask is nonstandard;
when the detection mark is positioned in the mouth area, judging that the mask covers the mouth area and the chin area, wherein the wearing state of the mask is nonstandard;
when the detection mark is located in the nose region, it is determined that the mask covers the nose region, the mouth region, and the chin region, and the mask wearing state is standard.
In some embodiments, after obtaining the detection mark of the mask, the method further comprises:
judging the mask wearing state of the pedestrian according to the position relation between the boundary line of the region of interest and the detection mark of the mask;
the method for acquiring the boundary line of the region of interest further comprises the following steps:
determining a first boundary from the eye region keypoints, determining a second boundary from the nose region keypoints, and determining a third boundary from the mouth region keypoints, wherein the second boundary and the third boundary are both parallel to the first boundary.
In some embodiments, obtaining the detection mark of the mask comprises:
Determining the central line of the face image according to the nose region key points;
and acquiring a pixel value of the central line, and determining a pixel mutation point as the detection mark according to the change condition of the pixel value.
In some of these embodiments, after determining the pixel mutation point, the method comprises:
and drawing, through the pixel mutation point, a line parallel to the first boundary to obtain a detection boundary serving as the detection mark of the mask.
In some embodiments, determining the mask wearing state of the pedestrian corresponding to the face image according to the positional relationship between the region of interest and the detection mark of the mask includes:
when the detection boundary is lower than the third boundary, judging that the mask only covers the chin area, wherein the wearing state of the mask is nonstandard;
when the detection boundary is located between the third boundary and the second boundary, judging that the mask only covers the mouth area and the chin area, wherein the wearing state of the mask is nonstandard;
and when the detection boundary is higher than the second boundary, judging that the mask covers the nose area, the mouth area and the chin area, wherein the wearing state of the mask is standard.
In some of these embodiments, determining that the mask covers only the mouth region and the chin region comprises:
and under the condition that the detection boundary is positioned between the third boundary and the second boundary and the distance between the detection boundary and the third boundary is larger than a first preset distance, judging that the mask only covers the mouth area and the chin area, wherein the first preset distance is obtained according to the distance between the third boundary and the second boundary and a first preset proportion.
In some of these embodiments, determining that the mask covers the nasal region, the mouth region, and the chin region comprises:
and under the condition that the detection boundary is higher than the second boundary and the distance between the detection boundary and the first boundary is smaller than a second preset distance, judging that the mask covers the nose area, the mouth area and the chin area, wherein the second preset distance is obtained according to the distance between the first boundary and the second boundary and a second preset proportion.
In some embodiments, after determining the mask wearing state of the pedestrian corresponding to the face image, the method further includes:
Acquiring a detection state;
under the condition that the detection state is epidemic prevention, if the wearing state of the mask is nonstandard, acquiring the identity information of the pedestrian and prohibiting the pedestrian from passing;
and under the condition that the detection state is identification, if the wearing state of the mask is nonstandard, determining whether to acquire the identity information of the pedestrian and whether to allow the pedestrian to pass according to a preset rule.
In some embodiments, before obtaining the detection mark of the mask, the method further comprises:
and carrying out image enhancement on the face image through gray level transformation.
In a second aspect, an embodiment of the present application provides a device for detecting the wearing state of a mask, where the device includes a determining module, an acquisition module, and a judging module:
the determining module is used for acquiring a face image to be detected, and acquiring key points in the face image under the condition that a mask exists in the face image;
the acquisition module is used for obtaining a region of interest in the face image according to the key points and obtaining a detection mark of the mask, wherein the region of interest comprises at least one region among the facial features and/or the facial contour;
The judging module is used for judging the mask wearing state of the pedestrian corresponding to the face image according to the position relation between the region of interest and the detection mark of the mask.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor implements the method for detecting a wearing state of a mask according to the first aspect when the processor executes the computer program.
In a fourth aspect, an embodiment of the present application provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the method for detecting a wearing state of a mask according to the first aspect described above.
Compared with the related art, the mask wearing state detection method provided by the embodiments of the application acquires a face image to be detected and, when a mask is present in the face image, acquires key points in the face image; a region of interest in the face image is obtained according to the key points, a detection mark of the mask is obtained, and the mask wearing state of the pedestrian corresponding to the face image is judged according to the positional relationship between the region of interest and the detection mark of the mask. This solves the problem that judging from mask texture features whether a pedestrian wears the mask in a standard manner keeps labor cost high, and improves the detection accuracy of the mask wearing state while reducing the detection cost.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below; other features, objects and advantages of the application will become apparent from the description and the drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a schematic view of an application environment of a method for detecting a wearing state of a mask according to an embodiment of the present application;
fig. 2 is a flowchart of a method for detecting a wearing state of a mask according to an embodiment of the present application;
FIG. 3 is a flow chart of a method of acquiring a region of interest according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a 68-point face key point according to an embodiment of the application;
FIG. 5 is a schematic diagram of boundary lines of a region of interest according to an embodiment of the application;
fig. 6 is a flowchart of a method for acquiring mask detection marks according to an embodiment of the present application;
fig. 7 is a schematic view of a mask detection mark according to an embodiment of the present application;
fig. 8 is a block diagram of a hardware configuration of a terminal of a method for detecting a mask wearing state according to an embodiment of the present application;
Fig. 9 is a block diagram of a structure of a mask wearing state detecting apparatus according to an embodiment of the present application.
Detailed Description
The present application will be described and illustrated with reference to the accompanying drawings and embodiments in order to make its objects, technical solutions and advantages more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided herein without inventive effort fall within the scope of the application. Moreover, it should be appreciated that while such a development effort might be complex and lengthy, it would nevertheless be a routine undertaking of design, fabrication, or manufacture for those of ordinary skill having the benefit of this disclosure, and should not be construed as limiting this disclosure.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly and implicitly understood by those of ordinary skill in the art that the described embodiments of the application can be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein should be given the ordinary meaning as understood by one of ordinary skill in the art to which this application belongs. The terms "a," "an," "the," and similar referents in the context of the application are not to be construed as limiting the quantity, but rather as singular or plural. The terms "comprising," "including," "having," and any variations thereof, are intended to cover a non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to only those steps or elements but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "connected," "coupled," and the like in connection with the present application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. The term "plurality" as used herein means greater than or equal to two. "and/or" describes an association relationship of an association object, meaning that there may be three relationships, e.g., "a and/or B" may mean: a exists alone, A and B exist together, and B exists alone. The terms "first," "second," "third," and the like, as used herein, are merely distinguishing between similar objects and not representing a particular ordering of objects.
The method for detecting the wearing state of a mask provided by the application can be applied to the application environment shown in fig. 1, which is a schematic diagram of the application environment of the method according to an embodiment of the application. The monitoring device 102 communicates with the processor 104 via a network. The monitoring device 102 acquires a face image of a pedestrian to be detected; when a mask is present in the face image, the processor 104 acquires key points in the face image, obtains a region of interest in the face image according to the key points, and obtains a detection mark of the mask; the processor 104 then judges the mask wearing state of the pedestrian according to the positional relationship between the region of interest and the detection mark of the mask. The monitoring device 102 may be a video camera or a still camera, and the processor 104 may be implemented by a server or a chip.
The embodiment provides a method for detecting a wearing state of a mask, fig. 2 is a flowchart of a method for detecting a wearing state of a mask according to an embodiment of the present application, and as shown in fig. 2, the method includes the following steps:
step S210, acquiring a face image to be detected, and acquiring key points in the face image under the condition that a mask exists in the face image.
In this embodiment, a monitoring device such as a video camera or a still camera acquires the face image, and a mask classification algorithm determines whether a mask is present in the face image. Specifically, a semantic segmentation algorithm can be adopted to identify the mask in the face image; for example, the deep-learning-based semantic segmentation algorithm Unet can obtain an accurate segmentation result with few training images while maintaining a high image processing speed, so it does not significantly affect the real-time performance of face recognition. All pixels belonging to the mask in the face image can be obtained through the semantic segmentation algorithm, so the mask region can be segmented accurately.
When a mask is present in the face image, it must be further judged whether the pedestrian wears the mask in a standard manner, so key points in the face image are extracted through a face key point extraction algorithm. The face key point extraction algorithm in this embodiment may be 21-point, 68-point or 81-point key point detection.
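As an illustrative sketch only (the patent does not prescribe a specific library), the 68-point extraction can be reproduced with the open-source dlib toolkit and its published 68-landmark model:

import dlib

# Sketch: 68-point face key point extraction with dlib. The detector
# choice and model file are dlib's published defaults, assumed here;
# the patent text itself does not fix an implementation.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_keypoints(image):
    # Return the 68 (x, y) key points of the first detected face, or None.
    faces = detector(image, 1)
    if not faces:
        return None
    shape = predictor(image, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(68)]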
Step S220, obtaining a region of interest in the face image according to the key points, and obtaining a detection mark of the mask, wherein the region of interest comprises at least one region among the facial features and/or the facial contour.
In general, the key points mark the eyebrow, eye, nose, mouth, chin, etc., so that the position of the region of interest can be obtained by dividing the face image according to the position of the key points in the face image.
Further, in this embodiment, the mask detection mark is a mark for determining the mask position, and after the mask in the face image is acquired, the mask region may be extracted, and the detection mark may be obtained from the mask region, for example, using the center point of the mask region as the detection mark or using part of the contour line of the mask region as the detection mark.
Step S230, judging the mask wearing state of the pedestrian corresponding to the face image according to the position relation between the region of interest and the detection mark of the mask.
In this embodiment, the positional relationship between the detection mark and the region of interest may be whether the detection mark is located within the region of interest, for example, in the case where the region of interest is a nose region, if the detection mark is located in the nose region, the mask wearing state is determined to be standard. Further, in different actual scenes, a plurality of interested areas can be set, and a plurality of mask wearing state grades are set according to the positions of the detection marks in the different interested areas.
Through steps S210 to S230, this embodiment obtains the region of interest and the detection mark of the mask in the face image and determines the mask wearing state of the pedestrian according to their relative positional relationship, without a large amount of mask texture data having to be entered in advance. This solves the problem that judging from mask texture features whether a pedestrian wears the mask in a standard manner keeps labor cost high, and judging from the positional relationship improves the detection accuracy of the mask wearing state while reducing the detection cost.
In some embodiments, before determining the mask existence state of the face image, a face detection algorithm based on deep learning may be used to obtain the face target in the face image, for example an algorithm based on RetinaNet and TinyYolo. RetinaNet is a general-purpose target detection algorithm that addresses the severe imbalance between positive and negative samples; the face target is obtained by detecting the left eye, right eye, nose tip and mouth corners in the image. In this embodiment, a RetinaNet detection head is used to perform face detection. TinyYolo is a further lightweight design based on Yolo, itself a general-purpose target detection algorithm. So that the method of the application can run on chip devices with small memory, in this embodiment the backbone network is replaced with the lightweight TinyYolo backbone, the residual structure in the darknet network used by Yolo is removed, and the output features at down-sampling rates of 8, 16 and 32 are used as the inputs of the RetinaNet detection head.
In some of these embodiments, the region of interest includes a nose region, a mouth region, and a chin region, and fig. 3 is a flowchart of a method of acquiring a region of interest according to an embodiment of the present application, as shown in fig. 3, the method including the steps of:
step S310, acquiring eye region key points, nose region key points, mouth region key points and chin region key points from key points of a face image;
step S320, acquiring a nose region according to the eye region key points and the nose region key points, acquiring a mouth region according to the nose region key points and the mouth region key points, and acquiring a chin region according to the mouth region key points and the chin region key points.
Face key point extraction refers to locating the key regions of a face, including the eyebrows, eyes, nose, mouth and facial contour, given a face image. In this embodiment, a 68-point face key point extraction algorithm is used; fig. 4 is a schematic diagram of the 68 face key points according to an embodiment of the application, in which numbers 0 to 67 denote the different key points. The algorithm uses the nonlinear mapping of a neural network to learn the mapping from the face image to the key points, and the resulting feature points carry fixed sequence numbers, so the required region of interest of the face can easily be obtained through those numbers. For example, numbers 30 to 35 always represent the position of the nose and can serve as nose region key points; numbers 36 to 45 represent the positions of the eyes and can serve as eye region key points; numbers 61 to 63 and 65 to 67 always represent the center of the lips and can serve as mouth region key points; and numbers 5 to 11 always represent the position of the chin and can serve as chin region key points. The regions of interest can therefore be extracted through the key points.
Further, in this embodiment, the region formed between the eye region keypoint and the nose region keypoint is taken as a nose region, the region formed between the nose region keypoint and the mouth region keypoint is taken as a mouth region, and the region formed between the mouth region keypoint and the chin region keypoint is taken as a chin region.
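A minimal sketch of this region construction, assuming the 68-point numbering cited above; representing each region as a vertical band of image rows and taking the lowest point of each key point group are illustrative assumptions, not requirements of the patent:

# Sketch: build nose/mouth/chin regions as vertical bands between key
# point groups, per the 68-point numbering given in the text. Image rows
# grow downward.
EYE_IDX = range(36, 46)                # eye region key points 36-45
NOSE_IDX = range(30, 36)               # nose region key points 30-35
MOUTH_IDX = (61, 62, 63, 65, 66, 67)   # lip-center key points
CHIN_IDX = range(5, 12)                # chin key points 5-11

def region_bands(keypoints):
    # keypoints: list of 68 (x, y) tuples -> {region: (top_y, bottom_y)}
    eye_y = max(keypoints[i][1] for i in EYE_IDX)
    nose_y = max(keypoints[i][1] for i in NOSE_IDX)
    mouth_y = max(keypoints[i][1] for i in MOUTH_IDX)
    chin_y = max(keypoints[i][1] for i in CHIN_IDX)
    return {
        "nose": (eye_y, nose_y),       # between eye and nose key points
        "mouth": (nose_y, mouth_y),    # between nose and mouth key points
        "chin": (mouth_y, chin_y),     # between mouth and chin key points
    }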
Through steps S310 and S320, extracting the key points of the face image with the 68-point face key point extraction algorithm improves the accuracy of key point extraction and hence of the regions of interest.
In some embodiments, the method for determining the mask wearing state of the pedestrian is specifically as follows: when the detection mark is located in the chin region, it is judged that the mask covers only the chin region, and the wearing state is nonstandard; when the detection mark is located in the mouth region, it is judged that the mask covers the mouth region and the chin region, and the wearing state is nonstandard; when the detection mark is located in the nose region, it is judged that the mask covers the nose region, the mouth region and the chin region, and the wearing state is standard. In this embodiment, the regions of interest are the chin region, the mouth region and the nose region, and the wearing state is classified, according to which region of interest the detection mark falls in, as covering only the chin region, covering the mouth region and the chin region, or covering the nose region, the mouth region and the chin region. In an actual scene, penalty measures can be set according to the specific detected wearing state so as to remind pedestrians.
In some embodiments, the mask wearing state of the pedestrian may be determined according to the positional relationship between the boundary lines of the regions of interest and the detection mark of the mask. The boundary lines are obtained as follows: a first boundary is determined from the eye region key points, a second boundary from the nose region key points, and a third boundary from the mouth region key points, where the second and third boundaries are both parallel to the first boundary. In general there are four mask wearing cases: 1. the mask is not worn; 2. the mask covers the nose, mouth and chin, which is considered standard wearing; 3. the mask covers only the mouth; 4. the mask covers only the chin. The nose region, mouth region and chin region are therefore the three important regions, so the regions of interest in this embodiment are set to these three regions, and their boundaries can be extracted through the key points.
Fig. 5 is a schematic diagram of the boundary lines of the regions of interest according to an embodiment of the application; as shown in fig. 5, the 68-point face key point extraction algorithm is still adopted in this embodiment. Specifically, the first boundary can be obtained by selecting two of the four eye region key points 36, 39, 42 and 45; alternatively, the eye region key point closest to the forehead can be selected and the line through it parallel to the edge of the monitoring image taken as the first boundary; alternatively, a reference line can be determined from symmetric points on the two sides of the mask, and the line through one eye region key point parallel to that reference line taken as the first boundary.
After the first boundary is obtained, the line through nose region key point 33 parallel to the first boundary is taken as the second boundary; finally, the point closest to the chin is selected from the three mouth region key points 67, 66 and 65, and the line through that point parallel to the first boundary gives the third boundary. The key points of each region of interest in this embodiment may also be adjusted according to the actual scene; for example, points 67, 66 and 65 may be replaced by points 58, 57 and 56.
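A sketch of this boundary computation under the simplest of the alternatives above, treating each boundary as a horizontal image row (constant y); this constant-row treatment is an assumption consistent with "parallel to the edge of the monitoring image":

# Sketch: first, second and third boundaries as horizontal rows.
# Rows grow downward, so "closest to the forehead" is the minimum y.
def boundaries(keypoints):
    bound1 = min(keypoints[i][1] for i in (36, 39, 42, 45))  # eye point nearest the forehead
    bound2 = keypoints[33][1]                                # through nose key point 33
    bound3 = max(keypoints[i][1] for i in (65, 66, 67))      # mouth point nearest the chin
    return bound1, bound2, bound3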
In this embodiment, by comparing the detection mark of the mask with the boundary line of the region of interest, the positional relationship between the detection mark and the region of interest can be obtained more accurately, which is beneficial to improving the detection accuracy of the wearing state of the mask.
In some embodiments, considering that the edge of the mask is not flat, the edge points of the mask may cause misjudgment when the detection mark is compared with the region of interest, so the detection mark of the mask needs to be determined differently. Fig. 6 is a flowchart of a method for obtaining the detection mark of the mask according to an embodiment of the present application; as shown in fig. 6, the method includes the following steps:
step S610, the central line of the face image is determined according to the key points of the nose area.
In this embodiment, based on the 68-point face key point extraction algorithm, the line through nose region key points 27, 28, 29 and 30 is used as the center line of the face image.
Step S620, the pixel values along the center line are acquired, and a pixel mutation point is determined as the detection mark according to the variation of the pixel values.
In the face image, the pixel values of the mask differ markedly from those of the face, so a point on the center line where the pixel value changes abruptly can be used as the detection mark; the criterion for an abrupt change can be that the difference between the pixel values of adjacent pixels exceeds a certain threshold.
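A sketch of this scan, assuming a NumPy grayscale image; the top-down scan direction (so the first abrupt change found is the mask edge nearest the forehead) and the threshold value of 40 are assumptions:

import numpy as np  # gray is assumed to be a 2-D uint8 NumPy array

# Sketch: walk the center line downward and return the first pixel
# mutation point, i.e. where adjacent pixel values differ by more than
# a threshold.
def detect_mark(gray, center_x, top_y, bottom_y, threshold=40):
    for y in range(top_y, bottom_y):
        if abs(int(gray[y + 1, center_x]) - int(gray[y, center_x])) > threshold:
            return (center_x, y + 1)  # detection mark of the mask
    return None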
Fig. 7 is a schematic diagram of the mask detection mark according to an embodiment of the present application. As shown in fig. 7, the dotted line is the center line determined from the nose region key points, the part covered by the mask is drawn in solid lines, point O is the obtained detection mark, and points A and B are edge points of the mask; judging the wearing state from point O is clearly more accurate than judging from points A and B.
Further, the intersection points between the contour line of the mask region obtained by semantic segmentation and the center line can be computed; if there are several intersection points, the one closer to the forehead is used as the detection mark, as sketched below.
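A sketch of that alternative, assuming the semantic segmentation step yields a binary mask image aligned with the face image:

import numpy as np

# Sketch: intersect the segmented mask region with the center column and
# keep the intersection nearest the forehead (the smallest row index).
def mark_from_segmentation(mask_binary, center_x):
    rows = np.flatnonzero(mask_binary[:, center_x])
    if rows.size == 0:
        return None
    return (center_x, int(rows[0]))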
Through steps S610 and S620, the detection mark of the mask is determined from the pixel variation along the center line, so the position of the mask in the face image can be located more accurately, further improving the detection accuracy of the wearing state.
In some embodiments, after the pixel mutation point is determined, the line through the pixel mutation point parallel to the first boundary may be drawn to obtain a detection boundary serving as the detection mark of the mask. In the face image, the detection boundary is more conspicuous and easier to identify than the pixel mutation point, so detecting the wearing state from the detection boundary effectively improves both detection speed and detection accuracy.
In some embodiments, after the boundaries of the regions of interest and the detection boundary are obtained, the mask wearing state of the pedestrian is determined as follows: when the detection boundary is lower than the third boundary, the detection boundary of the mask is below the mouth, so it is judged that the mask covers only the chin region and the wearing state is nonstandard; when the detection boundary lies between the third boundary and the second boundary, the detection boundary of the mask lies between the nose and the mouth and covers the mouth, so it is judged that the mask covers only the mouth region and the chin region and the wearing state is nonstandard; when the detection boundary is higher than the second boundary, the mask is considered to fully cover the nose, mouth and chin, so the wearing state is judged to be standard. In this embodiment, the detection boundary is compared with the first, second and third boundaries; the position of the mask on the face can be obtained more clearly through these lines in the face image, which effectively improves the detection speed and accuracy.
In some of these embodiments, it may also be determined that the mask covers only the mouth region and the chin region as follows: when the detection boundary lies between the third boundary and the second boundary and the distance between the detection boundary and the third boundary is larger than a first preset distance, it is judged that the mask covers only the mouth region and the chin region, where the first preset distance is obtained from the distance between the third boundary and the second boundary and a first preset proportion; the first preset proportion can be set flexibly according to the actual scene, for example to 2/3. Further, it can be determined that the mask covers the nose region, the mouth region and the chin region as follows: when the detection boundary is higher than the second boundary and the distance between the detection boundary and the first boundary is smaller than a second preset distance, it is judged that the mask covers the nose region, the mouth region and the chin region and the wearing state is standard, where the second preset distance is obtained from the distance between the first boundary and the second boundary and a second preset proportion; the second preset proportion can likewise be set flexibly according to the actual scene, for example to 1/3. Specifically, the judgment can be made according to the following formulas 1 to 3:
UpBound < Bound3 (formula 1)
Bound3 ≤ UpBound < Bound2 and Distance_Up3 > first preset proportion × Distance_23 (formula 2)
UpBound ≥ Bound2 and Distance_Up1 < second preset proportion × Distance_12 (formula 3)
In formulas 1 to 3, boundary values increase toward the forehead; UpBound is the detection boundary of the mask, Bound1 denotes the first boundary, Bound2 the second boundary and Bound3 the third boundary; Distance_Up3 is the distance between the detection boundary and the third boundary, Distance_23 the distance between the second boundary and the third boundary, Distance_Up1 the distance between the detection boundary and the first boundary, and Distance_12 the distance between the first boundary and the second boundary. The mask is judged to cover only the chin region when the boundaries satisfy formula 1, to cover only the mouth region and the chin region when they satisfy formula 2, and to cover the nose region, the mouth region and the chin region when they satisfy formula 3.
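A sketch of this three-way judgment in image-row coordinates (y grows downward), which inverts the printed inequalities since the formulas treat larger values as nearer the forehead; the proportions 2/3 and 1/3 are the example values given above:

# Sketch of formulas 1-3 in row coordinates, where bound1 < bound2 <
# bound3 and a larger row lies nearer the chin.
def wearing_state(up_bound, bound1, bound2, bound3, r1=2/3, r2=1/3):
    if up_bound > bound3:                               # formula 1: below the mouth line
        return "covers chin only: nonstandard"
    if bound2 < up_bound <= bound3:                     # formula 2: between nose and mouth lines
        if bound3 - up_bound > r1 * (bound3 - bound2):
            return "covers mouth and chin only: nonstandard"
    if up_bound <= bound2:                              # formula 3: above the nose line
        if up_bound - bound1 < r2 * (bound2 - bound1):
            return "covers nose, mouth and chin: standard"
    return "indeterminate"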
In this embodiment, when the positional relationship between the detection boundary and the region of interest is determined, distance judgments between the detection boundary and the first, second and third boundaries are introduced, which effectively avoids detection errors caused by key point recognition errors and improves the detection accuracy.
In some embodiments, detection states of different levels may be set so that the measures taken for pedestrians in different mask wearing states can be adjusted flexibly. Specifically, after the mask wearing state of the pedestrian corresponding to the face image is determined, the detection state is acquired; the detection states may be set, according to the actual scene, to a first level, a second level and a third level, representing detection levels of different severity.
When the detection state is epidemic prevention, if the wearing state of the mask is nonstandard, the identity information of the pedestrian is acquired and the pedestrian is prohibited from passing. Nonstandard here includes not wearing the mask, the mask covering only the chin, and the mask covering only the mouth and chin. The identity information of the pedestrian includes a name, an employee number and the like; it can be obtained by performing face recognition on the pedestrian and comparing against a pre-stored face library. Specifically, the face features in the face image to be detected are matched with the face features pre-stored in the face library; when matching succeeds, the mask wearing state of the pedestrian in the face image is associated with the identity information of the pedestrian and uploaded to the back end for statistics.
When the detection state is identification, if the wearing state of the mask is nonstandard, whether to acquire the identity information of the pedestrian and whether to allow the pedestrian to pass are determined according to preset rules. In the identification scene, the mask wearing requirements on pedestrians are lower, and whether identity information is acquired and recorded and whether the pedestrian is allowed to pass can be set flexibly according to the actual scene. For example, it may be set that when the pedestrian does not wear a mask, or the mask covers only the chin, the identity information of the pedestrian is acquired but the pedestrian is not prohibited from passing, as sketched below.
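A compact sketch of this per-state handling; the specific rule used in the identification branch is the example just given, not a rule fixed by the patent:

# Sketch: decide what to do with a nonstandard wearing result under the
# two detection states. The identification-state rule is an example.
def handle_nonstandard(detection_state, coverage):
    # Returns (record_identity, allow_pass).
    if detection_state == "epidemic_prevention":
        return True, False                    # record identity, prohibit passage
    if detection_state == "identification":
        if coverage in ("none", "chin_only"):
            return True, True                 # record identity, still allow passage
        return False, True
    raise ValueError("unknown detection state")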
Through the method in this embodiment, the detection state of each security checkpoint can be set separately, improving the effect and efficiency of security inspection without affecting the passage efficiency between checkpoints.
In some embodiments, before the detection mark of the mask is obtained, the face image is preprocessed to further enlarge the difference between the features of the mask coverage area and the face features, for example by image enhancement through gray-level transformation; in this embodiment, histogram equalization is used to enhance the image. Histogram equalization is a common gray-level transformation method; in essence it stretches the image nonlinearly and redistributes its pixel values so that their numbers are approximately equal within each gray-level range, thereby achieving image enhancement. Typically, the image is mapped using the cumulative distribution function, so that the processed pixels are distributed uniformly over the gray-level ranges.
The monotonically increasing property of the cumulative distribution function and its value range of 0 to 1 ensure that the original ordering of pixel values is preserved no matter how the pixels are mapped, while the mapped values remain between 0 and 255 without crossing the boundary, as shown in formula 4:
s_k = Σ_{j=0}^{k} n_j / n (formula 4)
In formula 4, s_k denotes the cumulative probability of gray level k in the image, n is the total number of pixels in the image, n_j is the number of pixels at gray level j, L is the total number of possible gray levels in the image, and L-1 is the gray-scale range. The cumulative probability of each pixel in the face image is obtained and then multiplied by the gray-scale range to give the mapped gray value of each pixel.
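A sketch of the mapping in formula 4 for an 8-bit image (OpenCV's cv2.equalizeHist performs an equivalent operation):

import numpy as np

# Sketch of formula 4: cumulative probability per gray level, scaled by
# the gray range L-1 (255 for an 8-bit image), used as a lookup table.
def equalize(gray):
    hist = np.bincount(gray.ravel(), minlength=256)  # n_j for each gray level j
    cdf = np.cumsum(hist) / gray.size                # s_k, cumulative probability
    lut = np.round(cdf * 255).astype(np.uint8)       # multiply by gray range
    return lut[gray]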
It should be noted that in the related art whether the mask wearing state is standard is judged by the ratio of the mask area to the face area. However, when the face pose is large, for example when the face is at 90 degrees to the monitoring device, or at even more extreme angles, the texture features of the key point regions differ greatly and the detection accuracy drops significantly. In this embodiment the detection mark of the mask is compared with the region of interest; because the longitudinal proportions of the face image change little, the mask wearing state of the pedestrian can be judged accurately even under a large face pose, which effectively enlarges the application range of the detection method and makes it convenient for crowds with different facial characteristics. Furthermore, no mask texture template library needs to be prepared in advance: the mask contour is obtained through the semantic segmentation technique, there is no complex preset condition, recognition errors are not caused by differences between faces, and the method can adapt to more crowds and scenes.
It will be further appreciated that the steps shown in the flowcharts above may be executed in a computer system as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one given here.
The method embodiment provided by the application can be executed in a terminal, a computer or similar computing device. Taking the operation on the terminal as an example, fig. 8 is a block diagram of the hardware structure of the terminal according to the method for detecting the wearing state of the mask according to the embodiment of the present application. As shown in fig. 8, the terminal 80 may include one or more processors 802 (only one is shown in fig. 8) (the processor 802 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 804 for storing data, and optionally, a transmission device 806 for communication functions and an input-output device 808. It will be appreciated by those skilled in the art that the structure shown in fig. 8 is merely illustrative and is not intended to limit the structure of the terminal. For example, terminal 80 may also include more or fewer components than shown in fig. 8, or have a different configuration than shown in fig. 8.
The memory 804 may be used to store control programs, such as software programs of application software and modules, such as control programs corresponding to the method for detecting a wearing state of a mask in the embodiment of the present application, and the processor 802 executes the control programs stored in the memory 804, thereby executing various functional applications and data processing, that is, implementing the method described above. The memory 804 may include high-speed random access memory, but may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid state memory. In some examples, the memory 804 may further include memory remotely located relative to the processor 802, which may be connected to the terminal 80 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 806 is used to receive or transmit data via a network. The specific examples of the network described above may include a wireless network provided by a communication provider of the terminal 80. In one example, the transmission device 806 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 806 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
The embodiment also provides a device for detecting the wearing state of a mask, which is used to implement the above embodiments and preferred implementations; what has already been described will not be repeated. As used below, the terms "module," "unit," "sub-unit," and the like may refer to a combination of software and/or hardware that implements a predetermined function. Although the means described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 9 is a block diagram of a structure of a mask wearing state detection apparatus according to an embodiment of the present application, and as shown in fig. 9, the apparatus includes a determination module 91, an acquisition module 92, and a judgment module 93:
a determining module 91, configured to obtain a face image to be detected, and obtain key points in the face image when a mask exists in the face image;
the obtaining module 92 is configured to obtain a region of interest in the face image according to the key points, and obtain a detection mark of the mask, where the region of interest includes at least one region in facial features and/or facial contours;
the judging module 93 is configured to judge a mask wearing state of the pedestrian corresponding to the face image according to a positional relationship between the region of interest and the detection mark of the mask.
In this embodiment, the acquisition module 92 obtains the region of interest and the detection mark of the mask in the face image, and the judgment module 93 judges the mask wearing state of the pedestrian according to the relative positional relationship between the detection mark and the region of interest, without a large amount of mask texture data having to be entered in advance. This solves the problem that judging from mask texture features whether a pedestrian wears the mask in a standard manner keeps labor cost high, and judging from the positional relationship improves the detection accuracy of the mask wearing state while reducing the detection cost.
The above-described respective modules may be functional modules or program modules, and may be implemented by software or hardware. For modules implemented in hardware, the various modules described above may be located in the same processor; or the above modules may be located in different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s1, acquiring a face image to be detected, and acquiring key points in the face image under the condition that a mask exists in the face image.
S2, obtaining an interested region in the face image according to the key points, and obtaining a detection mark of the mask, wherein the interested region comprises at least one region in facial five sense organs and/or facial contours.
S3, judging the mask wearing state of the pedestrian corresponding to the face image according to the position relation between the region of interest and the detection mark of the mask.
It should be noted that, specific examples in this embodiment may refer to examples described in the foregoing embodiments and alternative implementations, and this embodiment is not repeated herein.
In addition, in combination with the method for detecting the wearing state of the mask in the above embodiment, the embodiment of the application may be implemented by providing a storage medium. The storage medium has a computer program stored thereon; the computer program, when executed by the processor, implements the method for detecting the wearing state of any one of the masks in the above embodiments.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination contains no contradiction, it should be considered within the scope of this description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (13)

1. A method for detecting the wearing state of a mask, characterized by comprising the following steps:
acquiring a face image to be detected, and acquiring key points in the face image under the condition that a mask exists in the face image;
obtaining a region of interest in the face image according to the key points, wherein the region of interest comprises at least one region among the facial features and/or the facial contour;
Obtaining a detection mark of the mask comprises the following steps: determining the central line of the face image according to the nose region key points in the face image; acquiring a pixel value of the central line, and determining a pixel mutation point as a detection mark of the mask according to the change condition of the pixel value;
and judging the mask wearing state of the pedestrian corresponding to the face image according to the position relation between the region of interest and the detection mark of the mask.
2. The method for detecting a wearing state of a mask according to claim 1, wherein the region of interest includes a nose region, a mouth region, and a chin region, and the method for acquiring the region of interest includes:
acquiring eye region key points, nose region key points, mouth region key points and chin region key points from key points of the face image;
the nose region is obtained according to the eye region key point and the nose region key point, the mouth region is obtained according to the nose region key point and the mouth region key point, and the chin region is obtained according to the mouth region key point and the chin region key point.
3. The method according to claim 2, wherein determining the mask wearing state of the pedestrian corresponding to the face image based on the positional relationship between the region of interest and the detection flag of the mask includes:
When the detection mark is positioned in the chin area, judging that the mask only covers the chin area, wherein the wearing state of the mask is nonstandard;
when the detection mark is positioned in the mouth area, judging that the mask covers the mouth area and the chin area, wherein the wearing state of the mask is nonstandard;
when the detection mark is located in the nose region, it is determined that the mask covers the nose region, the mouth region, and the chin region, and the mask wearing state is standard.
4. The method for detecting a mask wearing state according to claim 2, wherein after acquiring the detection mark of the mask, the method further comprises:
judging the mask wearing state of the pedestrian according to the positional relationship between the boundary lines of the region of interest and the detection mark of the mask;
wherein acquiring the boundary lines of the region of interest comprises:
determining a first boundary according to the eye-region key points, determining a second boundary according to the nose-region key points, and determining a third boundary according to the mouth-region key points, wherein the second boundary and the third boundary are both parallel to the first boundary.
5. The method for detecting a mask wearing state according to claim 4, wherein after determining the pixel mutation point, the method comprises:
drawing a line parallel to the first boundary through the pixel mutation point to obtain a detection boundary serving as the detection mark of the mask.
6. The method according to claim 5, wherein judging the mask wearing state of the pedestrian corresponding to the face image according to the positional relationship between the region of interest and the detection mark of the mask comprises:
when the detection boundary is lower than the third boundary, judging that the mask covers only the chin region, the mask wearing state being non-standard;
when the detection boundary is located between the third boundary and the second boundary, judging that the mask covers only the mouth region and the chin region, the mask wearing state being non-standard;
when the detection boundary is higher than the second boundary, judging that the mask covers the nose region, the mouth region and the chin region, the mask wearing state being standard.
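In image coordinates, where row indices grow downward, the three-way decision of claims 3 and 6 reduces to comparing the detection boundary's row against the second and third boundaries. A minimal sketch under that convention; the return labels are illustrative:

```python
def judge_wearing_state(detect_y, second_y, third_y):
    """Classify mask wearing from the detection boundary row.
    Rows grow downward, so 'higher on the face' means a smaller y.
    second_y: boundary from nose key points; third_y: from mouth key points.
    """
    if detect_y > third_y:   # below the third boundary: chin only
        return "non-standard: covers chin only"
    if detect_y > second_y:  # between third and second boundary
        return "non-standard: covers mouth and chin only"
    return "standard: covers nose, mouth and chin"
```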
7. The method for detecting a mask wearing state according to claim 6, wherein judging that the mask covers only the mouth region and the chin region comprises:
judging that the mask covers only the mouth region and the chin region when the detection boundary is located between the third boundary and the second boundary and the distance between the detection boundary and the third boundary is greater than a first preset distance, wherein the first preset distance is obtained from the distance between the third boundary and the second boundary and a first preset proportion.
8. The method for detecting a mask wearing state according to claim 6, wherein judging that the mask covers the nose region, the mouth region and the chin region comprises:
judging that the mask covers the nose region, the mouth region and the chin region when the detection boundary is higher than the second boundary and the distance between the detection boundary and the first boundary is smaller than a second preset distance, wherein the second preset distance is obtained from the distance between the first boundary and the second boundary and a second preset proportion.
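Claims 7 and 8 tighten the two borderline cases with distances derived from preset proportions of the inter-boundary gaps. A sketch of those guards, again with rows growing downward; the proportion values are illustrative assumptions, not values taken from the patent:

```python
def refined_judgement(detect_y, first_y, second_y, third_y,
                      prop1=0.2, prop2=0.5):
    """Apply the proportional-distance guards of claims 7 and 8.
    prop1/prop2 are illustrative preset proportions."""
    first_preset = prop1 * (third_y - second_y)   # claim 7 threshold
    second_preset = prop2 * (second_y - first_y)  # claim 8 threshold

    # Claim 7: clearly above the third boundary within the mouth band.
    if second_y < detect_y < third_y and (third_y - detect_y) > first_preset:
        return "non-standard: covers mouth and chin only"
    # Claim 8: above the second boundary and close enough to the first.
    if detect_y < second_y and (detect_y - first_y) < second_preset:
        return "standard: covers nose, mouth and chin"
    return "indeterminate"  # outside both guarded cases
```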
9. The method according to claim 1, wherein after judging the mask wearing state of the pedestrian corresponding to the face image, the method further comprises:
acquiring a detection state;
when the detection state is epidemic prevention, if the mask wearing state is non-standard, acquiring the identity information of the pedestrian and prohibiting the pedestrian from passing;
when the detection state is identification, if the mask wearing state is non-standard, determining, according to a preset rule, whether to acquire the identity information of the pedestrian and whether to allow the pedestrian to pass.
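Claim 9 gates the downstream action on a detection state. A minimal sketch of that gating logic; the state names and the callables standing in for the identity lookup and the preset rule are illustrative assumptions:

```python
def handle_pedestrian(state, wearing_standard, get_identity, rule_allows):
    """state: 'epidemic_prevention' or 'identification' (assumed names).
    get_identity: callable fetching the pedestrian's identity information.
    rule_allows: callable implementing the preset rule of claim 9.
    Returns (identity_or_None, allowed_to_pass)."""
    if wearing_standard:
        return None, True
    if state == "epidemic_prevention":
        return get_identity(), False  # record identity, deny passage
    # Identification state: the preset rule decides both questions.
    if rule_allows():
        return get_identity(), True
    return None, False
```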
10. The method for detecting a mask wearing state according to claim 1, wherein before acquiring the detection mark of the mask, the method further comprises:
performing image enhancement on the face image through gray-level transformation.
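Claim 10 does not fix a particular gray-level transformation; histogram equalization is one common choice and is used here purely as an illustrative stand-in:

```python
import cv2

def enhance_face(bgr_image):
    """Convert to grayscale and apply histogram equalization, one common
    gray-level transformation, to sharpen the skin/mask pixel-value
    contrast before the center-line scan."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)
```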
11. A device for detecting a mask wearing state, characterized in that the device comprises a determination module, an acquisition module and a judgement module:
the determination module is configured to acquire a face image to be detected, and to acquire key points in the face image when a mask is present in the face image;
the acquisition module is configured to obtain a region of interest in the face image according to the key points, the region of interest comprising at least one region among the facial features and/or the facial contour; to determine a center line of the face image according to nose-region key points in the face image; and to acquire pixel values along the center line and determine a pixel mutation point as the detection mark of the mask according to the variation of the pixel values;
the judgement module is configured to judge the mask wearing state of the pedestrian corresponding to the face image according to the positional relationship between the region of interest and the detection mark of the mask.
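Claim 11's split into determination, acquisition and judgement responsibilities maps onto three collaborating components. A structural sketch only, with all names assumed and the bodies left to the method steps of claims 1 to 10:

```python
class MaskWearingDetector:
    """Structural sketch of claim 11: the determination, acquisition and
    judgement modules as three methods of one device object."""

    def determine(self, image):
        # Determination module: detect a mask and extract face key points.
        ...

    def acquire(self, image, keypoints):
        # Acquisition module: build the regions of interest and scan the
        # center line for the pixel mutation point (detection mark).
        ...

    def judge(self, regions, mark):
        # Judgement module: positional relationship -> wearing state.
        ...
```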
12. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is arranged to run the computer program to perform the method for detecting a mask wearing state according to any one of claims 1 to 10.
13. A storage medium having a computer program stored therein, wherein the computer program is configured, when run, to perform the method for detecting a mask wearing state according to any one of claims 1 to 10.
CN202011209638.8A 2020-11-03 2020-11-03 Mask wearing state detection method, mask wearing state detection equipment, electronic device and storage medium Active CN112434562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011209638.8A CN112434562B (en) 2020-11-03 2020-11-03 Mask wearing state detection method, mask wearing state detection equipment, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112434562A CN112434562A (en) 2021-03-02
CN112434562B (en) 2023-08-25

Family

ID=74695262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011209638.8A Active CN112434562B (en) 2020-11-03 2020-11-03 Mask wearing state detection method, mask wearing state detection equipment, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112434562B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818953A (en) * 2021-03-12 2021-05-18 苏州科达科技股份有限公司 Mask wearing state identification method, device, equipment and readable storage medium
CN113420675A (en) * 2021-06-25 2021-09-21 浙江大华技术股份有限公司 Method and device for detecting mask wearing standardization
CN113723214B (en) * 2021-08-06 2023-10-13 武汉光庭信息技术股份有限公司 Face key point labeling method, system, electronic equipment and storage medium
CN114255517B (en) * 2022-03-02 2022-05-20 中运科技股份有限公司 Scenic spot tourist behavior monitoring system and method based on artificial intelligence analysis
CN115457625A (en) * 2022-08-22 2022-12-09 慧之安信息技术股份有限公司 Mask wearing condition detection method based on edge calculation
WO2024050760A1 (en) * 2022-09-08 2024-03-14 Intel Corporation Image processing with face mask detection

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004310397A (en) * 2003-04-07 2004-11-04 Toyota Central Res & Dev Lab Inc Device for determining wearing of mask
WO2017054605A1 (en) * 2015-09-29 2017-04-06 腾讯科技(深圳)有限公司 Picture processing method and device
CN106709954A (en) * 2016-12-27 2017-05-24 上海唱风信息科技有限公司 Method for masking human face in projection region
CN109101923A (en) * 2018-08-14 2018-12-28 罗普特(厦门)科技集团有限公司 A kind of personnel wear the detection method and device of mask situation
WO2019019828A1 (en) * 2017-07-27 2019-01-31 腾讯科技(深圳)有限公司 Target object occlusion detection method and apparatus, electronic device and storage medium
CN111428559A (en) * 2020-02-19 2020-07-17 北京三快在线科技有限公司 Method and device for detecting wearing condition of mask, electronic equipment and storage medium
CN111523473A (en) * 2020-04-23 2020-08-11 北京百度网讯科技有限公司 Mask wearing identification method, device, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant