CN112434562A - Method and device for detecting wearing state of mask, electronic device and storage medium


Info

Publication number
CN112434562A
CN112434562A
Authority
CN
China
Prior art keywords
mask
boundary
region
wearing state
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011209638.8A
Other languages
Chinese (zh)
Other versions
CN112434562B (en)
Inventor
俞依杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202011209638.8A priority Critical patent/CN112434562B/en
Publication of CN112434562A publication Critical patent/CN112434562A/en
Application granted granted Critical
Publication of CN112434562B publication Critical patent/CN112434562B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a method, an apparatus, an electronic device, and a storage medium for detecting the wearing state of a mask. The method for detecting the wearing state of a mask comprises the following steps: acquiring a face image to be detected, and acquiring key points in the face image under the condition that a mask exists in the face image; obtaining a region of interest in the face image according to the key points, obtaining a detection mark of the mask, and judging the mask wearing state of the pedestrian corresponding to the face image according to the positional relationship between the region of interest and the detection mark of the mask. The present application solves the problem in the related art that judging whether a pedestrian wears a mask properly based on mask texture features still incurs high labor cost, and improves the detection accuracy of the mask wearing state while reducing detection cost.

Description

Method and device for detecting wearing state of mask, electronic device and storage medium
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a method, an apparatus, an electronic device, and a storage medium for detecting a wearing state of a mask.
Background
During epidemic prevention and control, pedestrians wearing masks is crucial to reducing the spread of the epidemic. Staff therefore inspect mask wearing at places with heavy pedestrian flow, such as residential communities, supermarkets, and stations; if a pedestrian wears a mask improperly, for example the mask covers only the mouth or only hangs on the chin, the staff will remind the pedestrian to re-wear the mask. However, this inspection method requires a lot of labor, and when pedestrian flow is heavy, inspections may be missed.
In the related art, image recognition is performed through a camera to judge the wearing state of a pedestrian's mask. Specifically, after a face is detected in a monitoring picture, texture features of a target area are extracted from the face region; if the texture features include mask texture features, the user is determined to be wearing a mask. The area of the region with mask texture features is then used as mask coverage information, and whether the mask is worn properly is judged from the ratio of this area to the face area. In this method, however, a mask texture database needs to be manually prepared in advance as a comparison basis, which requires a large amount of labor cost.
At present, no effective solution has been proposed for the problem in the related art that judging whether a pedestrian wears a mask properly based on mask texture features still incurs high labor cost.
Disclosure of Invention
The embodiments of the present application provide a method, an apparatus, an electronic device, and a storage medium for detecting the wearing state of a mask, aiming to solve the problem in the related art that judging whether pedestrians wear masks properly based on mask texture features still incurs high labor cost.
In a first aspect, an embodiment of the present application provides a method for detecting a wearing state of a mask, including:
acquiring a face image to be detected, and acquiring key points in the face image under the condition that a mask exists in the face image;
obtaining a region of interest in the face image according to the key points, and obtaining a detection mark of the mask, wherein the region of interest comprises at least one region among the facial features and/or the facial contour;
and judging the mask wearing state of the pedestrian corresponding to the face image according to the position relation between the region of interest and the detection mark of the mask.
In some embodiments, the region of interest includes a nose region, a mouth region, and a chin region, and the method for acquiring the region of interest includes:
acquiring eye region key points, nose region key points, mouth region key points and chin region key points from the key points of the face image;
the nose area is obtained according to the key points of the eye area and the nose area, the mouth area is obtained according to the key points of the nose area and the mouth area, and the chin area is obtained according to the key points of the mouth area and the chin area.
In some embodiments, determining the mask wearing state of the pedestrian corresponding to the face image according to the position relationship between the region of interest and the detection mark of the mask includes:
under the condition that the detection mark is located in the chin area, judging that the mask only covers the chin area, wherein the wearing state of the mask is not standard;
under the condition that the detection mark is located in the mouth area, judging that the mouth area and the chin area are covered by the mask, wherein the wearing state of the mask is not standard;
and under the condition that the detection mark is positioned in the nose area, judging that the mask covers the nose area, the mouth area and the chin area, wherein the wearing state of the mask is standard.
In some embodiments, after obtaining the mask detection mark, the method further comprises:
judging the mask wearing state of the pedestrian according to the position relation between the boundary line of the region of interest and the detection mark of the mask;
the method for acquiring the boundary line of the region of interest further comprises the following steps:
determining a first boundary according to the eye region key points, determining a second boundary according to the nose region key points, and determining a third boundary according to the mouth region key points, wherein the second boundary and the third boundary are both parallel to the first boundary.
In some embodiments, obtaining the mask detection mark comprises:
determining the central line of the face image according to the key points of the nose area;
and acquiring a pixel value of the central line, and determining a pixel mutation point as the detection mark according to the change condition of the pixel value.
In some of these embodiments, after determining the pixel discontinuities, the method comprises:
and taking a line that passes through the pixel mutation point and is parallel to the first boundary as a detection boundary, which is used as the detection mark of the mask.
In some embodiments, determining the mask wearing state of the pedestrian corresponding to the face image according to the position relationship between the region of interest and the detection mark of the mask includes:
when the detection boundary is lower than the third boundary, determining that the mask only covers the chin area, and the wearing state of the mask is not standard;
under the condition that the detection boundary is located between the third boundary and the second boundary, determining that the mask covers only the mouth region and the chin region, wherein the wearing state of the mask is not standard;
and under the condition that the detection boundary is higher than the second boundary, judging that the mask covers the nose area, the mouth area and the chin area, wherein the wearing state of the mask is standard.
In some of these embodiments, determining that the mask covers only the mouth region and the chin region comprises:
and under the condition that the detection boundary is positioned between the third boundary and the second boundary and the distance between the detection boundary and the third boundary is greater than a first preset distance, judging that the mask only covers the mouth region and the chin region, wherein the first preset distance is obtained according to the distance between the third boundary and the second boundary and a first preset proportion.
In some of these embodiments, determining that the mask covers the nose region, the mouth region, and the chin region comprises:
and under the condition that the detection boundary is higher than the second boundary and the distance between the detection boundary and the first boundary is smaller than a second preset distance, judging that the mask covers the nose area, the mouth area and the chin area, wherein the second preset distance is obtained according to the distance between the first boundary and the second boundary and a second preset proportion.
In some embodiments, after determining the mask wearing state of the pedestrian corresponding to the face image, the method further includes:
acquiring a detection state;
under the condition that the detection state is epidemic prevention, if the wearing state of the mask is not standard, acquiring the identity information of the pedestrian and prohibiting the pedestrian from passing;
and under the condition that the detection state is identification, if the wearing state of the mask is not standard, determining whether to acquire the identity information of the pedestrian according to a preset rule and whether to allow the pedestrian to pass.
In some embodiments, before obtaining the detection mark of the mask, the method further comprises:
and carrying out image enhancement on the face image through gray level transformation.
In a second aspect, an embodiment of the present application provides a device for detecting a wearing state of a mask, where the device includes a determining module, an obtaining module, and a determining module:
the determining module is used for acquiring a face image to be detected, and acquiring key points in the face image under the condition that a mask exists in the face image;
the acquisition module is used for acquiring a region of interest in the face image according to the key points and acquiring a detection mark of the mask, wherein the region of interest comprises at least one region among the facial features and/or the facial contour;
and the judging module is used for judging the mask wearing state of the pedestrian corresponding to the face image according to the position relation between the region of interest and the detection mark of the mask.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and the processor implements the method for detecting wearing status of a mask according to the first aspect when executing the computer program.
In a fourth aspect, embodiments of the present application provide a storage medium, on which a computer program is stored, the program, when executed by a processor, implementing the method for detecting wearing state of a mask according to the first aspect.
Compared with the related art, the method for detecting the wearing state of a mask provided by the embodiments of the present application comprises: acquiring a face image to be detected, and acquiring key points in the face image under the condition that a mask exists in the face image; obtaining a region of interest in the face image according to the key points, obtaining a detection mark of the mask, and judging the mask wearing state of the pedestrian corresponding to the face image according to the positional relationship between the region of interest and the detection mark of the mask. This solves the problem that judging whether a pedestrian wears a mask properly based on mask texture features still incurs high labor cost, and improves the detection accuracy of the mask wearing state while reducing detection cost.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic application environment diagram of a method for detecting a wearing state of a mask according to an embodiment of the present application;
fig. 2 is a flowchart of a method for detecting a wearing state of a mask according to an embodiment of the present application;
FIG. 3 is a flow chart of a method of acquiring a region of interest according to an embodiment of the present application;
FIG. 4 is a schematic diagram of 68-point face key points according to an embodiment of the present application;
FIG. 5 is a schematic boundary line view of a region of interest according to an embodiment of the present application;
fig. 6 is a flowchart of a mask detection flag acquisition method according to an embodiment of the present application;
fig. 7 is a schematic view of a mask detection mark according to an embodiment of the present application;
fig. 8 is a block diagram of a hardware configuration of a terminal in the method for detecting a wearing state of a mask according to the embodiment of the present application;
fig. 9 is a block diagram showing a configuration of a mask wearing state detection device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. The words "a," "an," "the," and similar words in this application do not denote a limitation of quantity and may refer to the singular or the plural. In this application, the terms "including," "comprising," "having," and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus. The words "connected," "coupled," and the like in this application are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "A plurality" herein means two or more. "And/or" describes an association relationship between associated objects and covers three cases; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. The terms "first," "second," "third," and the like herein merely distinguish similar objects and do not denote a particular ordering of the objects.
The method for detecting the wearing state of a mask provided by the present application can be applied to the application environment shown in Fig. 1, which is a schematic diagram of the application environment of the method according to an embodiment of the present application. The monitoring device 102 and the processor 104 communicate over a network. The monitoring device 102 acquires a face image of a pedestrian to be detected; under the condition that a mask exists in the face image, the processor 104 acquires key points in the face image, obtains a region of interest in the face image according to the key points, and obtains a detection mark of the mask. The processor 104 then judges the mask wearing state of the pedestrian according to the positional relationship between the region of interest and the detection mark of the mask. The monitoring device 102 may be a camera or a video camera, and the processor 104 may be implemented by a server or a chip.
The present embodiment provides a method for detecting a wearing state of a mask, and fig. 2 is a flowchart of a method for detecting a wearing state of a mask according to an embodiment of the present application, and as shown in fig. 2, the method includes the following steps:
step S210, a face image to be detected is obtained, and under the condition that a mask exists in the face image, key points in the face image are obtained.
In this embodiment, a face image is acquired by a monitoring device, such as a camera or a video camera, and a mask classification algorithm determines whether a mask exists in the face image. Specifically, a semantic segmentation algorithm can be used to identify the mask in the face image; for example, the deep-learning-based semantic segmentation algorithm U-Net can obtain relatively accurate segmentation results from few training images, while maintaining a relatively high image processing speed that does not greatly affect the real-time performance of face recognition. All pixels belonging to the mask in the face image can be obtained through the semantic segmentation algorithm, so the mask region can be accurately segmented.
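The patent includes no source code; the following Python sketch only illustrates the presence check described above, assuming a binary segmentation map has already been produced by a U-Net-style model. The function name and the pixel-ratio threshold are illustrative assumptions.

```python
import numpy as np

def mask_present(seg_map: np.ndarray, min_ratio: float = 0.02) -> bool:
    """Decide whether a mask is present from a binary segmentation map.

    seg_map: HxW array in which 1 marks pixels classified as "mask" by a
    semantic segmentation model (e.g. a U-Net). min_ratio is an assumed
    threshold on the fraction of mask pixels within the face crop.
    """
    ratio = float(seg_map.sum()) / seg_map.size
    return ratio >= min_ratio
```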
Under the condition that a mask exists in the face image, it is further necessary to judge whether the pedestrian wears the mask properly. Therefore, key points in the face image can be extracted through a face key point extraction algorithm; in this embodiment the algorithm may be 21-point, 68-point, or 81-point key point detection.
Step S220, obtaining an interested area in the face image according to the key points, and obtaining a detection mark of the mask, wherein the interested area comprises at least one area in facial features and/or facial contours.
In general, the key points mark the eyebrows, eyes, nose, mouth, and chin, so the face image can be segmented according to the positions of the key points to obtain the positions of the regions of interest. The regions of interest in this embodiment can be set according to actual requirements, for example as the forehead region, eye region, eyebrow region, nose region, ear region, mouth region, and/or chin region.
Further, the detection mark of the mask in this embodiment is a mark used to determine the position of the mask. After the mask in the face image is identified, the detection mark can be obtained by extracting the mask region, for example by using the center point of the mask region as the detection mark, or a partial contour line of the mask region as the detection mark.
And step S230, judging the mask wearing state of the pedestrian corresponding to the face image according to the position relation between the region of interest and the detection mark of the mask.
The positional relationship between the detection mark and the region of interest in this embodiment may be whether the detection mark is located within the region of interest. For example, when the region of interest is the nose region, if the detection mark is located in the nose region, the mask wearing state is judged to be standard. Further, in different actual scenes, a plurality of regions of interest can be set, and a plurality of mask wearing state grades can be defined according to the position of the detection mark in the different regions of interest.
Through steps S210 to S230, the region of interest and the detection mark of the mask are obtained from the face image, and the mask wearing state of the pedestrian is determined according to the relative positional relationship between the detection mark and the region of interest. A large amount of mask texture data does not need to be entered in advance, which solves the problem that judging whether a pedestrian wears a mask properly based on mask texture features still incurs high labor cost; while reducing detection cost, detection based on the positional relationship also improves the detection accuracy of the mask wearing state.
In some embodiments, before determining whether a mask exists in the face image, a face target may be obtained from the face image by a deep-learning-based face detection algorithm, for example one based on RetinaNet and TinyYolo. RetinaNet is a general-purpose target detection algorithm that alleviates the severe imbalance between positive and negative samples; face targets are obtained by detecting the left eye, the right eye, the nose tip, and the mouth corners in the image. In this embodiment, the detection head of RetinaNet is used to perform face detection. In order to apply the method of the present application to chip devices with small memory, the backbone network is replaced in this embodiment by the lightweight backbone of TinyYolo: the residual structure in the darknet network used by Yolo is removed, and the output features obtained at downsampling rates of 8, 16, and 32 are used as inputs of the RetinaNet detection head.
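As a rough illustration of the structure just described, the following PyTorch sketch builds a residual-free, darknet-like lightweight backbone whose stride-8/16/32 feature maps could feed a RetinaNet-style detection head. All channel counts and layer choices are assumptions; the patent does not disclose the exact architecture.

```python
import torch.nn as nn

def conv_bn(cin, cout, stride):
    # plain conv + BN + LeakyReLU block, with no residual connections
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride, 1, bias=False),
        nn.BatchNorm2d(cout),
        nn.LeakyReLU(0.1, inplace=True),
    )

class TinyBackbone(nn.Module):
    """Lightweight backbone returning stride-8/16/32 feature maps,
    suitable as inputs for a RetinaNet-style detection head."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(conv_bn(3, 16, 2), conv_bn(16, 32, 2))  # stride 4
        self.c3 = conv_bn(32, 64, 2)    # stride 8
        self.c4 = conv_bn(64, 128, 2)   # stride 16
        self.c5 = conv_bn(128, 256, 2)  # stride 32

    def forward(self, x):
        x = self.stem(x)
        p3 = self.c3(x)
        p4 = self.c4(p3)
        p5 = self.c5(p4)
        return p3, p4, p5
```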
In some of these embodiments, the region of interest includes a nose region, a mouth region and a chin region, fig. 3 is a flowchart of a method of acquiring the region of interest according to an embodiment of the present application, as shown in fig. 3, the method including the steps of:
step S310, obtaining eye region key points, nose region key points, mouth region key points and chin region key points from key points of the face image;
step S320, obtaining a nose region according to the key point of the eye region and the key point of the nose region, obtaining a mouth region according to the key point of the nose region and the key point of the mouth region, and obtaining a chin region according to the key point of the mouth region and the key point of the chin region.
A face key point extraction algorithm locates the key regions of a face in a given face image, including the eyebrows, eyes, nose, mouth, and face contour. In this embodiment, a 68-point face key point extraction algorithm is used. Fig. 4 is a schematic diagram of the 68-point face key points according to an embodiment of the present application; serial numbers 0 to 67 in the figure represent different key points. The algorithm learns the mapping from a face image to key points using the nonlinear mapping capability of a neural network; the obtained facial feature points follow a fixed numbering order, so a required facial region of interest can easily be obtained from the key point numbers. For example, numbers 30 to 35 always represent the position of the nose and can be used as nose region key points; numbers 36 to 45 represent the positions of the eyes and can be used as eye region key points; numbers 61 to 63 and 65 to 67 always represent the central positions of the lips and can be used as mouth region key points; and numbers 5 to 11 always represent the position of the chin and can be used as chin region key points. The method can therefore extract the regions of interest from the key points.
Further, in this embodiment, the region formed between the eye region key points and the nose region key points is taken as the nose region, the region formed between the nose region key points and the mouth region key points is taken as the mouth region, and the region formed between the mouth region key points and the chin region key points is taken as the chin region.
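The following sketch shows one way to derive these vertical region bands from a (68, 2) landmark array. Taking the mean height of each key-point group is an assumption; the text only says the regions are formed "between" the groups.

```python
import numpy as np

# Index groups taken from the 68-point description above.
NOSE_IDX  = list(range(30, 36))        # nose region key points (30-35)
EYE_IDX   = list(range(36, 46))        # eye region key points (36-45)
MOUTH_IDX = [61, 62, 63, 65, 66, 67]   # central lip key points
CHIN_IDX  = list(range(5, 12))         # chin key points (5-11)

def regions_from_landmarks(pts: np.ndarray) -> dict:
    """pts: (68, 2) array of (x, y) landmarks, y growing downward.

    Each region of interest is taken as the vertical band between the
    mean heights of two neighbouring key-point groups."""
    y = {name: pts[idx, 1].mean()
         for name, idx in [("eye", EYE_IDX), ("nose", NOSE_IDX),
                           ("mouth", MOUTH_IDX), ("chin", CHIN_IDX)]}
    return {
        "nose_region":  (y["eye"],   y["nose"]),
        "mouth_region": (y["nose"],  y["mouth"]),
        "chin_region":  (y["mouth"], y["chin"]),
    }
```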
Through steps S310 and S320, this embodiment extracts key points from the face image based on the 68-point face key point extraction algorithm, which can improve the accuracy of key point extraction.
In some embodiments, the method for judging the mask wearing state of the pedestrian specifically includes: when the detection mark is located in the chin region, judging that the mask covers only the chin region, and the wearing state of the mask is not standard; when the detection mark is located in the mouth region, judging that the mask covers the mouth region and the chin region, and the wearing state of the mask is not standard; when the detection mark is located in the nose region, judging that the mask covers the nose region, the mouth region, and the chin region, and the wearing state of the mask is standard. In this embodiment, the regions of interest are the chin region, the mouth region, and the nose region; according to the position of the detection mark in the different regions of interest, the wearing state is divided into cases such as covering only the chin, covering the mouth and chin, and covering the nose, mouth, and chin, as in the sketch below. In an actual scene, penalty measures can be set according to the detected wearing state to remind the pedestrian.
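A minimal sketch of this mark-in-region classification, reusing the region bands from the previous sketch (image coordinates with y growing downward; the returned labels are illustrative):

```python
def state_from_mark(mark_y: float, regions: dict) -> str:
    """Map the detection mark's height to a wearing state using the
    region bands produced by regions_from_landmarks()."""
    lo, hi = regions["nose_region"]
    if lo <= mark_y <= hi:
        return "covers nose, mouth and chin (standard)"
    lo, hi = regions["mouth_region"]
    if lo <= mark_y <= hi:
        return "covers mouth and chin (not standard)"
    lo, hi = regions["chin_region"]
    if lo <= mark_y <= hi:
        return "covers chin only (not standard)"
    return "indeterminate"
```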
In some embodiments, the mask wearing state of the pedestrian can be determined according to the positional relationship between the boundary lines of the regions of interest and the detection mark of the mask. The method for acquiring the boundary lines of the regions of interest includes: determining a first boundary according to the eye region key points, determining a second boundary according to the nose region key points, and determining a third boundary according to the mouth region key points, wherein the second boundary and the third boundary are both parallel to the first boundary. In general, there are four mask wearing cases: 1. the mask is not worn; 2. the mask covers the nose, mouth, and chin, and is considered properly worn; 3. the mask covers only the mouth; 4. the mask covers only the chin. The nose region, the mouth region, and the chin region are therefore the three important regions, so the regions of interest in this embodiment are set to the nose region, the mouth region, and the chin region, and the boundary lines of the regions of interest can be extracted from the key points.
Fig. 5 is a schematic diagram of the boundary lines of the regions of interest according to an embodiment of the present application. As shown in Fig. 5, the 68-point face key point extraction algorithm is still used in this embodiment. Specifically, two of the four eye region key points 36, 39, 42, and 45 are selected to determine the first boundary. Alternatively, the point closest to the forehead can be selected from the eye region key points, and a line through that point parallel to the edge of the monitored image is taken as the first boundary; or a reference line can be determined from symmetrical points on the two sides of the mask, and a line through one of the eye region key points parallel to that reference line is taken as the first boundary.
After the first boundary is obtained, a line through nose region key point 33 parallel to the first boundary is taken as the second boundary. Finally, the point closest to the chin is selected from the three mouth region key points 67, 66, and 65, and a line through that point parallel to the first boundary is taken as the third boundary. In this embodiment, the key points of each region of interest may also be adjusted for different actual scenes; for example, points 67, 66, and 65 may be replaced with points 58, 57, and 56.
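A small sketch of the boundary construction, under the assumption of an upright face so that each boundary reduces to a horizontal image row. For the first boundary the four eye-corner points are simply averaged here, whereas the embodiment selects two of them:

```python
import numpy as np

def boundaries_from_landmarks(pts: np.ndarray) -> tuple:
    """Return the y-coordinates of the three horizontal boundary lines.

    pts: (68, 2) landmark array in the 68-point numbering used above,
    with y growing downward (ordinary image coordinates)."""
    bound1 = pts[[36, 39, 42, 45], 1].mean()  # first boundary: eye corners (averaged here)
    bound2 = pts[33, 1]                        # second boundary: through nose key point 33
    bound3 = pts[[65, 66, 67], 1].max()        # third boundary: lip point closest to the chin
    return bound1, bound2, bound3
```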
In this embodiment, the detection mark of the mask is compared with the boundary lines of the regions of interest, so the positional relationship between the detection mark and the region of interest can be obtained more accurately, improving the detection accuracy of the mask wearing state.
In some embodiments, considering that the edge of the mask is not flat, edge points of the mask may cause misjudgment when comparing the detection mark with the region of interest, so the detection mark of the mask needs to be determined again. Fig. 6 is a flowchart of a method for acquiring the mask detection mark according to an embodiment of the present application; as shown in Fig. 6, the method includes the following steps:
and step S610, determining the center line of the face image according to the key points in the nose area.
In this embodiment, based on the 68-point face key point extraction algorithm, the line through key points 27, 28, 29, and 30 in the nose region is used as the center line of the face image.
Step S620, obtaining the pixel value of the central line, and determining the pixel mutation point as the detection mark according to the change situation of the pixel value.
In the face image, the pixel values of the mask differ obviously from those of the face, so a point on the center line where the pixel value changes abruptly can be used as the detection mark; the criterion for an abrupt change can be that the difference between adjacent pixel values is larger than a certain threshold.
Fig. 7 is a schematic diagram of the mask detection mark according to an embodiment of the present application. As shown in Fig. 7, the dotted line is the center line determined from the nose region key points, the solid line marks the portion covered by the mask, point O is the obtained detection mark, and points A and B are edge points of the mask.
Further, on the basis of obtaining the mask region by semantic segmentation, intersection points of the mask region contour line and the center line can be obtained, and if there are a plurality of intersection points, an intersection point on the side closer to the forehead among the intersection points is used as a detection mark.
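A minimal sketch of the center-line scan described above: it walks down one column of a grayscale image and returns the first abrupt jump between adjacent pixels, i.e. the candidate closest to the forehead. The threshold value is an assumed parameter:

```python
import numpy as np

def mutation_point(gray: np.ndarray, cx: int, threshold: int = 40):
    """Scan the center-line column cx of a grayscale face image from top
    to bottom and return the first row where the pixel value jumps by
    more than `threshold` between adjacent pixels; None if no jump."""
    col = gray[:, cx].astype(np.int32)
    jumps = np.abs(np.diff(col)) > threshold
    rows = np.flatnonzero(jumps)
    return int(rows[0]) + 1 if rows.size else None
```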
Through steps S610 and S620, the detection mark of the mask is determined from the pixel changes along the center line, so the position of the mask in the face image can be located more accurately, further improving the detection accuracy of the mask wearing state.
In some embodiments, after the pixel mutation point is determined, a line through the pixel mutation point parallel to the first boundary may be taken as a detection boundary, which serves as the detection mark of the mask. Because the detection boundary is more distinct and easier to identify in the face image than a single pixel mutation point, detecting the wearing state of the mask from the detection boundary can effectively improve detection speed and accuracy.
In some embodiments, after the boundaries of the regions of interest and the detection boundary are obtained, judging the mask wearing state of the pedestrian includes: when the detection boundary is lower than the third boundary, the detection boundary of the mask is considered to be below the mouth, so the mask is judged to cover only the chin region, and the wearing state is not standard; when the detection boundary is located between the third boundary and the second boundary, the detection boundary of the mask is considered to be between the nose and the mouth and to cover the mouth, so the mask is judged to cover only the mouth region and the chin region, and the wearing state is not standard; when the detection boundary is higher than the second boundary, the mask is considered to completely cover the nose, mouth, and chin, so the wearing state is judged to be standard. In this embodiment, by comparing the detection boundary with the first, second, and third boundaries, the position of the mask on the face can be clearly obtained from these lines in the face image, which can effectively improve detection speed and accuracy.
In some embodiments, it can also be determined that the mask covers only the mouth region and the chin region as follows: when the detection boundary is located between the third boundary and the second boundary and the distance between the detection boundary and the third boundary is greater than a first preset distance, the mask is judged to cover only the mouth region and the chin region. The first preset distance is obtained from the distance between the third boundary and the second boundary and a first preset proportion, which can be set flexibly according to the actual scene, for example to 2/3. Further, it can be determined that the mask covers the nose region, the mouth region, and the chin region as follows: when the detection boundary is higher than the second boundary and the distance between the detection boundary and the first boundary is smaller than a second preset distance, the mask is judged to cover the nose region, the mouth region, and the chin region, and the wearing state is standard. The second preset distance is obtained from the distance between the first boundary and the second boundary and a second preset proportion, which can be set flexibly according to the actual scene, for example to 1/3. Specifically, the determination may be made according to the following Equations 1 to 3:
Upbound < Bound3 (Equation 1)

Bound3 ≤ Upbound < Bound2 and Distance_Up3 > (2/3) × Distance_23 (Equation 2)

Upbound ≥ Bound2 and Distance_Up1 < (1/3) × Distance_12 (Equation 3)

In Equations 1 to 3, Upbound is the detection boundary of the mask; Bound1, Bound2, and Bound3 denote the first, second, and third boundaries; Distance_Up3 is the distance between the detection boundary and the third boundary; Distance_23 is the distance between the second boundary and the third boundary; Distance_Up1 is the distance between the detection boundary and the first boundary; and Distance_12 is the distance between the first boundary and the second boundary. If the boundaries satisfy the constraint of Equation 1, the mask is judged to cover only the chin region; if they satisfy Equation 2, the mask is judged to cover only the mouth region and the chin region; and if they satisfy Equation 3, the mask is judged to cover the nose region, the mouth region, and the chin region.
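A direct transcription of Equations 1 to 3 into Python, keeping their convention that coordinate values increase toward the forehead (the comparisons must be flipped for ordinary image coordinates, where y grows downward). The fallback label for boundaries satisfying none of the constraints is an assumption:

```python
def wearing_state(upbound, bound1, bound2, bound3,
                  p1: float = 2 / 3, p2: float = 1 / 3) -> str:
    """Classify the mask wearing state from the detection boundary.

    Uses the convention of Equations 1-3 (bound3 < bound2 < bound1);
    p1 and p2 are the example preset proportions 2/3 and 1/3."""
    if upbound < bound3:                                 # Equation 1
        return "covers chin only (not standard)"
    if bound3 <= upbound < bound2 and \
       (upbound - bound3) > p1 * (bound2 - bound3):      # Equation 2
        return "covers mouth and chin only (not standard)"
    if upbound >= bound2 and \
       (bound1 - upbound) < p2 * (bound1 - bound2):      # Equation 3
        return "covers nose, mouth and chin (standard)"
    return "indeterminate"
```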
In this embodiment, when judging the positional relationship between the detection boundary and the region of interest, distance checks against the first, second, and third boundaries are introduced, which can effectively avoid detection errors caused by key point recognition errors and improve detection accuracy.
In some embodiments, detection states of different levels can be set to flexibly adjust the measures taken for pedestrians in different mask wearing states. Specifically, after the mask wearing state of the pedestrian corresponding to the face image is judged, the detection state is acquired. The detection state can be set to a first, second, or third level according to the actual scene, representing detection levels of differing severity; in this embodiment, the detection states are divided into epidemic prevention and identification, with stricter detection in the epidemic prevention scene.
Under the condition that the detection state is epidemic prevention, if the wearing state of the mask is not standard, the identity information of the pedestrian is acquired and the pedestrian is prohibited from passing. Specifically, the face features in the face image to be detected are matched against face features stored in advance in a face library; when the matching succeeds, the mask wearing state of the pedestrian in the face image is associated with the identity information of the pedestrian and uploaded to the back end for statistics.
Under the condition that the detection state is identification, if the wearing state of the mask is not standard, whether to acquire the identity information of the pedestrian and whether to allow the pedestrian to pass are determined according to preset rules. The mask wearing requirement in the identification scene is lower, and whether to record the pedestrian's identity information and whether to allow passage can be set flexibly according to the actual scene. For example, it may be set that when the pedestrian's mask covers only the chin, the identity information of the pedestrian is acquired but the pedestrian is not prohibited from passing, as in the sketch below.
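The following sketch shows one possible gate policy; the concrete rules in identification mode are assumptions, since the patent leaves them to preset rules:

```python
def handle_pedestrian(state: str, standard: bool, identity_lookup):
    """Illustrative gate policy. Returns (identity_record, allowed).

    'epidemic' mode records identity and denies passage for any
    non-standard wearing; 'identification' mode records identity for
    non-standard wearing but still allows passage (an assumed rule)."""
    if state == "epidemic":
        if not standard:
            return identity_lookup(), False  # record identity, deny passage
        return None, True
    if state == "identification":
        if not standard:
            return identity_lookup(), True   # record identity, allow passage
        return None, True
    raise ValueError(f"unknown detection state: {state}")
```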
Through the method in this embodiment, the detection state of each security checkpoint can be set separately, improving the effect and efficiency of security inspection without the checkpoints affecting each other's passage efficiency.
In some embodiments, before the detection mark of the mask is acquired, a preprocessing operation is performed on the face image in order to further increase the difference between the mask-covered region features and the face features, for example image enhancement through gray-level transformation; this embodiment uses histogram equalization to enhance the image. Histogram equalization is a commonly used gray-level transformation method: it essentially stretches the image nonlinearly and redistributes the pixel values so that the numbers of pixels within each gray-level range are approximately equal, achieving image enhancement. In general, the image is mapped using the cumulative distribution function so that the processed pixels are uniformly distributed across the gray-scale range.
Because the cumulative distribution function is monotonically increasing and its values range from 0 to 1, the original ordering of the pixels is preserved no matter how they are mapped, and the range of the pixel mapping function stays between 0 and 255 without going out of bounds. As shown in Equation 4:
s_k = Σ_{j=0}^{k} (n_j / n) (Equation 4)

In Equation 4, s_k represents the cumulative probability at gray level k, n is the total number of pixels in the image, n_j is the number of pixels at gray level j, L is the total number of possible gray levels in the image, and L − 1 is the gray-scale range. After the cumulative probability is obtained for each pixel in the face image, it is multiplied by the gray-scale range to obtain the mapped gray value of each pixel.
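A compact implementation of Equation 4 for 8-bit images (L = 256); OpenCV's cv2.equalizeHist performs the equivalent mapping:

```python
import numpy as np

def equalize_hist(gray: np.ndarray) -> np.ndarray:
    """Histogram equalization via the cumulative distribution function,
    as in Equation 4, for a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)  # n_j for each gray level
    cdf = hist.cumsum() / gray.size                  # s_k = sum(n_j) / n
    lut = np.round(cdf * 255).astype(np.uint8)       # multiply by L - 1
    return lut[gray]
```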
It should be noted that, in the related art, whether the mask wearing state is standard is determined by the ratio of the mask region area to the face area. However, when the face pose angle is too large, for example when the face is at 90 degrees to the monitoring device, or at an even larger angle, the texture features of the key point regions and of the mask-covered area differ greatly, and the detection accuracy drops significantly. In this embodiment, the detection mark of the mask is compared with the region of interest; because the longitudinal proportions in a face image change little, the mask wearing state of the pedestrian can still be determined accurately even when the face pose angle is large. This effectively expands the application range of the detection method and facilitates mask wearing state detection for people with different facial features. Furthermore, the method in the present application does not need a mask texture template library prepared in advance for feature comparison; it obtains the mask contour through a semantic segmentation technique, has no complex preset conditions, does not produce recognition errors due to facial differences, and can adapt to more people and scenes.
It should be further noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order different than presented here.
The method embodiments provided in the present application may be executed on a terminal, a computer, or a similar computing device. Taking execution on a terminal as an example, Fig. 8 is a block diagram of the hardware structure of a terminal for the method for detecting the wearing state of a mask according to an embodiment of the present application. As shown in Fig. 8, the terminal 80 may include one or more processors 802 (only one is shown in Fig. 8; the processor 802 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 804 for storing data, and optionally a transmission device 806 for communication functions and an input/output device 808. It will be understood by those skilled in the art that the structure shown in Fig. 8 is only an illustration and does not limit the structure of the terminal. For example, the terminal 80 may include more or fewer components than shown in Fig. 8, or have a different configuration.
The memory 804 may be used to store a control program, for example, a software program and a module of application software, such as a control program corresponding to the method for detecting the wearing state of the mask in the embodiment of the present application, and the processor 802 executes various functional applications and data processing by running the control program stored in the memory 804, that is, implements the above-described method. The memory 804 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 804 can further include memory located remotely from the processor 802, which can be connected to the terminal 80 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 806 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the terminal 80. In one example, the transmission device 806 includes a Network adapter (NIC) that can be connected to other Network devices via a base station to communicate with the internet. In one example, the transmission device 806 can be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The present embodiment further provides a device for detecting a wearing state of a mask, where the device is used to implement the foregoing embodiments and preferred embodiments, and details are not repeated for what has been described. As used hereinafter, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware for a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 9 is a block diagram of a mask wearing state detection apparatus according to an embodiment of the present application, and as shown in fig. 9, the apparatus includes a determination module 91, an acquisition module 92, and a determination module 93:
the determining module 91 is configured to acquire a face image to be detected, and acquire a key point in the face image when a mask exists in the face image;
the obtaining module 92 is configured to obtain a region of interest in the face image according to the key points, and obtain a detection mark of the mask, where the region of interest includes at least one region among the facial features and/or the facial contour;
and the judging module 93 is configured to judge a mask wearing state of the pedestrian corresponding to the face image according to a position relationship between the region of interest and the detection mark of the mask.
In this embodiment, the acquisition module 92 obtains the region of interest and the detection mark of the mask from the face image, and the judging module 93 determines the mask wearing state of the pedestrian according to the relative positional relationship between the detection mark and the region of interest. A large amount of mask texture data does not need to be entered in advance, which solves the problem that judging whether a pedestrian wears a mask properly based on mask texture features still incurs high labor cost; while reducing detection cost, detection based on the positional relationship also improves the detection accuracy of the mask wearing state.
The above modules may be functional modules or program modules, and may be implemented by software or hardware. For a module implemented by hardware, the modules may be located in the same processor; or the modules can be respectively positioned in different processors in any combination.
The present embodiment also provides an electronic device comprising a memory having a computer program stored therein and a processor configured to execute the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
S1, acquiring a face image to be detected, and acquiring key points in the face image under the condition that a mask exists in the face image.
S2, obtaining a region of interest in the face image according to the key points, and obtaining a detection mark of the mask, wherein the region of interest comprises at least one region among the facial features and/or the facial contour.
S3, judging the mask wearing state of the pedestrian corresponding to the face image according to the positional relationship between the region of interest and the detection mark of the mask.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In addition, in combination with the method for detecting the wearing state of a mask in the above embodiments, an embodiment of the present application may be implemented as a storage medium. The storage medium stores a computer program; when executed by a processor, the computer program implements any one of the methods for detecting the wearing state of a mask in the above embodiments.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (14)

1. A method for detecting a wearing state of a mask, comprising:
acquiring a face image to be detected, and acquiring key points in the face image under the condition that a mask exists in the face image;
obtaining a region of interest in the face image according to the key points, and obtaining a detection mark of the mask, wherein the region of interest comprises at least one region among the facial features and/or the facial contour;
and judging the mask wearing state of the pedestrian corresponding to the face image according to the position relation between the region of interest and the detection mark of the mask.
2. The method for detecting the wearing state of the mask according to claim 1, wherein the region of interest includes a nose region, a mouth region, and a chin region, and the method for acquiring the region of interest includes:
acquiring eye region key points, nose region key points, mouth region key points and chin region key points from the key points of the face image;
the nose area is obtained according to the key points of the eye area and the nose area, the mouth area is obtained according to the key points of the nose area and the mouth area, and the chin area is obtained according to the key points of the mouth area and the chin area.
3. The method for detecting a mask wearing state according to claim 2, wherein the step of determining the mask wearing state of the pedestrian corresponding to the face image based on the positional relationship between the region of interest and the detection mark of the mask comprises:
under the condition that the detection mark is located in the chin area, judging that the mask only covers the chin area, wherein the wearing state of the mask is not standard;
under the condition that the detection mark is located in the mouth area, judging that the mouth area and the chin area are covered by the mask, wherein the wearing state of the mask is not standard;
and under the condition that the detection mark is positioned in the nose area, judging that the mask covers the nose area, the mouth area and the chin area, wherein the wearing state of the mask is standard.
4. The method for detecting the wearing state of the mask according to claim 2, wherein, after the detection mark of the mask is acquired, the method further comprises:
determining the mask wearing state of the pedestrian according to the positional relationship between the boundary lines of the region of interest and the detection mark of the mask;
wherein acquiring the boundary lines of the region of interest comprises:
determining a first boundary according to the eye region key points, determining a second boundary according to the nose region key points, and determining a third boundary according to the mouth region key points, wherein the second boundary and the third boundary are both parallel to the first boundary.
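Under the same assumptions, the three boundary lines of claim 4 reduce to scalar y-coordinates, since the second and third boundaries are parallel to a horizontal first boundary; this simplification is the sketch's, not the patent's:

def build_boundaries(eye_pts, nose_pts, mouth_pts):
    # One y value per boundary; all three lines are horizontal and thus parallel.
    first = eye_pts[:, 1].max()    # first boundary, from the eye key points
    second = nose_pts[:, 1].max()  # second boundary, from the nose key points
    third = mouth_pts[:, 1].max()  # third boundary, from the mouth key points
    return first, second, third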
5. The method for detecting the wearing state of the mask according to claim 4, wherein obtaining the detection mark of the mask comprises:
determining the center line of the face image according to the nose region key points;
and acquiring the pixel values along the center line, and determining a pixel discontinuity point as the detection mark according to the variation of those pixel values.
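A sketch of the center-line scan of claim 5, assuming a grayscale image and taking the strongest vertical intensity jump along the nose column as the pixel discontinuity point; min_jump is an illustrative threshold:

import numpy as np

def find_discontinuity(gray, nose_pts, min_jump=30):
    # Center line: the vertical column through the mean x of the nose key points.
    center_x = int(np.mean(nose_pts[:, 0]))
    column = gray[:, center_x].astype(np.int32)
    jumps = np.abs(np.diff(column))   # intensity change between adjacent rows
    row = int(np.argmax(jumps))
    # Only accept a jump strong enough to be a fabric/skin transition.
    return row if jumps[row] >= min_jump else None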
6. The method for detecting the wearing state of the mask according to claim 5, wherein, after the pixel discontinuity point is determined, the method comprises:
obtaining the line through the pixel discontinuity point parallel to the first boundary, thereby obtaining a detection boundary, which is used as the detection mark of the mask.
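Because the first boundary in the sketch above is horizontal, the detection boundary of claim 6 reduces to the horizontal line through the returned discontinuity row, i.e. detection_y = find_discontinuity(gray, nose_pts); a tilted first boundary would instead require constructing the parallel line through that point.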
7. The method for detecting the wearing state of the mask according to claim 6, wherein determining the mask wearing state of the pedestrian corresponding to the face image according to the positional relationship between the region of interest and the detection mark of the mask comprises:
when the detection boundary is lower than the third boundary, determining that the mask covers only the chin region, and that the mask wearing state is not standard;
when the detection boundary is located between the third boundary and the second boundary, determining that the mask covers only the mouth region and the chin region, and that the mask wearing state is not standard;
and when the detection boundary is higher than the second boundary, determining that the mask covers the nose region, the mouth region, and the chin region, and that the mask wearing state is standard.
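The three-way judgment of claim 7 then compares the detection boundary against the second and third boundaries; a sketch, remembering that in image coordinates "lower on the face" means a larger y:

def judge_by_boundary(detection_y, second_y, third_y):
    if detection_y > third_y:    # below the third boundary: chin only
        return "not standard: covers chin only"
    if detection_y > second_y:   # between the third and second boundaries
        return "not standard: covers mouth and chin"
    return "standard: covers nose, mouth and chin"  # above the second boundary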
8. The method for detecting the wearing state of the mask according to claim 7, wherein determining that the mask covers only the mouth region and the chin region comprises:
when the detection boundary is located between the third boundary and the second boundary, and the distance between the detection boundary and the third boundary is greater than a first preset distance, determining that the mask covers only the mouth region and the chin region, wherein the first preset distance is obtained from the distance between the third boundary and the second boundary and a first preset proportion.
9. The method for detecting the wearing state of the mask according to claim 7, wherein determining that the mask covers the nose region, the mouth region, and the chin region comprises:
when the detection boundary is higher than the second boundary, and the distance between the detection boundary and the first boundary is smaller than a second preset distance, determining that the mask covers the nose region, the mouth region, and the chin region, wherein the second preset distance is obtained from the distance between the first boundary and the second boundary and a second preset proportion.
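Claims 8 and 9 harden the two borderline cases of claim 7 with proportional margins; in this sketch the preset proportions 0.2 and 0.8 are illustrative values only, not taken from the patent:

def judge_with_margins(detection_y, first_y, second_y, third_y,
                       first_ratio=0.2, second_ratio=0.8):
    # Claim 8: first preset distance, a fraction of the mouth band's height.
    first_preset = first_ratio * (third_y - second_y)
    covers_mouth_chin = (second_y < detection_y < third_y
                         and (third_y - detection_y) > first_preset)
    # Claim 9: second preset distance, a fraction of the nose band's height.
    second_preset = second_ratio * (second_y - first_y)
    covers_all = (detection_y < second_y
                  and abs(detection_y - first_y) < second_preset)
    return covers_mouth_chin, covers_all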
10. The method for detecting the wearing state of the mask according to claim 1, wherein, after the mask wearing state of the pedestrian corresponding to the face image is determined, the method further comprises:
acquiring a detection state;
when the detection state is epidemic prevention, if the mask wearing state is not standard, acquiring the identity information of the pedestrian and prohibiting the pedestrian from passing;
and when the detection state is identification, if the mask wearing state is not standard, determining according to a preset rule whether to acquire the identity information of the pedestrian and whether to allow the pedestrian to pass.
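The post-judgment handling of claim 10 is a dispatch on the detection state; in this sketch, get_identity and preset_rule are injected stand-ins for the identity-acquisition and rule components that the claim leaves abstract:

def handle_non_standard(detection_state, pedestrian, get_identity, preset_rule):
    if detection_state == "epidemic_prevention":
        # Epidemic-prevention mode: always record identity and deny passage.
        return {"identity": get_identity(pedestrian), "allow": False}
    if detection_state == "identification":
        # Identification mode: the preset rule decides identity acquisition and passage.
        return preset_rule(pedestrian)
    raise ValueError(f"unknown detection state: {detection_state}")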
11. The method for detecting the wearing state of the mask according to claim 1, wherein, before the detection mark of the mask is acquired, the method further comprises:
performing image enhancement on the face image through gray-level transformation.
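Claim 11's gray-level transformation is rendered here as a simple gamma-style look-up-table remap with OpenCV; the gamma value is an illustrative choice, and the patent does not specify this particular transform:

import cv2
import numpy as np

def enhance(face_bgr, gamma=0.8):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    # Gray-level transformation: remap intensities to stretch the mask/skin contrast.
    table = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    return cv2.LUT(gray, table)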
12. A device for detecting a wearing state of a mask, characterized in that the device comprises a determining module, an acquiring module, and a judging module:
the determining module is configured to acquire a face image to be detected, and to acquire key points in the face image when a mask is present in the face image;
the acquiring module is configured to obtain a region of interest in the face image according to the key points, and to obtain a detection mark of the mask, wherein the region of interest comprises at least one region of the facial features and/or the facial contour;
and the judging module is configured to determine the mask wearing state of the pedestrian corresponding to the face image according to the positional relationship between the region of interest and the detection mark of the mask.
13. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the method for detecting the wearing state of the mask according to any one of claims 1 to 11.
14. A storage medium having a computer program stored therein, wherein the computer program, when run, is configured to perform the method for detecting the wearing state of the mask according to any one of claims 1 to 11.
CN202011209638.8A 2020-11-03 2020-11-03 Mask wearing state detection method, mask wearing state detection equipment, electronic device and storage medium Active CN112434562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011209638.8A CN112434562B (en) 2020-11-03 2020-11-03 Mask wearing state detection method, mask wearing state detection equipment, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011209638.8A CN112434562B (en) 2020-11-03 2020-11-03 Mask wearing state detection method, mask wearing state detection equipment, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN112434562A (en) 2021-03-02
CN112434562B CN112434562B (en) 2023-08-25

Family

ID=74695262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011209638.8A Active CN112434562B (en) 2020-11-03 2020-11-03 Mask wearing state detection method, mask wearing state detection equipment, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN112434562B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004310397A (en) * 2003-04-07 2004-11-04 Toyota Central Res & Dev Lab Inc Device for determining wearing of mask
WO2017054605A1 (en) * 2015-09-29 2017-04-06 腾讯科技(深圳)有限公司 Picture processing method and device
CN106709954A (en) * 2016-12-27 2017-05-24 上海唱风信息科技有限公司 Method for masking human face in projection region
WO2019019828A1 (en) * 2017-07-27 2019-01-31 腾讯科技(深圳)有限公司 Target object occlusion detection method and apparatus, electronic device and storage medium
CN109101923A (en) * 2018-08-14 2018-12-28 罗普特(厦门)科技集团有限公司 A kind of personnel wear the detection method and device of mask situation
CN111428559A (en) * 2020-02-19 2020-07-17 北京三快在线科技有限公司 Method and device for detecting wearing condition of mask, electronic equipment and storage medium
CN111523473A (en) * 2020-04-23 2020-08-11 北京百度网讯科技有限公司 Mask wearing identification method, device, equipment and readable storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818953A (en) * 2021-03-12 2021-05-18 苏州科达科技股份有限公司 Mask wearing state identification method, device, equipment and readable storage medium
CN113420675A (en) * 2021-06-25 2021-09-21 浙江大华技术股份有限公司 Method and device for detecting mask wearing standardization
CN113723214A (en) * 2021-08-06 2021-11-30 武汉光庭信息技术股份有限公司 Face key point marking method, system, electronic equipment and storage medium
CN113723214B (en) * 2021-08-06 2023-10-13 武汉光庭信息技术股份有限公司 Face key point labeling method, system, electronic equipment and storage medium
CN114255517A (en) * 2022-03-02 2022-03-29 中运科技股份有限公司 Scenic spot tourist behavior monitoring system and method based on artificial intelligence analysis
WO2024050760A1 (en) * 2022-09-08 2024-03-14 Intel Corporation Image processing with face mask detection

Also Published As

Publication number Publication date
CN112434562B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN112434562A (en) Method and device for detecting wearing state of mask, electronic device and storage medium
CN112434578B (en) Mask wearing normalization detection method, mask wearing normalization detection device, computer equipment and storage medium
CN109670441B (en) Method, system, terminal and computer readable storage medium for realizing wearing recognition of safety helmet
Cai et al. Detecting human faces in color images
CN103902977B (en) Face identification method and device based on Gabor binary patterns
TW201627917A (en) Method and device for face in-vivo detection
US20170061252A1 (en) Method and device for classifying an object of an image and corresponding computer program product and computer-readable medium
CN107844742B (en) Facial image glasses minimizing technology, device and storage medium
WO2022062379A1 (en) Image detection method and related apparatus, device, storage medium, and computer program
Lin et al. Robust license plate detection using image saliency
CN112507978B (en) Person attribute identification method, device, equipment and medium
CN113239739B (en) Wearing article identification method and device
CN112287823A (en) Facial mask identification method based on video monitoring
Dhar et al. An efficient real time moving object detection method for video surveillance system
CN105809183A (en) Video-based human head tracking method and device thereof
CN113743199A (en) Tool wearing detection method and device, computer equipment and storage medium
CN112347988A (en) Mask recognition model training method and device, computer equipment and readable storage medium
CN113947795B (en) Mask wearing detection method, device, equipment and storage medium
CN113159037B (en) Picture correction method, device, computer equipment and storage medium
He Mask wearing detection method based on the skin color and eyes detection
CN113673362A (en) Method and device for determining motion state of object, computer equipment and storage medium
Jang et al. Skin region segmentation using an image-adapted colour model
Casiraghi et al. A face detection system based on color and support vector machines
CN114038197B (en) Scene state determining method and device, storage medium and electronic device
Zou et al. Real-time elliptical head contour detection under arbitrary pose and wide distance range

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant