CN111626210B - Personnel dressing detection method, processing terminal and storage medium - Google Patents


Info

Publication number
CN111626210B
CN111626210B (application CN202010461950.XA)
Authority
CN
China
Prior art keywords
body part
dressing
person
human body
dressing detection
Prior art date
Legal status
Active
Application number
CN202010461950.XA
Other languages
Chinese (zh)
Other versions
CN111626210A (en)
Inventor
李瑞青
石志儒
吴旻烨
方能虎
王东鸣
周贤
刘艳飞
肖彦军
张梅玲
黄金
Current Assignee
ShanghaiTech University
Original Assignee
ShanghaiTech University
Priority date
Filing date
Publication date
Application filed by ShanghaiTech University
Priority to CN202010461950.XA
Publication of CN111626210A
Application granted
Publication of CN111626210B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/70 Multimodal biometrics, e.g. combining information from different biometric modalities
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Multimedia
  • Theoretical Computer Science
  • Human Computer Interaction
  • Image Analysis

Abstract

The personnel dressing detection method, processing terminal, and storage medium are applied to a scene with dressing requirements. Specifically, a captured image of a person about to enter the scene is acquired; the captured image is processed to obtain a human body part localization map of the person to be entered; and dressing detection is performed on the captured image area corresponding to at least one predetermined human body part in the localization map, so as to obtain a dressing detection result of the person to be entered. According to the application, the dressing of persons about to enter the scene is detected automatically, which makes detection efficient and accurate. The detection result can further be used to monitor personnel dressing, reminding related persons and raising their safety awareness, and persons who fail the detection, who may be strangers, can be screened out and their behavior restricted, thereby improving the safety level of the scene. In addition, the dressing detection result can be associated with the person concerned and, if necessary, sent to a manager or to that person.

Description

Personnel dressing detection method, processing terminal and storage medium
Technical Field
The present application relates to the technical field of personnel monitoring, and in particular, to a personnel dressing detection method, a processing terminal, and a storage medium.
Background
Many scenes require personnel to wear specific clothing, especially potentially dangerous scenes such as laboratories, where the proportion of personnel who are properly dressed (i.e., wearing laboratory coats) clearly reflects how seriously they take laboratory safety regulations. Beyond that, places such as hospital wards and radiology rooms require protective clothing, and related personnel may enter only after putting on the specified dress. Factories, hazardous work sites, and the like likewise require specific clothing before work may be performed. Injuries caused by improper dressing lead to many unnecessary disputes and management costs when subsequent compensation liabilities are determined.
However, checking the dressing of the relevant personnel by manual observation is time-consuming and labor-intensive, easily misses violations or is circumvented, and is therefore inefficient.
Therefore, how to achieve efficient and accurate dressing detection for specific scenes is a technical problem to be solved in the industry.
Disclosure of Invention
In view of the above drawbacks of the prior art, a primary object of the present application is to provide a personnel dressing detection method, a processing terminal, and a storage medium, so as to solve the problem of inefficient dressing detection in the prior art.
To achieve the above and other related objects, a first aspect of the present application provides a personnel dressing detection method, applied to a scene with dressing requirements, the method comprising the following steps: acquiring a captured image of a person about to enter the scene; processing the captured image to obtain a human body part localization map of the person to be entered; and performing dressing detection on the captured image area corresponding to at least one predetermined human body part in the human body part localization map, so as to obtain a dressing detection result of the person to be entered.
In an embodiment of the first aspect of the present application, before the step of processing the captured image to obtain the human body part localization map of the person to be entered, the method further includes: identifying the identity of the person to be entered according to biological features in the captured image; and executing the subsequent steps only if the identity authentication of the person to be entered passes.
In an embodiment of the first aspect of the present application, the human body part localization map is a human body skeleton map obtained by processing the captured image by a human body posture estimation model; alternatively, the human body part localization map is a human body pixel map segmented from the captured image.
In an embodiment of the first aspect of the present application, performing dressing detection on the captured image area corresponding to at least one predetermined human body part in the human body part localization map to obtain a dressing detection result of the person to be entered includes: matching the area features of the captured image area corresponding to the at least one predetermined human body part against the standard clothing features of the scene dressing, and forming the dressing detection result from the at least one comparison result so obtained; wherein the standard clothing features include: any one or more of color, pattern, shape, and gray scale; the area features correspondingly include: any one or more of the pixel mean of the captured image area, the pattern features it contains, the shape features it contains, and its gray-scale mean.
In an embodiment of the first aspect of the present application, there are a plurality of predetermined human body parts, each assigned a weight; the dressing detection is performed in order of weight from high to low; and if the accumulated comparison results of the higher-weight predetermined human body parts already suffice for the dressing detection result to pass, no dressing detection is performed on the captured image areas corresponding to the lower-weight predetermined human body parts.
In an embodiment of the first aspect of the present application, each of the predetermined human body parts includes: any one or more of the head, eyes, face, shoulders, abdomen, chest, legs, feet, upper body, and lower body.
In an embodiment of the first aspect of the present application, the dressing requirement corresponds to a scene dressing covering the upper body to the lower body of the person; the plurality of predetermined human body parts include two symmetrical upper body parts each having the same first weight, and two symmetrical lower body parts each having the same second weight; the first and second upper body parts are respectively one and the other of the two shoulders; the first and second lower body parts are respectively one and the other of the abdomen-to-left-leg part and the abdomen-to-right-leg part; and the second weight is greater than the first weight. Performing dressing detection on the captured image areas corresponding to the plurality of predetermined human body parts in the human body part localization map then includes: obtaining the area features of the captured image areas corresponding to the first upper body part, the second upper body part, the first lower body part, and the second lower body part; matching the area features of the captured image areas corresponding to the first and second lower body parts against the standard clothing features of the scene dressing; if both lower-body comparison results match, obtaining a passing dressing detection result; if neither matches, obtaining a failing result; if exactly one matches, further matching the area features of the captured image areas corresponding to the first and second upper body parts against the standard clothing features; if both upper-body comparison results match, obtaining a passing result; if neither matches, obtaining a failing result; and if exactly one matches, obtaining a to-be-determined result.
In an embodiment of the first aspect of the present application, the scene with dressing requirements includes any one of a laboratory, a dust-free/sterile workroom, a medical institution, a radiology room, an indoor space where an infectious patient is located, an enterprise/public institution or a department thereof requiring uniforms, and an educational institution requiring student uniforms; and/or, the personnel dressing detection method further comprises: associating the dressing detection result with the information of the corresponding target person, so as to provide it to a manager or to the target person.
To achieve the above and other related objects, a second aspect of the present application provides a processing terminal, comprising: a communication unit for communicating with an image acquisition device to receive a captured image, taken by the image acquisition device, of a person about to enter a scene; a storage unit for storing at least one computer program; and a processing unit, coupled to the communication unit and the storage unit, for running the at least one computer program to perform the personnel dressing detection method of any embodiment of the first aspect.
To achieve the above and other related objects, a third aspect of the present application provides a computer-readable storage medium storing at least one computer program which, when executed, performs the personnel dressing detection method of any embodiment of the first aspect.
As described above, the personnel dressing detection method, processing terminal, and storage medium of the present application are applied to a scene with dressing requirements. Specifically, a captured image of a person about to enter the scene is acquired; the captured image is processed to obtain a human body part localization map of the person to be entered; and dressing detection is performed on the captured image area corresponding to at least one predetermined human body part in the localization map, so as to obtain a dressing detection result of the person to be entered. With this technical solution, the dressing of persons about to enter the scene is detected automatically, making detection efficient and accurate; the detection result can further be used to monitor personnel dressing, reminding related persons and raising their safety awareness, and persons who fail the detection, who may be strangers, can be screened out and their behavior restricted, thereby effectively improving the safety level of the scene.
Drawings
Fig. 1 is a schematic diagram of a communication system architecture for implementing personnel dressing detection in an embodiment of the present application.
Fig. 2 is a flowchart of a personnel dressing detection method according to an embodiment of the application.
Fig. 3 is a schematic diagram of a human body part localization map according to an embodiment of the application.
Fig. 4 is a schematic flow chart of dressing detection according to a plurality of predetermined human body parts in an embodiment of the application.
Fig. 5 is a schematic structural diagram of a processing terminal according to an embodiment of the present application.
Fig. 6 is a schematic diagram of the functional modules of a personnel dressing detection system according to an embodiment of the present application.
Detailed Description
Further advantages and effects of the present application will become readily apparent to those skilled in the art from the disclosure herein, which is illustrated by the following specific examples.
In the following description, reference is made to the accompanying drawings which describe several embodiments of the application. It is to be understood that other embodiments may be utilized and that structural, electrical, and operational changes may be made without departing from the spirit and scope of the present disclosure. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Although the terms first, second, etc. may be used herein to describe various elements, information, or parameters in some examples, these elements or parameters should not be limited by these terms. These terms are only used to distinguish one element or parameter from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the various described embodiments. The first element and the second element are both elements, but they are not the same element, unless the context clearly indicates otherwise. Depending on the context, the word "if" as used herein may be interpreted, for example, as "upon" or "when".
Furthermore, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises", "comprising", "includes", and/or "including" specify the presence of stated features, steps, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, steps, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions, steps, or operations is in some way inherently mutually exclusive.
Those of ordinary skill in the art will appreciate that the modules and method steps of the examples described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In view of the need for efficient and accurate personnel dressing detection, embodiments of the present application perform dressing detection on acquired images of the relevant personnel through image processing techniques, thereby solving the problems in the prior art.
As shown in fig. 1, a schematic diagram of a communication system architecture for implementing personnel dressing detection in one embodiment of the present application is shown.
In this communication system, since images of the relevant personnel must be obtained, an image acquisition device 101, such as one or more cameras, may be provided. The image acquisition device 101 may be disposed in the scene with dressing requirements, for example at its entrance, to acquire captured images of persons about to enter the scene.
The scenes with dressing requirements include: any one of a laboratory, a dust-free/sterile workroom, a medical institution, a radiology room, an indoor space where an infectious patient is located, an enterprise/public institution or a department thereof requiring uniforms, and an educational institution requiring student uniforms.
The image acquisition device 101 is communicatively connected to the processing terminal 102 so as to transmit the captured images to the processing terminal 102. In some examples, the image acquisition device 101 and the processing terminal 102 may be connected by wire; for example, the image acquisition device 101 is provided with a USB interface or a video output interface, and is connected to the processing terminal 102 directly or indirectly (e.g. through a switching device) over a type-matched transmission medium (e.g. a cable). Alternatively, in some examples, the image acquisition device 101 and the processing terminal 102 may be connected wirelessly; for example, each has a wireless communication circuit (e.g. a Bluetooth, WiFi, or 2G/3G/4G/5G communication module) following the same communication protocol, so that a wireless communication connection can be established.
The processing terminal 102 processes the received captured image using image processing techniques, thereby obtaining a processing result that includes the dressing detection result of the person to be entered.
Illustratively, the processing terminal 102 may be implemented by any one of a server/server bank, a desktop computer, a notebook computer, a smart phone, a tablet, etc., or by a distributed system of multiple communicatively connected devices that cooperate. In addition, if the processing terminal 102 is implemented by a server/server group, it may have a centralized or a distributed architecture, for example a public cloud (Public Cloud) service end or a private cloud (Private Cloud) service end, where the public cloud service end is, for example, the Alibaba Cloud, Amazon, Baidu, or Tencent cloud computing platform, and the private cloud service end is, for example, a self-hosted server.
Fig. 2 is a schematic flowchart of a personnel dressing detection method according to an embodiment of the application. The method may be performed, for example, by the processing terminal in the embodiment of fig. 1.
The personnel dressing detection method comprises the following steps:
Step S201: acquiring a captured image of a person about to enter the scene.
The processing terminal may, for example, obtain the captured image of the person to be entered via its communication connection with an image acquisition device.
The image acquisition device may shoot continuously, periodically, or upon a trigger condition, and the captured image may take the form of a video stream or of still pictures. For example, the image acquisition device may capture continuously and transmit in real time to the processing terminal, which detects the captured images containing the person to be entered through a trained object detection model (e.g. R-CNN or its variants, SSD, YOLO, etc.). Alternatively, the image acquisition device may shoot once every N minutes or N seconds and transmit each shot to the processing terminal for target detection. Still alternatively, the person to be entered may stand at a specific position (e.g. the entrance of the scene) and be sensed by a sensor (e.g. any one or combination of photoelectric, ultrasonic, magnetic, pressure, travel-switch, and other sensors), or may trigger a signal generator (e.g. an access control input device receiving a key press, fingerprint, face scan, etc.) to generate a notification signal; the notification signal is received by a controller controlling the image acquisition device, which then acquires a captured image of the person to be entered and transmits it to the processing terminal.
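To make the capture-and-detect path above concrete, the following is a minimal Python sketch, assuming OpenCV's built-in HOG pedestrian detector as a stand-in for whichever trained object detection model (an R-CNN variant, SSD, YOLO, etc.) a deployment actually uses; the camera index, detection stride, and output file name are likewise assumptions:

```python
import cv2

# Stand-in person detector: OpenCV's HOG descriptor with its default
# people detector (a pretrained pedestrian SVM).
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # camera at the scene entrance (assumed index 0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rects, _weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(rects) > 0:
        # A person is in view: this frame would be forwarded to the
        # processing terminal for identity and dressing detection.
        cv2.imwrite("person_to_enter.jpg", frame)
        break
cap.release()
```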
Optionally, the shooting angle of the image acquisition device may be set such that a photographed person is, with high probability, a person about to enter, for example by using a downward-facing view angle at the entrance.
Optionally (indicated by the dashed outline in fig. 2), the following step may also be performed before the dressing detection:
step S202: identifying the identity of the person to be entered according to the biological features in the captured image.
The biological features can be one or more kinds of feature information of the person to be entered in the captured image, such as the face, fingerprints, palm prints, partial or whole body contours, and posture.
For example, face information of the persons in the video stream is identified and extracted by face recognition technology and compared with the facial feature information of scene personnel stored in a database; the comparison similarity is then checked against a threshold to identify the person. If the similarity is below a certain threshold, for example 80%, the identity authentication fails, and supervisory personnel may further be warned and prompted to intervene.
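As a minimal sketch of that similarity check, assuming faces have already been encoded into fixed-length embeddings by some face recognition model (the model itself is outside the sketch) and that cosine similarity stands in for whatever comparison measure is actually deployed:

```python
import numpy as np

def authenticate(probe, enrolled, threshold=0.80):
    """probe: embedding of the face in the captured image;
    enrolled: dict mapping person id -> stored embedding of scene personnel.
    Returns (person_id, similarity) on success, or (None, similarity) when
    the best similarity falls below the threshold (the 80% of the example
    above), in which case supervisors would be warned."""
    best_id, best_sim = None, -1.0
    for pid, ref in enrolled.items():
        sim = float(np.dot(probe, ref) /
                    (np.linalg.norm(probe) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return (best_id, best_sim) if best_sim >= threshold else (None, best_sim)
```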
The subsequent step S203 is executed if the identity authentication of the person to be entered passes; if the identity authentication does not pass, no subsequent step is executed.
Step S203: processing the captured image to obtain a human body part localization map of the person to be entered.
In some embodiments, the human body part localization map is a human body skeleton map obtained by processing the captured image by a human body posture estimation model; alternatively, the human body part localization map is a human body pixel map segmented from the captured image.
The human body skeleton map may be obtained by a skeleton detection algorithm, for example using the OpenPose open source library. Through this library, the skeleton information of every person in the captured image can be obtained and accessed in matrix form; the specific return value is the coordinates of 18 human body feature points, shown for example as points 1 to 18 in fig. 3. Of course, many implementations of skeleton detection exist in the prior art, from the single-person to the multi-person pose estimation algorithms developed since around 2014, and they are not described here.
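A sketch of obtaining that matrix through the OpenPose Python bindings follows. The parameter names and the emplaceAndPop call mirror OpenPose's official Python examples, but they vary between OpenPose versions, so treat the exact API, the model folder path, and the input file name as assumptions:

```python
import cv2
import pyopenpose as op  # OpenPose Python bindings

params = {"model_folder": "models/", "model_pose": "COCO"}  # COCO: 18 points
wrapper = op.WrapperPython()
wrapper.configure(params)
wrapper.start()

datum = op.Datum()
datum.cvInputData = cv2.imread("person_to_enter.jpg")  # hypothetical frame
wrapper.emplaceAndPop(op.VectorDatum([datum]))

# poseKeypoints is a (num_people, 18, 3) array of (x, y, confidence) rows,
# one row per human body feature point of fig. 3, for each detected person.
skeletons = datum.poseKeypoints
```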
Step S204: performing dressing detection on the captured image area corresponding to at least one predetermined human body part in the human body part localization map, so as to obtain a dressing detection result of the person to be entered.
In some embodiments, a specific implementation of step S204 may include: matching the area features of the captured image area corresponding to the at least one predetermined human body part against the standard clothing features of the scene dressing, and forming the dressing detection result from the at least one comparison result so obtained.
Wherein the standard clothing features include: any one or more of color, pattern, shape, and gray scale; the area features correspondingly include: any one or more of the pixel mean of the captured image area, the pattern features it contains, the shape features it contains, and its gray-scale mean.
For example, if the predetermined human body part is the left shoulder, the RGB pixel values of the area corresponding to the left shoulder in the captured image are averaged, and the resulting mean is compared with the color of the scene clothing (for example, the specific clothing of laboratories and hospitals is white) to determine whether the characteristics of the laboratory coat are met.
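A minimal sketch of this color comparison, assuming the area around the body part has already been cropped from the captured image in RGB order, and assuming a per-channel tolerance (the value 40 is illustrative, not from the source):

```python
import numpy as np

def region_matches_color(region, standard_rgb=(255, 255, 255), tol=40):
    """region: H x W x 3 uint8 crop of the captured image around a
    predetermined human body part. The pixels are averaged and the mean is
    compared per channel against the standard clothing color (white for
    laboratory coats and medical gowns)."""
    mean = np.asarray(region, dtype=np.float64).reshape(-1, 3).mean(axis=0)
    return bool(np.all(np.abs(mean - np.asarray(standard_rgb)) <= tol))
```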
To obtain the dressing detection result more reliably, a plurality of predetermined human body parts may be provided, each assigned a weight, with the dressing detection performed in order of weight from high to low; if the accumulated comparison results of the higher-weight predetermined human body parts already suffice for the dressing detection result to pass, no dressing detection is performed on the captured image areas corresponding to the lower-weight predetermined human body parts.
For example, each of the predetermined human body parts includes: any one or more of the head, eyes, face, shoulders, abdomen, chest, legs, feet, upper body, and lower body.
Taking the human body part localization map in fig. 3 as an example, laboratory coats and medical gowns stand out from most everyday clothing in that they cover the lower body, generally down to above the knee, and are white; therefore, in some embodiments, the RGB values of the lower body can serve as the primary basis of judgment, with the information of the shoulders and/or upper body as an auxiliary basis.
Accordingly, the predetermined parts subject to dressing detection can be set as: two symmetrical upper body parts each having the same first weight, and two symmetrical lower body parts each having the same second weight, the second weight being higher than the first.
The first and second upper body parts are one and the other of the two shoulders, i.e. the parts at numbers 2 and 5 in fig. 3 (indicated by dotted circles), assumed to correspond to captured image areas A and B respectively; the first and second lower body parts are one and the other of the abdomen-to-left-leg part (e.g. the part spanning numbers 8 and 9 in fig. 3, indicated by dotted circles), assumed to correspond to captured image area C, and the abdomen-to-right-leg part (the part spanning numbers 11 and 12 in fig. 3, indicated by dotted circles), assumed to correspond to captured image area D.
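As a hedged sketch of how the four areas might be cut out of the captured image given the fig. 3 keypoints (the crop sizes and the midpoint approximation are illustrative assumptions, not values from the source):

```python
def crop_box(image, center, half_w=25, half_h=25):
    """Crop a rectangle around a keypoint, clamped to the image bounds;
    image is an H x W x 3 array."""
    x, y = int(center[0]), int(center[1])
    h, w = image.shape[:2]
    return image[max(0, y - half_h):min(h, y + half_h),
                 max(0, x - half_w):min(w, x + half_w)]

def body_part_regions(image, kp):
    """kp: dict mapping fig. 3 point numbers (1-18) to (x, y) coordinates.
    Areas A/B sit at the shoulders (points 2 and 5); areas C/D span the
    abdomen-to-leg parts, approximated here by the midpoints of points 8-9
    and 11-12 with a taller crop for the lower body."""
    mid = lambda p, q: ((kp[p][0] + kp[q][0]) / 2, (kp[p][1] + kp[q][1]) / 2)
    area_a = crop_box(image, kp[2])
    area_b = crop_box(image, kp[5])
    area_c = crop_box(image, mid(8, 9), half_h=60)
    area_d = crop_box(image, mid(11, 12), half_h=60)
    return area_a, area_b, area_c, area_d
```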
Illustratively, as shown in fig. 4, one possible implementation of the dressing detection of step S204 on the captured image areas corresponding to a plurality of predetermined human body parts in the human body part localization map includes the following steps:
Step S401: obtaining the area features of the captured image areas corresponding to the first upper body part, the second upper body part, the first lower body part, and the second lower body part.
Step S402: matching the area features of the captured image areas corresponding to the first lower body part and the second lower body part against the standard clothing features of the scene dressing.
Taking laboratory coats and medical gowns as examples, in this step the RGB pixel means of captured image areas C and D may each be compared with the RGB pixel value of white (which may be 255 per channel); if they are equal, or the difference is within a preset deviation, the comparison is considered a match; otherwise it is considered a mismatch.
Step S403: if the comparison results corresponding to the first lower body part and the second lower body part both match, the dressing detection result is obtained as pass.
A passing dressing detection result indicates that the person is properly dressed for the scene, for example entering a laboratory wearing a laboratory coat.
Step S404: if neither of the comparison results corresponding to the first lower body part and the second lower body part matches, the dressing detection result is obtained as fail.
Taking laboratory coats and medical gowns as examples, in this step neither the RGB pixel mean of captured image area C nor that of area D matches white, so the person is not properly dressed and the dressing detection result is fail.
A failing dressing detection result indicates that the person is not properly dressed for the scene, e.g. entering a laboratory without a laboratory coat. Two cases are possible here: the person may be a laboratory member who is simply not properly dressed; or the person may not be a laboratory member at all, may have an identity problem, and may even be a lawbreaker intent on damaging the laboratory. Of course, if step S202 was performed beforehand, persons with inconsistent identities may already have been screened out; if step S202 was not performed, or erroneously identified such a person as legitimate, the dressing detection also serves as a further auxiliary check on personal identity.
Step S405: if exactly one of the comparison results corresponding to the first lower body part and the second lower body part matches, matching the area features of the captured image areas corresponding to the first upper body part and the second upper body part against the standard clothing features of the scene dressing.
Specifically, if the dressing detection of the higher-weight predetermined human body parts (here, the lower body parts) does not suffice to produce the dressing detection result, the lower-weight predetermined human body parts (here, the upper body parts) continue the dressing detection to assist the judgment.
Taking laboratory coats and medical gowns as examples, in this step the RGB pixel mean of one of captured image areas C and D matches white and the other does not, so dressing detection of captured image areas A and B is required.
Following step S405, if the comparison results corresponding to the first upper body part and the second upper body part both match, the dressing detection result is obtained as pass.
Taking laboratory coats and medical gowns as examples: the RGB pixel mean of one of captured image areas C and D matches white while the other does not, but the RGB pixel means of both areas A and B match white, so the dressing detection result is pass.
Following step S405, if neither of the comparison results corresponding to the first upper body part and the second upper body part matches, the dressing detection result is obtained as fail.
Taking laboratory coats and medical gowns as examples: the RGB pixel mean of one of captured image areas C and D matches white while the other does not, and neither the RGB pixel mean of area A nor that of area B matches white, so the dressing detection result is fail.
Following step S405, if exactly one of the comparison results corresponding to the first upper body part and the second upper body part matches, step S406 is performed: obtaining the dressing detection result as to-be-determined.
Taking laboratory coats and medical gowns as examples: the RGB pixel mean of one of captured image areas C and D matches white while the other does not, and likewise only one of the RGB pixel means of areas A and B matches white, so the dressing detection result is to be determined.
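Pulling the branches of steps S401 to S406 together, a non-authoritative sketch of the weight-ordered, two-tier decision, reusing the region_matches_color helper sketched earlier (A/B are the shoulder areas, C/D the abdomen-to-leg areas):

```python
def dressing_detection(area_a, area_b, area_c, area_d):
    """Decision flow of fig. 4: the higher-weight lower-body areas C and D
    are checked first; the lower-weight upper-body areas A and B are only
    consulted when C and D disagree. Returns 'pass', 'fail', or
    'undetermined'."""
    lower = [region_matches_color(r) for r in (area_c, area_d)]
    if all(lower):
        return "pass"      # step S403: both lower-body areas match
    if not any(lower):
        return "fail"      # step S404: neither lower-body area matches
    # Exactly one lower-body match: fall back to the upper body (step S405).
    upper = [region_matches_color(r) for r in (area_a, area_b)]
    if all(upper):
        return "pass"
    if not any(upper):
        return "fail"
    return "undetermined"  # step S406: repeat, widen, or escalate to a human
```

Ordering the checks by weight means the cheaper, more discriminative lower-body test usually settles the result without ever touching the remaining areas.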
Further optionally, the dressing detection can be repeated, combined with dressing detection of other body parts to assist the judgment, or a supervisor can be prompted to check the detected person's dressing manually.
For example, if the identity authentication of step S202 fails, or the dressing detection does not pass, an alarm may be raised to prompt supervisory personnel to control or restrict the persons concerned; and optionally, the feedback information obtained in all the steps, including information on scene (e.g. laboratory) entrants and their dressing, is stored in a background database so that supervisory personnel can retrieve it for scene safety monitoring.
It will be appreciated that the above embodiments analyze the example of laboratory and medical settings with laboratory coats and medical gowns; however, those skilled in the art can fully extend them to other scenes, such as schools or enterprises and public institutions, combining the corresponding specific dressing (e.g. school uniforms, police uniforms) into dressing detection rules. Features of the clothing such as color, pattern, and size can be used to form the detection rules and serve as the basis for selecting the corresponding one or more predetermined human body parts; the embodiment of fig. 4 is therefore not limiting.
The scene dressing is not limited to upper or lower garments; it may also be, for example, a helmet, mask, glasses, or headband worn on the head, or shoes, shoe covers, or socks worn on the feet.
In some embodiments, the personnel dressing detection method further comprises: associating the dressing detection result with the information of the corresponding target person, so as to provide it to a manager or to the target person. For example, the result may be transmitted externally, e.g. to a user terminal (such as a mobile phone, computer, or tablet) of the manager or target person, where a software program can display it graphically; alternatively, it may be transmitted to a display screen or display panel in a public area for display, thereby prompting the manager or target person.
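A small sketch of that association step; the log file format and field names are hypothetical, standing in for whatever background database and notification channel a deployment provides:

```python
import json
import time

def record_dressing_result(person_id, result, log_path="dressing_log.jsonl"):
    """Associate a dressing detection result with the target person and
    append it to a background log, from which managers (or the person)
    can be notified and records retrieved for scene safety monitoring."""
    entry = {"person": person_id, "result": result, "timestamp": time.time()}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
```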
As shown in fig. 5, a schematic structural diagram of a processing terminal according to an embodiment of the application is shown.
The processing terminal 500 in this embodiment includes:
The communication unit 501 is configured to communicate with an image acquisition device to receive a captured image, taken by the image acquisition device, of a person about to enter a scene. Illustratively, the communication unit 501 includes wired or wireless communication circuits. The wired communication circuit includes a USB module, a wired network card, a video input circuit, etc., so as to be connected to the image acquisition device directly or indirectly (e.g. through a switching device) over a type-matched transmission medium (e.g. a cable); the wireless communication circuit includes, for example, a Bluetooth, WiFi, or 2G/3G/4G/5G communication module, and establishes a wireless communication connection with a wireless communication circuit in the image acquisition device that follows the same communication protocol.
The storage unit 502 is configured to store at least one computer program. The storage unit 502 may include, for example, high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. In some embodiments, the storage may also include memory located remotely from the one or more processors, such as network-attached storage accessed via RF circuitry or external ports and a communication network, which may be the Internet, one or more intranets, local area networks, wide area networks, storage area networks, etc., or a suitable combination thereof. A memory controller may control access to the memory by other components of the device, such as the CPU and peripheral interfaces.
The processing unit 503 is coupled to the communication unit 501 and the storage unit 502 and is configured to run the at least one computer program to perform, for example, the personnel dressing detection method of fig. 2 and/or fig. 4. Illustratively, the processing unit 503 may include one or more general-purpose microprocessors, one or more special-purpose processors, one or more field-programmable gate arrays, or any combination thereof.
As shown in fig. 6, a schematic diagram of the functional modules of a personnel dressing detection system according to an embodiment of the present application is shown. Illustratively, the personnel dressing detection system 600 may be implemented in the processing terminal of the foregoing embodiments (e.g. figs. 1 and 5); the various functional modules in the personnel dressing detection system 600 (e.g. the acquiring module 601, the localization map generating module 602, the dressing detection module 603, etc.) may be implemented by software, by hardware, or by a combination of the two, for example by the processing unit of the embodiment of fig. 5 running a computer program.
It should be noted that, since the technical details of the specific implementations involved in the system embodiment of fig. 6 have been described in the foregoing method embodiments (for example, the embodiments corresponding to figs. 2 and 4), they are not repeated here.
The personnel dressing detection system 600 is applied to a scenario where dressing requirements exist.
The personnel dressing detection system 600 includes:
the acquiring module 601 is configured to acquire a captured image of a person to enter the scene.
The localization map generating module 602 is configured to process the captured image to obtain a human body part localization map of the person to be entered.
The dressing detection module 603 is configured to perform dressing detection on the captured image area corresponding to at least one predetermined human body part in the human body part localization map, so as to obtain a dressing detection result of the person to be entered.
Illustratively, before processing the captured image to obtain the human body part localization map of the person to be entered, the system further: identifies the identity of the person to be entered according to biological features in the captured image; and executes the subsequent steps only if the identity authentication of the person to be entered passes.
Illustratively, the human body part localization map is a human body skeleton map obtained by processing the captured image with a human body posture estimation model; alternatively, the human body part localization map is a human body pixel map segmented from the captured image.
Illustratively, performing dressing detection on the captured image area corresponding to at least one predetermined human body part in the human body part localization map to obtain a dressing detection result of the person to be entered includes: matching the area features of the captured image area corresponding to the at least one predetermined human body part against the standard clothing features of the scene dressing, and forming the dressing detection result from the at least one comparison result so obtained; wherein the standard clothing features include: any one or more of color, pattern, shape, and gray scale; the area features correspondingly include: any one or more of the pixel mean of the captured image area, the pattern features it contains, the shape features it contains, and its gray-scale mean.
Illustratively, there are a plurality of predetermined human body parts, each assigned a weight; the dressing detection is performed in order of weight from high to low; and if the accumulated comparison results of the higher-weight predetermined human body parts already suffice for the dressing detection result to pass, no dressing detection is performed on the captured image areas corresponding to the lower-weight predetermined human body parts.
Illustratively, each of the predetermined human body parts includes: any one or more of the head, eyes, face, shoulders, abdomen, chest, legs, feet, upper body, and lower body.
Illustratively, the dressing requirement corresponds to a scene dressing covering the upper body to the lower body of the person; the plurality of predetermined human body parts include: two symmetrical upper body parts each having the same first weight, and two symmetrical lower body parts each having the same second weight; the first and second upper body parts are respectively one and the other of the two shoulders; the first and second lower body parts are respectively one and the other of the abdomen-to-left-leg part and the abdomen-to-right-leg part; and the second weight is greater than the first weight. Performing dressing detection on the captured image areas corresponding to the plurality of predetermined human body parts in the human body part localization map then includes: obtaining the area features of the captured image areas corresponding to the first upper body part, the second upper body part, the first lower body part, and the second lower body part; matching the area features of the captured image areas corresponding to the first and second lower body parts against the standard clothing features of the scene dressing; if both lower-body comparison results match, obtaining the dressing detection result as pass; if neither matches, obtaining it as fail; if exactly one matches, further matching the area features of the captured image areas corresponding to the first and second upper body parts against the standard clothing features; if both upper-body comparison results match, obtaining the result as pass; if neither matches, as fail; and if exactly one matches, as to-be-determined.
Illustratively, the scenes with dressing requirements include: any one of a laboratory, a dust-free/sterile workroom, a medical institution, a radiology room, an indoor space where an infectious patient is located, an enterprise/public institution or a department thereof requiring uniforms, and an educational institution requiring student uniforms.
The various functions implemented in the foregoing embodiments involve computer software products; such a computer software product is stored on a storage medium and, when executed, causes a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the application, such as the flow steps of the method embodiments of figs. 2 and 4.
In the embodiments provided herein, the computer-readable storage medium may include read-only memory, random-access memory, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, USB flash drives, removable hard disks, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly termed a computer-readable medium. For example, if the instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, Digital Subscriber Line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
In one or more exemplary aspects, the functions described by the computer program(s) involved in the flow of the method of the present application may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. The steps of a method or algorithm disclosed in the present application may be embodied in a processor-executable software module, which may be located on a tangible, non-transitory computer-readable and writable storage medium. Tangible, non-transitory computer readable and writable storage media may be any available media that can be accessed by a computer.
The flowcharts and block diagrams in the figures described above illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In summary, the personnel dressing detection method, processing terminal, and storage medium of the application are applied to scenes with dressing requirements. Specifically, a captured image of a person about to enter the scene is acquired; the captured image is processed to obtain a human body part localization map of the person to be entered; and dressing detection is performed on the captured image area corresponding to at least one predetermined human body part in the localization map, so as to obtain a dressing detection result of the person to be entered. With this technical solution, the dressing of persons about to enter the scene is detected automatically, making detection efficient and accurate; the detection result can further be used to monitor personnel dressing, reminding related persons and raising their safety awareness, and persons who fail the detection, who may be strangers, can be screened out and their behavior restricted, thereby effectively improving the safety level of the scene.
The above embodiments merely illustrate the principles of the present application and its effects, and are not intended to limit the application. Anyone skilled in the art may modify or vary the above embodiments without departing from the spirit and scope of the application. Accordingly, all equivalent modifications and variations completed by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall be covered by the claims of the present application.

Claims (8)

1. A personnel dressing detection method, applied to a scene with dressing requirements, the method comprising the following steps:
acquiring a captured image of a person to enter the scene;
processing the captured image to obtain a human body part localization map of the person to be entered;
performing dressing detection on the captured image area corresponding to at least one predetermined human body part in the human body part localization map, so as to obtain a dressing detection result of the person to be entered, comprising: matching the area features of the captured image area corresponding to the at least one predetermined human body part against the standard clothing features of the scene dressing, and forming the dressing detection result from the at least one comparison result so obtained; wherein the standard clothing features comprise: any one or more of color, pattern, shape, and gray scale; the area features correspondingly comprise: any one or more of the pixel mean of the captured image area, the pattern features contained in the captured image area, the shape features contained in the captured image area, and the gray-scale mean of the captured image area;
wherein a plurality of predetermined human body parts are provided, each assigned a weight; the dressing detection is performed in order of weight from high to low; and if the accumulated comparison results of the higher-weight predetermined human body parts already suffice for the dressing detection result to pass, no dressing detection is performed on the captured image areas corresponding to the lower-weight predetermined human body parts.
2. The personnel dressing detection method according to claim 1, further comprising, before the step of processing the captured image to obtain the human body part localization map of the person to be entered:
identifying the identity of the person to be entered according to biological features in the captured image;
and executing the subsequent steps only if the identity authentication of the person to be entered passes.
3. The personnel dressing detection method according to claim 1, wherein the human body part localization map is a human body skeleton map obtained by processing the captured image with a human body posture estimation model; or, the human body part localization map is a human body pixel map segmented from the captured image.
4. The personnel dressing detection method according to claim 1, wherein each of the predetermined human body parts comprises: any one or more of the head, eyes, face, shoulders, abdomen, chest, legs, feet, upper body, and lower body.
5. The personnel dressing detection method according to claim 4, wherein the dressing requirement corresponds to a scene garment covering the person from the upper body to the lower body; the plurality of predetermined human body parts comprises two symmetrical upper body parts, each having the same first weight, and two symmetrical lower body parts, each having the same second weight; wherein the first upper body part and the second upper body part are the two shoulder parts, respectively; the first lower body part and the second lower body part are the abdomen-to-left-leg part and the abdomen-to-right-leg part, respectively; and the second weight is greater than the first weight;
performing dressing detection on the captured image regions corresponding to the plurality of predetermined human body parts in the human body part localization map comprises:
obtaining the region features of the captured image regions corresponding to the first upper body part, the second upper body part, the first lower body part, and the second lower body part, respectively;
matching the region features of the captured image regions corresponding to the first lower body part and the second lower body part against the standard garment features of the scene garment;
if the comparison results for both the first lower body part and the second lower body part match, the dressing detection result is a pass;
if the comparison results for neither the first lower body part nor the second lower body part match, the dressing detection result is a fail;
if the comparison result for exactly one of the first lower body part and the second lower body part matches, matching the region features of the captured image regions corresponding to the first upper body part and the second upper body part against the standard garment features of the scene garment;
if the comparison results for both the first upper body part and the second upper body part match, the dressing detection result is a pass;
if the comparison results for neither the first upper body part nor the second upper body part match, the dressing detection result is a fail;
if the comparison result for exactly one of the first upper body part and the second upper body part matches, the dressing detection result is undetermined.
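A minimal sketch of the decision cascade in claim 5, with each per-part comparison reduced to a boolean; all names are illustrative:

    def detect_upper_lower(lower_left: bool, lower_right: bool,
                           upper_left: bool, upper_right: bool) -> str:
        """Claim 5's cascade: the higher-weight lower-body parts are
        checked first; the upper-body parts break a lower-body tie."""
        if lower_left and lower_right:
            return "pass"
        if not lower_left and not lower_right:
            return "fail"
        # Exactly one lower-body part matched: fall back to the shoulders.
        if upper_left and upper_right:
            return "pass"
        if not upper_left and not upper_right:
            return "fail"
        return "undetermined"  # exactly one shoulder matched

This is consistent with the early-exit scheme of claim 1: because the lower-body parts carry the greater weight, two lower-body matches already suffice to pass and the shoulder regions are never examined.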
6. The personnel dressing detection method according to claim 1, wherein the scene having dressing requirements includes any one of: a laboratory, a dust-free/sterile workroom, a medical institution, a radiology room, an indoor space where an infectious patient is located, an enterprise/public institution or a department thereof that requires uniforms, and an educational institution that requires school uniforms; and/or the personnel dressing detection method further comprises: associating the dressing detection result with information on the corresponding target person, so as to provide it to a manager or to the target person.
7. A processing terminal, comprising:
a communication unit for communicating with an image acquisition device to receive a captured image, taken by the image acquisition device, of a person awaiting entry to a scene;
a storage unit for storing at least one computer program;
a processing unit, coupled to the communication unit and the storage unit, for running the at least one computer program to perform the personnel dressing detection method according to any one of claims 1 to 6.
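A minimal sketch of the terminal composition in claim 7; the frame source and the detection callable are injected stand-ins, not an API defined by the patent:

    from typing import Callable

    class ProcessingTerminal:
        """Wires a communication unit (frame source) to a processing unit
        running the stored dressing-detection program."""
        def __init__(self, receive_frame: Callable[[], object],
                     detect: Callable[[object], str]):
            self.receive_frame = receive_frame  # communication unit
            self.detect = detect                # program from the storage unit

        def run_once(self) -> str:
            # Receive one captured image and run dressing detection on it.
            return self.detect(self.receive_frame())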
8. A computer-readable storage medium, characterized in that it stores at least one computer program which, when executed, performs the personnel dressing detection method according to any one of claims 1 to 6.
CN202010461950.XA 2020-05-27 2020-05-27 Personnel dressing detection method, processing terminal and storage medium Active CN111626210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010461950.XA CN111626210B (en) 2020-05-27 2020-05-27 Personnel dressing detection method, processing terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111626210A CN111626210A (en) 2020-09-04
CN111626210B (en) 2023-09-22

Family

ID=72271912

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010461950.XA Active CN111626210B (en) 2020-05-27 2020-05-27 Personnel dressing detection method, processing terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111626210B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112307913B (en) * 2020-10-20 2021-09-28 江苏濠汉信息技术有限公司 Protective equipment wearing detection method and device based on unmanned aerial vehicle vision
CN112906651B (en) * 2021-03-25 2023-07-11 中国联合网络通信集团有限公司 Target detection method and device
CN113096288A (en) * 2021-04-27 2021-07-09 深圳市智德森水务科技有限公司 Detection system is dressed to worker
CN113536917B (en) * 2021-06-10 2024-06-07 浙江大华技术股份有限公司 Dressing recognition method, system, electronic device and storage medium
CN114183881B (en) * 2022-02-14 2022-05-24 江苏恒维智信息技术有限公司常州经开区分公司 Intelligent thermal comfort control method based on visual assistance
CN115797876B (en) * 2023-02-08 2023-04-07 华至云链科技(苏州)有限公司 Equipment monitoring processing method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018119599A1 (en) * 2016-12-26 2018-07-05 深圳前海达闼云端智能科技有限公司 Method and device for searching for person and communication system
CN109255312A (en) * 2018-08-30 2019-01-22 罗普特(厦门)科技集团有限公司 A kind of abnormal dressing detection method and device based on appearance features
CN110472574A (en) * 2019-08-15 2019-11-19 北京文安智能技术股份有限公司 A kind of nonstandard method, apparatus of detection dressing and system
CN110795989A (en) * 2019-08-28 2020-02-14 广东电网有限责任公司 Intelligent safety monitoring system for electric power operation and monitoring method thereof
CN110826610A (en) * 2019-10-29 2020-02-21 上海眼控科技股份有限公司 Method and system for intelligently detecting whether dressed clothes of personnel are standard

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant