CN114998778A - Wearing compliance detection method, detection device and computer readable storage medium - Google Patents

Wearing compliance detection method, detection device and computer readable storage medium

Info

Publication number
CN114998778A
Authority
CN
China
Prior art keywords
target object
wearing
detected
image
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210435070.4A
Other languages
Chinese (zh)
Inventor
王原原
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202210435070.4A priority Critical patent/CN114998778A/en
Publication of CN114998778A publication Critical patent/CN114998778A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person

Abstract

The application discloses a wearing compliance detection method, a detection device and a computer-readable storage medium. The method comprises: acquiring a video to be detected and a preset wearing rule set, wherein the preset wearing rule set comprises at least two wearing rules; processing the video to be detected to obtain a first target object and a second target object; acquiring the positional relationship between the first target object and the second target object; and selecting the wearing rule matched with the positional relationship from the preset wearing rule set as the current wearing rule, and determining whether the wearing recognition result of the first target object conforms to the current wearing rule. In this way, the accuracy of detecting whether wearing is compliant can be improved.

Description

Wearing compliance detection method, wearing compliance detection device and computer readable storage medium
Technical Field
The application relates to the technical field of video analysis, in particular to a wearing compliance detection method, a wearing compliance detection device and a computer-readable storage medium.
Background
A construction site contains many potential safety hazards, so the wearing of protective equipment by site workers needs to be standardized to ensure their safety during operations. For example, a worker operating under a tower crane must guard against falling debris and the like, and therefore needs to wear a safety helmet; likewise, a worker performing work at height needs to wear a safety belt to avoid falling. It is therefore necessary to monitor in real time whether workers' wearing is compliant. Although various monitoring schemes exist at present, the detection effect is poor owing to the complexity of construction-site scenes.
Disclosure of Invention
The application provides a wearing compliance detection method, a detection device and a computer-readable storage medium, which can improve the detection accuracy of whether wearing is in compliance.
In order to solve the technical problem, the technical scheme adopted by the application is as follows: there is provided a wearing compliance detection method, the method comprising: acquiring a video to be detected and a preset wearing rule set, wherein the preset wearing rule set comprises at least two wearing rules; processing a video to be detected to obtain a first target object and a second target object; acquiring the position relation of a first target object and a second target object; and selecting the wearing rule matched with the position relation from the preset wearing rule set to obtain the current wearing rule, and determining whether the wearing identification result of the first target object conforms to the current wearing rule.
In order to solve the above technical problem, another technical solution adopted by the present application is: the detection device comprises a memory and a processor which are connected with each other, wherein the memory is used for storing a computer program, and the computer program is used for realizing the wearing compliance detection method in the technical scheme when being executed by the processor.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a computer-readable storage medium for storing a computer program, which, when executed by a processor, is configured to implement the wearing compliance detection method of the above technical solution.
The beneficial effects of the above scheme are as follows: first, a video to be detected and a preset wearing rule set comprising at least two wearing rules are acquired; the video to be detected is then tracked and processed to obtain a first target object and a second target object; next, the positional relationship between the first target object and the second target object is obtained, the wearing rule matched with that positional relationship is selected from the preset wearing rule set as the current wearing rule, and whether the wearing recognition result of the first target object conforms to the current wearing rule is judged. Because the matching wearing rule is selected according to the positional relationship between the first and second target objects, different wearing rules are applied under different positional relationships to determine whether a worker's wearing is compliant, realizing differentiated detection and improving detection diversity; and because the positional relationship corresponds to the wearing rule, detection accuracy is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts. Wherein:
FIG. 1 is a schematic flow chart diagram of one embodiment of a wear compliance detection method provided herein;
FIG. 2 is a block diagram of human and vehicle detection provided by the present application;
FIG. 3 is a schematic flow chart diagram of another embodiment of a wear compliance detection method provided herein;
FIG. 4 is a block diagram of human head detection provided herein;
FIG. 5 is a schematic structural diagram of an embodiment of a detection apparatus provided in the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be noted that the following examples are only illustrative of the present application, and do not limit the scope of the present application. Likewise, the following examples are only some examples of the present application, not all examples, and all other examples obtained by a person of ordinary skill in the art without making any creative effort fall within the protection scope of the present application.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
It should be noted that the terms "first", "second" and "third" in the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of indicated technical features. Thus, a feature defined as "first," "second," or "third" may explicitly or implicitly include at least one of the feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specified otherwise. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a wearing compliance detection method provided in the present application, the method including:
S11: and acquiring the video to be detected and a preset wearing rule set.
A camera device captures a target monitoring scene (such as a power-industry scene or a construction-site scene) to obtain the video to be detected; alternatively, the video to be detected may be received from another device, or read from a storage device. The video to be detected comprises multiple frames of images to be detected. At least two wearing rules may be preconfigured to form the preset wearing rule set; that is, the preset wearing rule set comprises at least two wearing rules, where a wearing rule specifies the wearing requirements that a worker in the target monitoring scene must meet, for example: the worker needs to wear articles such as a safety helmet, safety belt, gloves, protective clothing or rubber shoes.
In one embodiment, to reduce processing time, a subset of the images to be detected may be extracted from the video to be detected for subsequent processing operations, for example: extracting one frame of image to be detected from the video every 3 frames.
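As an illustration of this frame-extraction step (the function name and stride semantics are assumptions, not part of the patent), a minimal sketch might be:

```python
def sample_frame_indices(total_frames, stride=3):
    """Return the indices of the frames kept for detection: one frame out of
    every `stride` frames, which reduces per-video processing time."""
    return list(range(0, total_frames, stride))
```

For a 10-frame clip with the default stride, the kept indices would be `[0, 3, 6, 9]`.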
S12: and processing the video to be detected to obtain a first target object and a second target object.
The following scheme can be adopted to extract a first target object and a second target object from a video to be detected:
1) and tracking the video to be detected to obtain a tracking result.
After the image to be detected is obtained, the image to be detected may be tracked to generate a tracking result, where the tracking result includes the image to be detected, the image to be detected includes a second target object and/or a third target object, and the categories of the third target object and the second target object may be different or the same, for example, the third target object is a person, the second target object is a vehicle, or both the third target object and the second target object are persons. Specifically, the image to be detected may be detected to obtain a detection result, where the detection result includes a detection frame of at least one third target object and a detection frame of at least one second target object in the image to be detected; and then associating the detection results of the multiple frames of images to be detected based on the detection frame of the third target object and the detection frame of the second target object to obtain a tracking result.
In a specific embodiment, an existing object detection model is adopted to detect the image to be detected so as to extract detection frames of at least two target objects in the image, where the at least two target objects comprise the third target object and the second target object; for example, as shown in fig. 2, the third target object is a person and the second target object is a vehicle, where A1 is the detection frame of the vehicle and B1 to B3 are detection frames of persons. Specifically, the existing object detection model is obtained by training a target detection model (which may be YOLOv5) on annotated people and vehicles: images are collected in the target monitoring scene, and the targets in them (i.e. people and vehicles) are annotated to obtain annotation data; the target detection model is then trained on the annotation data to obtain the existing object detection model.
After target detection is completed, target tracking association can be executed: based on the detection frame of each target object (including the third target object and the second target object), the detection results of multiple frames are tracked and associated so that the same person or vehicle carries the same Identification (ID) across different frames. Target tracking association is mainly used for judging the wearing compliance of the same person, and prevents repeated alarms from being raised when the same person's non-compliant wearing (for example, a worker failing to wear a safety belt or safety helmet as required) appears in multiple frames of the image to be detected; that is, only one alarm needs to be raised for the same person.
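The association step above can be sketched with a simple greedy IoU matcher. The patent does not specify a tracking algorithm, so the matching strategy, the 0.3 threshold, and all names below are illustrative assumptions:

```python
def box_iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) detection boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

class GreedyTracker:
    """Associates per-frame detection boxes so the same person or vehicle
    keeps the same ID across frames; `alerted` supports one alarm per ID."""

    def __init__(self, iou_thresh=0.3):
        self.iou_thresh = iou_thresh
        self.tracks = {}       # ID -> last seen box
        self.next_id = 0
        self.alerted = set()   # IDs that have already raised an alarm

    def update(self, boxes):
        ids = []
        for box in boxes:
            best_id, best_iou = None, self.iou_thresh
            for tid, prev in self.tracks.items():
                overlap = box_iou(box, prev)
                if overlap > best_iou:
                    best_id, best_iou = tid, overlap
            if best_id is None:  # no existing track overlaps enough: new ID
                best_id, self.next_id = self.next_id, self.next_id + 1
            self.tracks[best_id] = box
            ids.append(best_id)
        return ids

    def should_alarm(self, track_id):
        """True only the first time a non-compliant ID is reported."""
        if track_id in self.alerted:
            return False
        self.alerted.add(track_id)
        return True
```

A production tracker would also handle track expiry and enforce one-to-one assignment per frame; the sketch only shows how stable IDs support the one-alarm-per-person behaviour described above.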
2) And screening the third target object in the tracking result to obtain the first target object.
After the tracking result is obtained, all images to be detected corresponding to third target objects with the same ID in the tracking result are screened to obtain at least one first target object; that is, the first target object is one of the third target objects sharing the same ID. For example, assuming the tracking result includes frames 1 to 10 of the image to be detected containing worker P, and processing these 10 frames shows that worker P is relatively clear in frames 5 and 6, then worker P in frames 5 and 6 is taken as the first target object.
3) And identifying the image to be detected of the first target object to obtain a wearing identification result.
After the first target object is obtained, a wearing identification method in the related technology can be adopted to identify the image to be detected where the first target object is located, and a wearing identification result is generated, wherein the wearing identification result comprises an identification result of whether the first target object wears protective equipment, and the protective equipment can be a safety helmet, a safety belt, gloves, protective clothing, rubber shoes or the like.
S13: the position relation between the first target object and the second target object is obtained, the wearing rule matched with the position relation is selected from the preset wearing rule set, the current wearing rule is obtained, and whether the wearing identification result of the first target object accords with the current wearing rule or not is determined.
After the first target object is acquired, the position of the first target object and the position of the second target object can be obtained; the positional relationship between the two is then determined from these positions; next, the wearing rule matched with the positional relationship is selected from the preset wearing rule set as the current wearing rule; finally, whether the wearing recognition result of the first target object meets the current wearing rule is judged to obtain a compliance detection result, which is either compliant or non-compliant. It is to be understood that a mapping between positional relationships and wearing rules may be established in advance, so that the positional relationship can be used directly to look up the matching wearing rule from the preset wearing rule set.
In one embodiment, the preset wearing rule set includes a first wearing rule and a second wearing rule: when the distance between the first target object and the second target object is less than or equal to a preset distance threshold, the first wearing rule is determined as the current wearing rule; when the distance is greater than the preset distance threshold, the second wearing rule is determined as the current wearing rule. Alternatively, the preset wearing rule set includes first to third wearing rules: when the distance between the first target object and the second target object is less than or equal to a first preset distance threshold, the first wearing rule is determined as the current wearing rule; when the distance is greater than the first preset distance threshold and less than a second preset distance threshold, the second wearing rule is determined as the current wearing rule; and when the distance is greater than or equal to the second preset distance threshold, the third wearing rule is determined as the current wearing rule.
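The distance-threshold logic above might be sketched as follows (the function and parameter names are assumptions; the boundary at exactly the second preset distance threshold is simplified relative to the text):

```python
def select_current_rule(distance, rules, thresholds):
    """Pick the wearing rule matching a person-vehicle distance.

    `thresholds` is sorted ascending, and `rules` has one more entry than
    `thresholds`: rules[i] applies when distance <= thresholds[i], and the
    last rule applies beyond the largest threshold."""
    for i, t in enumerate(thresholds):
        if distance <= t:
            return rules[i]
    return rules[-1]
```

With the three-rule configuration, e.g. rules `["first", "second", "third"]` and thresholds `[5.0, 20.0]` metres, a distance of 3 m selects the first rule, 10 m the second, and 30 m the third.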
It is understood that the positional relationship is not limited to a distance measure; it may also be measured by Intersection over Union (IoU). The number of wearing rules in the preset wearing rule set may be set according to specific needs, and the mapping between positional relationships and wearing rules may likewise be adjusted, which is not limited in this embodiment.
This embodiment provides a method for identifying whether wearing is compliant based on video analysis: the video to be detected is first tracked to obtain a tracking result; the third target object in the tracking result is then screened to obtain the first target object; the image to be detected where the first target object is located is then recognized to obtain a wearing recognition result; finally, based on the positional relationship between the first target object and the second target object, the wearing rule matched with that positional relationship is selected from the preset wearing rule set as the current wearing rule, and whether the wearing recognition result conforms to the current wearing rule is determined to obtain a compliance detection result. In this embodiment, the matched wearing rule is selected according to the positional relationship between the first and second target objects, and different wearing rules are applied under different positional relationships to determine whether a worker's wearing is compliant, realizing differentiated detection and improving detection diversity; moreover, since the positional relationship corresponds to the wearing rule, the accuracy of detecting whether wearing is compliant is improved.
Referring to fig. 3, fig. 3 is a schematic flow chart of another embodiment of a wearing compliance detection method provided in the present application, the method including:
S31: and acquiring the video to be detected and a preset wearing rule set.
The video to be detected comprises a plurality of frames of images to be detected, the preset wearing rule set comprises at least two wearing rules, and the at least two wearing rules comprise a first wearing rule and a second wearing rule.
S32: and tracking the video to be detected to obtain a tracking result.
The tracking result comprises at least one frame of image to be detected corresponding to the third target object, the detection result of the third target object, at least one frame of image to be detected corresponding to the second target object, and the detection result of the second target object; the third target object and the second target object may exist in the same image to be detected.
S33: and screening the third target object based on the image quality of the third target object and the orientation of the third target object to obtain a screening result.
The screening result comprises a first target object, a detection result of the first target object and an image to be detected where the first target object is located, and can be divided into two parts: a part satisfying the first screening condition (referred to as a first screening result) and a part satisfying the second screening condition (referred to as a second screening result), i.e., the screening results include the first screening result and the second screening result; the following describes how to generate the first screening result and the second screening result by taking the third target object and the second target object as a person and a vehicle, respectively, as an example:
(1) human image quality screening
On the basis of tracking the associated target, evaluating the image quality of the detected human body area by adopting an image quality evaluation method in the related technology, namely, evaluating the quality of the image area where the detection frame of the third target object is located to obtain an image quality score; judging whether the image quality score is larger than a preset quality threshold value or not; and if the image quality score is larger than the preset quality threshold, determining that the first screening condition is met, determining that the third target object is the first target object, and putting the detection result of the first target object and the image to be detected where the first target object is located into the first screening result.
Further, in this solution, a method based on Video Multimethod Assessment Fusion (VMAF) may be adopted to process the detection frame of the human body, so as to obtain the image quality score.
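A hypothetical sketch of this quality screen follows (the names and the 0.6 threshold are assumptions; in practice the score would come from a VMAF-style evaluator applied to the human-body region):

```python
def screen_by_image_quality(candidates, quality_thresh=0.6):
    """Keep only third-target-object detections whose image-quality score
    exceeds the preset quality threshold; the survivors are treated as
    first target objects. `candidates` is a list of (detection, score)
    pairs, where each score comes from an image-quality evaluator."""
    return [det for det, score in candidates if score > quality_thresh]
```

For example, with candidates scored 0.9 and 0.4, only the first survives the default threshold.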
(2) Human orientation screening
Processing an image to be detected where a third target object is located by adopting a trained human body orientation model on the basis of tracking the associated target to obtain a target orientation, wherein the target orientation is the orientation of the third target object in the image to be detected relative to the camera equipment; judging whether the target orientation is a preset orientation or not and whether the image quality score is larger than a preset quality threshold or not; when the target orientation is a preset orientation and the image quality score is larger than a preset quality threshold value, determining that a first screening condition is met, determining that a third target object is a first target object, and putting a detection result of the first target object and an image to be detected where the first target object is located into a second screening result.
Further, the orientation of a human body includes forward, lateral and backward. If the chest of the human body and the lens of the camera device face each other (i.e. point in opposite directions) and the left-right deflection angle of the chest falls within a first preset angle range, the human body is considered to face forward; the first preset angle range may be [-30°, 30°]. If the chest of the human body points in the same direction as the lens and the left-right deflection angle falls within a second preset angle range, the human body is considered to face backward, where the first and second preset angle ranges are different. When neither of the above two conditions is satisfied, the human body is oriented laterally.
It should be noted that, because the safety belt may be occluded and difficult to detect when the human body is oriented laterally, the preset orientations are set to forward and backward; that is, seat-belt detection is performed only on human bodies facing toward or away from the camera device, which greatly reduces false alarms.
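The orientation screen described above could be sketched as follows. The backward angle range below is an illustrative guess, since the patent only states that the first range may be [-30°, 30°] and that the two ranges differ; all names are assumptions:

```python
# Illustrative angle ranges: the patent gives [-30, 30] degrees for the
# forward range and says only that the backward range is different.
FORWARD_RANGE = (-30.0, 30.0)
BACKWARD_RANGE = (-45.0, 45.0)

def classify_orientation(chest_faces_lens, deflection_deg):
    """chest_faces_lens: True when the chest and the camera lens point
    toward each other; deflection_deg: signed left-right deflection of
    the chest axis in degrees."""
    lo, hi = FORWARD_RANGE if chest_faces_lens else BACKWARD_RANGE
    if lo <= deflection_deg <= hi:
        return "forward" if chest_faces_lens else "backward"
    return "lateral"

def eligible_for_seatbelt_check(orientation):
    """Lateral bodies may occlude the belt, so only forward/backward pass."""
    return orientation in ("forward", "backward")
```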
S34: and identifying the image to be detected where the first target object is located to obtain a wearing identification result.
The wearing recognition result comprises a first wearing recognition result and a second wearing recognition result, after the first screening result is obtained, secondary target detection is executed, namely, a pre-trained head detection model is adopted to detect and process an image to be detected where a first target object is located in the first screening result, so as to obtain the head of the first target object, for example, as shown in fig. 4, a detection frame of the head of a person is obtained by inputting a detected person into the head detection model; and then, identifying the image where the head of the first target object is located by adopting a first identification model to obtain a first wearing identification result, wherein the first wearing identification result comprises whether the head of the first target object wears first protective equipment or not.
Further, the heads in the corresponding images to be detected are annotated, based on the human bodies annotated in the preceding step, to obtain annotation data; a target detection model (such as YOLOv5) is then trained on the annotation data to obtain the trained head detection model.
In one embodiment, the first protective equipment comprises a safety helmet, and the first recognition model is a safety-helmet recognition model. After the worker's head is detected, helmet recognition is performed: whether the worker's head wears a safety helmet is identified based on the trained safety-helmet recognition model. Specifically, the safety-helmet recognition model is a target recognition model, and its training comprises the following steps: collecting head images of people wearing and not wearing safety helmets to obtain training data; then classifying the head images in the training data based on the target recognition model to perform model training, where the target recognition model may be a Residual Network (ResNet). Because cropping to the head removes irrelevant features of the body, helmet features are easier to extract, which improves the model's performance.
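The two-stage pipeline above (head detection followed by helmet classification) might be orchestrated as in this sketch, with the trained YOLOv5 head model and ResNet helmet classifier stood in by injected callables; all names are assumptions:

```python
def recognize_helmets(frame, head_detector, helmet_classifier):
    """Two-stage helmet recognition: `head_detector` proposes (x1, y1,
    x2, y2) head boxes for a frame, then `helmet_classifier` labels each
    head crop. Restricting classification to the head crop removes
    irrelevant body features before the helmet decision."""
    results = []
    for (x1, y1, x2, y2) in head_detector(frame):
        # Crop the head region from the frame (here a nested list of rows).
        crop = [row[x1:x2] for row in frame[y1:y2]]
        results.append(helmet_classifier(crop))
    return results
```

In a real deployment the two callables would wrap the trained detection and classification networks; here stubs suffice to show the data flow.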
After the second screening result is obtained, the second identification model is adopted to identify the image to be detected where the first target object is located in the second screening result, so that a second wearing identification result is obtained, and the second wearing identification result comprises whether the first target object wears the second protection device or not.
In one embodiment, the second protective equipment comprises a safety belt, and the second recognition model is a safety-belt detection model. Safety-belt recognition is performed on the front-facing and back-facing human bodies detected in the above steps, so as to detect whether each human body wears a safety belt. Training the safety-belt detection model comprises the following steps: collecting human-body images in which a safety belt is worn, and labeling the safety belt in those images to obtain labeled data; the safety-belt detection model is then trained based on a target detection model (e.g., YOLOv5) and the labeled data.
In summary, for safety-belt recognition, human bodies with a preset orientation (i.e., facing forward or backward) and better image quality are selected; for safety-helmet recognition, human bodies with good image quality are screened. The preselection of the first target object is realized through these operations.
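The two-way screening above (image quality alone for helmet recognition; quality plus orientation for safety-belt recognition) can be sketched in plain Python. The threshold value, orientation labels, and dictionary field names below are illustrative assumptions, not values taken from the patent:

```python
def screen_detections(detections, quality_threshold=0.6,
                      preset_orientations=("front", "back")):
    """Split tracked human-body detections into two screening results.

    first_screening:  bodies whose image-quality score passes the threshold
                      (candidates for safety-helmet recognition).
    second_screening: bodies that additionally face the camera in a preset
                      orientation (candidates for safety-belt recognition).
    """
    first_screening, second_screening = [], []
    for det in detections:
        if det["quality_score"] > quality_threshold:
            first_screening.append(det)           # helmet branch
            if det["orientation"] in preset_orientations:
                second_screening.append(det)      # safety-belt branch
    return first_screening, second_screening
```

A body that fails the quality check enters neither result, matching the "no processing, no alarm" behavior described later for unscreened bodies.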
S35: and determining the position relation based on the position of the first target object and the preset area corresponding to the second target object.
Whether the position of the first target object falls within a preset area is judged to obtain the positional relationship, where the preset area is a set range around the second target object, for example, a region of radius R centered on the second target object. It is understood that the preset area may be defined in the image coordinate system or in the world coordinate system. For example: when the preset area is defined in the image coordinate system and a second target object exists in the image to be detected in which the first target object is located, whether the position of the first target object falls within the preset area can be judged directly; alternatively, when the preset area is defined in the world coordinate system and no second target object exists in the image to be detected in which the first target object is located, the position of the first target object in the image coordinate system may be converted into a position in the world coordinate system, and whether that position falls within the preset area is then judged.
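The world-coordinate case, a circular preset area of radius R centered on the vehicle, reduces to a distance check. This is a minimal sketch assuming the coordinate conversion has already produced planar (x, y) positions; the function name is hypothetical:

```python
import math

def in_preset_area(person_xy, vehicle_xy, radius_r):
    """Return True when the person's world-coordinate position falls inside
    the circular preset area of radius R centered on the second target
    object (the vehicle)."""
    dx = person_xy[0] - vehicle_xy[0]
    dy = person_xy[1] - vehicle_xy[1]
    return math.hypot(dx, dy) <= radius_r
```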
Further, when a second target object exists in the image to be detected in which the first target object is located, the intersection ratio (intersection-over-union) of the detection frame of the first target object and the detection frame of the second target object in the image to be detected is calculated to obtain a current intersection ratio, and whether the current intersection ratio is greater than a preset intersection ratio (which may be 0.5) is judged. If the current intersection ratio is greater than the preset intersection ratio, the positional relationship is determined to be that the position of the first target object falls within the preset area; if the current intersection ratio is less than or equal to the preset intersection ratio, the positional relationship is determined to be that the position of the first target object falls outside the preset area.
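The intersection-ratio test on the two detection frames can be written directly from its definition. A minimal sketch, with boxes given as (x1, y1, x2, y2) corner coordinates; the function names are illustrative:

```python
def intersection_over_union(box_a, box_b):
    """Intersection ratio (IoU) of two detection frames (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; a negative extent means the frames do not overlap.
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

def person_in_vehicle_area(person_box, vehicle_box, preset_iou=0.5):
    """Positional relation per the embodiment: inside the preset area
    only when the current intersection ratio exceeds the preset one."""
    return intersection_over_union(person_box, vehicle_box) > preset_iou
```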
It is to be understood that, in other embodiments, if no second target object exists in the image to be detected in which the first target object is located, the first target object is by default not within the preset area of the second target object.
S36: when the position relationship is that the position of the first target object falls within the preset area, determining that the current wearing rule is the first wearing rule, and determining whether the wearing identification result conforms to the first wearing rule.
If the position of the first target object is in the preset area, determining that the current wearing rule is a first wearing rule, and determining whether the wearing identification result meets the first wearing rule, wherein the first wearing rule comprises that the first target object wears first protective equipment and second protective equipment.
Further, taking the first protective equipment as a safety helmet and the second protective equipment as a safety belt as an example: when the position of the first target object is within the preset area and the first target object wears both the safety belt and the safety helmet, it is determined that the wearing of the first target object meets the first wearing rule; when the position of the first target object is within the preset area but the first target object does not wear a safety belt or does not wear a safety helmet, alarm information is generated.
S37: when the position relation is that the position of the first target object falls outside the preset area, determining that the current wearing rule is the second wearing rule, and determining whether the wearing identification result conforms to the second wearing rule.
And if the position of the first target object does not fall within the preset area of the second target object, determining that the current wearing rule is a second wearing rule, and determining whether the wearing identification result conforms to the second wearing rule, wherein the second wearing rule comprises that the first target object wears the first protective equipment.
Further, taking the first protective equipment as a safety helmet as an example: when the first target object does not fall within the preset area and wears the safety helmet, it is determined that the wearing of the first target object meets the second wearing rule; when the first target object does not fall within the preset area and does not wear the safety helmet, alarm information is generated, so that an alarm is raised when a worker's wearing is not in compliance.
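Steps S36 and S37 together form a small decision table: the positional relationship selects the rule, and the rule dictates which recognition results must be positive. A sketch under the helmet/safety-belt example (function name hypothetical):

```python
def check_wearing_compliance(in_vehicle_area, wears_helmet, wears_belt):
    """Select the wearing rule from the positional relationship and check it.

    Inside the preset area the first wearing rule applies (helmet AND belt);
    outside it the second wearing rule applies (helmet only).
    Returns True when the wearing recognition result conforms to the rule.
    """
    if in_vehicle_area:
        return wears_helmet and wears_belt   # first wearing rule (S36)
    return wears_helmet                      # second wearing rule (S37)
```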
It can be understood that, to improve the accuracy of the alarm, the wearing-compliance analysis may be performed for the same person by voting. For example, if the images to be detected corresponding to the first target object total 10 frames and the set value is 5 frames, and the wearing recognition result of 7 of the 10 frames meets the specification, which is greater than the set value, it is determined that the wearing of the first target object throughout the operation is in compliance and poses no potential safety hazard.
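The per-person voting step can be expressed as a one-line tally over the per-frame results. The default set value follows the 10-frame example above; the function name is illustrative:

```python
def vote_compliant(frame_results, set_value=5):
    """Vote over the per-frame wearing recognition results of one person:
    the person is judged compliant when more than `set_value` frames
    meet the specification."""
    return sum(1 for ok in frame_results if ok) > set_value
```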
In other embodiments, the number of times the wearing of the same first target object fails to satisfy the current wearing rule may be recorded to obtain a violation count; whether the violation count is greater than a preset count is then judged; and if so, alarm information is generated, so that an alarm is raised when a worker's wearing is not in compliance. For example, if, within a specified time such as 5 seconds, the ratio of the counted violations to the number of frames processed in those 5 seconds is greater than a preset ratio (e.g., 50%), an alarm is issued.
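The ratio-based variant of the alarm check is a straightforward guard. A sketch assuming the violation count and processed-frame count for the time window are already tallied; the names are hypothetical:

```python
def should_alarm(violation_count, processed_frames, preset_ratio=0.5):
    """Alarm when the ratio of recorded violations to frames processed
    within the specified time window exceeds the preset ratio."""
    if processed_frames == 0:
        return False   # nothing analysed in the window; no alarm
    return violation_count / processed_frames > preset_ratio
```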
It should be noted that, to improve the accuracy of recognition and avoid false alarms, if a human body in the image to be detected does not satisfy the screening condition in the above process, it is not processed and no alarm is raised for the time being; analysis is performed once the screening condition is satisfied.
This embodiment provides a detection scheme for construction sites with vehicles, which adaptively selects the wearing rule of the corresponding area by identifying which area a human body is in (namely, whether the person is within the set range around a vehicle), improving the recognition effect, and can be applied to construction scenes on a building site. In addition, during safety-belt detection the third target object is screened based on its orientation, and safety-belt detection is performed only on front-facing and back-facing human bodies, which benefits recognition accuracy and yields strong generalization. Furthermore, whether a human body wears a safety helmet is identified based on the head of the human body, which improves the recognition effect for the safety helmet.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of the detection apparatus provided in the present application. The detection apparatus 50 includes a memory 51 and a processor 52 connected to each other; the memory 51 is used for storing a computer program which, when executed by the processor 52, implements the wearing compliance detection method of the foregoing embodiments.
In the related art, judgment of the wearing compliance of workers targets only the detection of a safety helmet or work clothes, and lacks support for large construction-site campus scenarios. This embodiment provides joint detection of safety helmet and safety belt for construction-site scenarios, supporting wearing detection both with and without vehicles: when a person is located within a vehicle area, whether the person wears both a safety helmet and a safety belt is detected; when the person is not within a vehicle area, only whether the person wears a safety helmet is detected. Different areas thus adopt different wearing rules for judgment, improving detection accuracy.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application. The computer-readable storage medium 60 is used for storing a computer program 61 which, when executed by a processor, implements the wearing compliance detection method of the foregoing embodiments.
The computer-readable storage medium 60 may be a server, a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is only one type of logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.

Claims (13)

1. A wear compliance detection method, comprising:
acquiring a video to be detected and a preset wearing rule set, wherein the preset wearing rule set comprises at least two wearing rules;
processing the video to be detected to obtain a first target object and a second target object;
acquiring the position relation between the first target object and the second target object;
and selecting the wearing rule matched with the position relation from a preset wearing rule set to obtain a current wearing rule, and determining whether the wearing identification result of the first target object conforms to the current wearing rule.
2. The wearing compliance detecting method according to claim 1, wherein the step of processing the video to be detected to obtain a first target object and a second target object comprises:
tracking the video to be detected to obtain a tracking result, wherein the tracking result comprises an image to be detected, and the image to be detected comprises the second target object and/or a third target object;
screening a third target object in the tracking result to obtain the first target object;
before the step of determining whether the wearing identification result of the first target object conforms to the current wearing rule, the method comprises the following steps:
and identifying the image to be detected where the first target object is located to obtain the wearing identification result.
3. The wearing compliance detecting method according to claim 2, wherein the at least two wearing rules include a first wearing rule and a second wearing rule, and the step of acquiring the positional relationship between the first target object and the second target object includes:
and determining the position relation based on the position of the first target object and a preset area corresponding to the second target object.
4. The wearing compliance detecting method according to claim 3, wherein the step of selecting the wearing rule matching the position relation from the preset wearing rule set to obtain the current wearing rule includes:
when the position relation is that the position of the first target object is within the preset area, determining that the current wearing rule is a first wearing rule, wherein the first wearing rule comprises that the first target object wears first protective equipment and second protective equipment;
when the position relation is that the position of the first target object is outside the preset area, determining that the current wearing rule is a second wearing rule, wherein the second wearing rule comprises that the first target object wears the first protective equipment.
5. The wearing compliance detection method of claim 4, wherein the step of screening the third target object in the tracking result to obtain the first target object comprises:
and screening the third target object based on the image quality of the third target object and the orientation of the third target object to obtain a screening result, wherein the screening result comprises the first target object, a detection result of the first target object and an image to be detected where the first target object is located.
6. The wearing compliance detection method of claim 5, wherein the screening results include a first screening result and a second screening result, and the step of screening the third target object based on the image quality of the third target object and the orientation of the third target object to obtain the screening results includes:
performing quality evaluation processing on the image area where the detection frame of the third target object is located to obtain an image quality score;
judging whether the image quality score is larger than a preset quality threshold value or not;
if so, determining that the third target object is the first target object, and putting the detection result of the first target object and the image to be detected where the first target object is located into the first screening result.
7. The wearing compliance detecting method of claim 6, wherein the step of screening the third target object based on the image quality of the third target object and the orientation of the third target object to obtain the screening result further comprises:
processing the image to be detected where the third target object is located by adopting a human body orientation model to obtain a target orientation, wherein the target orientation is the orientation of the third target object in the image to be detected relative to the camera equipment;
and when the target orientation is a preset orientation and the image quality score is greater than a preset quality threshold value, determining that the third target object is the first target object, and putting the detection result of the first target object and the image to be detected where the first target object is located into the second screening result.
8. The wear compliance detection method of claim 6, wherein the wear identification result comprises a first wear identification result, the method further comprising:
detecting the image to be detected in which the first target object is located in the first screening result by adopting a head detection model to obtain the head of the first target object;
the step of identifying the to-be-detected image of the first target object to obtain the wearing identification result comprises the following steps of:
and identifying the image of the head of the first target object by adopting a first identification model to obtain the first wearing identification result, wherein the first wearing identification result comprises whether the head of the first target object wears the first protection equipment.
9. The wearing compliance detection method according to claim 8, wherein the wearing recognition result further includes a second wearing recognition result, and the step of performing recognition processing on the image to be detected where the first target object is located to obtain the wearing recognition result further includes:
and identifying the image to be detected where the first target object is located in the second screening result by using a second identification model to obtain a second wearing identification result, wherein the second wearing identification result comprises whether the first target object wears the second protective device.
10. The wearing compliance detection method according to claim 3, wherein the step of determining the positional relationship based on the position of the first target object and a preset region corresponding to the second target object includes:
calculating the intersection ratio of the detection frame of the first target object and the detection frame of the second target object to obtain the current intersection ratio;
judging whether the current intersection ratio is greater than a preset intersection ratio or not;
if so, determining that the position relation is that the position of the first target object falls in the preset area;
if not, determining that the position relation is that the position of the first target object is outside the preset area.
11. The wear compliance detection method of claim 1, further comprising:
recording the number of times that the wearing of the same first target object does not meet the current wearing regulation, and obtaining the number of violation times;
judging whether the violation times are greater than preset times or not;
and if so, generating alarm information.
12. A detection apparatus, comprising a memory and a processor connected to each other, wherein the memory is configured to store a computer program, which when executed by the processor is configured to implement the wear compliance detection method of any one of claims 1-11.
13. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, is configured to implement the wear compliance detection method of any one of claims 1-11.
CN202210435070.4A 2022-04-24 2022-04-24 Wearing compliance detection method, detection device and computer readable storage medium Pending CN114998778A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210435070.4A CN114998778A (en) 2022-04-24 2022-04-24 Wearing compliance detection method, detection device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210435070.4A CN114998778A (en) 2022-04-24 2022-04-24 Wearing compliance detection method, detection device and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114998778A true CN114998778A (en) 2022-09-02

Family

ID=83025811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210435070.4A Pending CN114998778A (en) 2022-04-24 2022-04-24 Wearing compliance detection method, detection device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114998778A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116311556A (en) * 2023-04-06 2023-06-23 北京数通魔方科技有限公司 Management and control method and management and control system based on artificial intelligence
CN116311556B (en) * 2023-04-06 2023-08-11 北京数通魔方科技有限公司 Management and control method and management and control system based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN111445524B (en) Scene understanding-based construction site worker unsafe behavior identification method
CN106372662B (en) Detection method and device for wearing of safety helmet, camera and server
CN109830078B (en) Intelligent behavior analysis method and intelligent behavior analysis equipment suitable for narrow space
CN112235537B (en) Transformer substation field operation safety early warning method
CN108053427A (en) A kind of modified multi-object tracking method, system and device based on KCF and Kalman
CN103714631B (en) ATM cash dispenser intelligent monitor system based on recognition of face
CN110390229B (en) Face picture screening method and device, electronic equipment and storage medium
CN112434669B (en) Human body behavior detection method and system based on multi-information fusion
CN114783037B (en) Object re-recognition method, object re-recognition apparatus, and computer-readable storage medium
CN112069988A (en) Gun-ball linkage-based driver safe driving behavior detection method
CN112613449A (en) Safety helmet wearing detection and identification method and system based on video face image
CN114894337B (en) Temperature measurement method and device for outdoor face recognition
CN114998778A (en) Wearing compliance detection method, detection device and computer readable storage medium
CN111325133A (en) Image processing system based on artificial intelligence recognition
CN111460985A (en) On-site worker track statistical method and system based on cross-camera human body matching
CN113947783A (en) Personnel riding monitoring management method and system
CN115880722A (en) Intelligent identification method, system and medium worn by power distribution operating personnel
CN115797856A (en) Intelligent construction scene safety monitoring method based on machine vision
CN109800656B (en) Positioning method and related product
CN114220117A (en) Wearing compliance detection method and device and computer readable storage medium
CN112597903B (en) Electric power personnel safety state intelligent identification method and medium based on stride measurement
KR101840042B1 (en) Multi-Imaginary Fence Line Setting Method and Trespassing Sensing System
CN111383248A (en) Method and device for judging red light running of pedestrian and electronic equipment
WO2020217812A1 (en) Image processing device that recognizes state of subject and method for same
Sun et al. An improved YOLO V5-based algorithm of safety helmet wearing detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination