CN112101260B - Method, device, equipment and storage medium for identifying safety belt of operator


Info

Publication number
CN112101260B
CN112101260B (application CN202011002640.8A)
Authority
CN
China
Prior art keywords
image
preset
target operator
safety belt
determining
Prior art date
Legal status
Active
Application number
CN202011002640.8A
Other languages
Chinese (zh)
Other versions
CN112101260A (en)
Inventor
方燕琼
涂小涛
郑培文
胡春潮
伍晓泉
李晓枫
Current Assignee
China Southern Power Grid Power Technology Co Ltd
Original Assignee
China Southern Power Grid Power Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Southern Power Grid Power Technology Co Ltd
Priority to CN202011002640.8A
Publication of CN112101260A
Application granted
Publication of CN112101260B


Classifications

    • G06V 40/10 Recognition of human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06F 18/2411 Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06V 10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V 10/56 Extraction of image or video features relating to colour

Abstract

The invention discloses a method, a device, equipment and a storage medium for identifying an operator safety belt. The method comprises the following steps: acquiring an operation area image; preprocessing the operation area image to generate a preprocessed image; detecting whether a target operator exists in the preprocessed image; when a target operator exists in the preprocessed image, determining an area image to be identified of the target operator from the preprocessed image; generating comprehensive features according to the direction gradient histogram HOG features and the color histogram HOC features extracted from the area image to be identified; and inputting the comprehensive features into a preset SVM classification model and outputting a safety belt identification result. The accuracy of safety belt identification is thereby improved, operators can be reminded in time, and safety accidents are avoided.

Description

Method, device, equipment and storage medium for identifying safety belt of operator
Technical Field
The present invention relates to the field of image recognition technologies, and in particular, to a method, an apparatus, a device, and a storage medium for recognizing a safety belt of an operator.
Background
At present, some work on electric power construction sites must be completed by operators at height. When the height reaches a certain degree, the operators must wear safety belts so that their safety is guaranteed, but some operators often fail to wear safety belts for their own reasons, causing personal safety problems. Human supervision suffers from various practical limitations, and personal safety accidents can easily occur if supervision is not timely.
For the safety supervision of construction sites and the detection of whether climbing operators wear safety belts, a safety belt wearing detection method based on image identification has been adopted. Firstly, the foreground is extracted from the obtained video image by the background difference method and binarized to segment the moving targets; according to the characteristics of the targets, scale filtering and similar methods are used to distinguish whether a moving target represents a person, and the moving targets representing people are tracked and marked. Finally, two detection lines are set in the middle of the pavement, and when a person's moving target reaches the middle of the two detection lines, whether the person wears a safety belt is judged by detecting the distribution of the chromaticity values of the pixel points in the 2/3 portion between the detection lines.
However, the above identification method can only perform safety belt wearing detection for every person crossing the detection lines in the image; in actual operation, managers and other non-operators may appear in the image, so the method is prone to false identification. Meanwhile, because the operation scene is complex and the safety belt occupies only a small proportion of the picture, a safety belt identification method based only on the color histogram (histogram of oriented color, HOC) is easily affected by factors such as weather, illumination, shadow or angle, resulting in low identification accuracy.
Disclosure of Invention
The invention provides a method, a device, equipment and a storage medium for identifying an operator safety belt, which solve the technical problems that the safety belt identification methods in the prior art cannot automatically identify operators and are easily influenced by environmental factors, resulting in low identification accuracy.
The invention provides a method for identifying an operator safety belt, which comprises the following steps:
acquiring an operation area image;
preprocessing the operation area image to generate a preprocessed image;
detecting whether a target operator exists in the preprocessed image;
when a target operator exists in the preprocessed image, determining an area image to be identified of the target operator from the preprocessed image;
generating comprehensive features according to the direction gradient histogram HOG features and the color histogram HOC features extracted from the region images to be identified;
and inputting the comprehensive characteristics into a preset SVM classification model, and outputting a safety belt identification result.
Optionally, the step of preprocessing the operation area image to generate a preprocessed image includes:
detecting a plurality of color components of the work area image; the plurality of color components includes a red component, a green component, and a blue component;
calculating a gray value of the operation region image using the red component, the green component, and the blue component;
constructing a gray level image according to the gray level value of the operation area image;
and denoising the gray level image to generate a preprocessed image.
Optionally, the step of detecting whether the target operator exists in the preprocessed image includes:
inputting the preprocessed image into a preset personnel detection model and a preset ladder detection model, and generating a first image frame and a second image frame on the preprocessed image; the first image frame is used for identifying the position of a person, and the second image frame is used for identifying the position of a ladder;
calculating a first ratio between an area of the overlapping portion and an area of the second image frame when there is an overlapping portion between the first image frame and the second image frame;
calculating a first distance between the bottom edge of the overlapping portion and the bottom edge of the second image frame;
calculating a second distance between a top edge of the second image frame and a bottom edge of the second image frame;
determining a second ratio between the first distance and the second distance;
and when the first proportion is greater than or equal to a first preset threshold value and the second proportion is greater than or equal to a second preset threshold value, determining that a target operator exists in the preprocessed image.
Optionally, when the target operator exists in the preprocessed image, the step of determining the image of the area to be identified of the target operator from the preprocessed image includes:
when a target operator exists in the preprocessed image, capturing an image in the first image frame from the preprocessed image as a personnel area image;
inputting the personnel area image into a preset two-dimensional gesture estimation network model, and determining two-dimensional joint coordinates of personnel corresponding to the personnel area image;
if the number of matches between the two-dimensional joint coordinates and the preset two-dimensional climbing gesture coordinates is greater than a first preset number threshold, determining that the gesture of the person corresponding to the person region image is a climbing gesture;
and intercepting a preset proportion image of the personnel area image to be used as an area image to be identified of the target operator.
Optionally, the method further comprises:
inputting the two-dimensional joint coordinates into a preset three-dimensional gesture estimation network model, and determining the three-dimensional joint coordinates of the person corresponding to the person region image;
if the matching number of the three-dimensional joint coordinates and the preset three-dimensional climbing gesture coordinates is larger than a second preset number threshold, determining that the gesture of the person corresponding to the person region image is a climbing gesture;
and intercepting a preset proportion image of the personnel area image to be used as an area image to be identified of the target operator.
Optionally, the step of generating a composite feature according to the direction gradient histogram HOG feature and the color histogram HOC feature extracted from the region image to be identified includes:
removing the background of the area image to be identified and filtering to generate an image to be processed;
dividing the image to be processed into a plurality of pixel small blocks;
calculating gradient direction histograms of the pixel small blocks to obtain HOG characteristics;
loading the image to be processed in a preset HSV color space, and determining the tone and saturation values of each pixel point in the image to be processed;
constructing a color histogram based on the hue and saturation values of each pixel point to obtain an HOC characteristic;
and carrying out normalization processing on the HOG features and the HOC features, and splicing to generate comprehensive features.
Optionally, the safety belt identification result includes that the target operator has a safety belt and that the target operator has no safety belt, and the step of inputting the comprehensive characteristics into a preset SVM classification model and outputting the safety belt identification result includes:
judging whether a safety belt exists in an area image to be identified corresponding to the comprehensive characteristics according to the comprehensive characteristics through the preset SVM classification model;
if yes, outputting a prompt that the target operator has a safety belt;
if not, outputting a prompt that the target operator has no safety belt.
The invention provides an operator safety belt identification device, which comprises:
the to-be-operated area image acquisition module is used for acquiring an operation area image;
the preprocessing image generation module is used for preprocessing the operation area image to generate a preprocessing image;
the target operator detection module is used for detecting whether target operators exist in the preprocessed image;
the area image to be identified determining module is used for determining an area image to be identified of the target operator from the preprocessed image when the target operator exists in the preprocessed image;
the comprehensive feature generation module is used for generating comprehensive features according to the direction gradient histogram HOG features and the color histogram HOC features extracted from the region image to be identified;
and the safety belt identification result output module is used for inputting the comprehensive characteristics into a preset SVM classification model and outputting a safety belt identification result.
Optionally, the preprocessing image generating module includes:
a color component detection sub-module for detecting a plurality of color components of the work area image; the plurality of color components includes a red component, a green component, and a blue component;
a gray value calculation sub-module for calculating a gray value of the operation area image using the red component, the green component, and the blue component;
a gray image construction sub-module for constructing a gray image according to the gray value of the operation area image;
and the preprocessing image generation sub-module is used for carrying out denoising processing on the gray level image to generate a preprocessing image.
Optionally, the target operator detection module includes:
the image frame generation sub-module is used for inputting the preprocessed image into a preset personnel detection model and a preset ladder detection model, and generating a first image frame and a second image frame on the preprocessed image; the first image frame is used for identifying the position of a person, and the second image frame is used for identifying the position of a ladder;
a first proportion calculating sub-module for calculating a first proportion between an area of the overlapping portion and an area of the second image frame when there is an overlapping portion between the first image frame and the second image frame;
A first distance calculating sub-module for calculating a first distance between a bottom edge of the overlapping portion and a bottom edge of the second image frame;
a second distance calculation sub-module for calculating a second distance between a top edge of the second image frame and a bottom edge of the second image frame;
a second ratio determination sub-module for determining a second ratio between the first distance and the second distance;
and the target operator detection sub-module is used for determining that the target operator exists in the preprocessed image when the first proportion is greater than or equal to a first preset threshold value and the second proportion is greater than or equal to a second preset threshold value.
Optionally, the to-be-identified area image determining module includes:
a personnel area image intercepting sub-module, configured to intercept an image in the first image frame from the preprocessed image as a personnel area image when a target operator exists in the preprocessed image;
the two-dimensional joint coordinate determination submodule is used for inputting the personnel area image into a preset two-dimensional gesture estimation network model and determining the two-dimensional joint coordinate of the personnel corresponding to the personnel area image;
the first climbing gesture determining submodule is used for determining that the gesture of the person corresponding to the person region image is a climbing gesture if the matching number of the two-dimensional joint coordinates and the preset two-dimensional climbing gesture coordinates is larger than a first preset number threshold;
The first to-be-identified area image intercepting sub-module is used for intercepting a predetermined proportion image of the personnel area image to serve as the to-be-identified area image of the target operating personnel.
Optionally, the to-be-identified area image determining module further includes:
the three-dimensional joint coordinate determination submodule is used for inputting the two-dimensional joint coordinate into a preset three-dimensional gesture estimation network model and determining the three-dimensional joint coordinate of the person corresponding to the person region image;
the second climbing gesture determining submodule is used for determining that the gesture of the person corresponding to the person region image is a climbing gesture if the matching number of the three-dimensional joint coordinates and the preset three-dimensional climbing gesture coordinates is larger than a second preset number threshold;
the second to-be-identified area image intercepting sub-module is used for intercepting a predetermined proportion image of the personnel area image to serve as the to-be-identified area image of the target operating personnel.
Optionally, the integrated feature generation module includes:
the image to be processed generating sub-module is used for removing the background of the area image to be identified and filtering to generate an image to be processed;
the pixel small block dividing sub-module is used for dividing the image to be processed into a plurality of pixel small blocks;
The HOG feature determining submodule is used for calculating gradient direction histograms of the pixel small blocks to obtain HOG features;
a pixel value determining submodule, configured to load the image to be processed in a preset HSV color space, and determine values of hue and saturation of each pixel point in the image to be processed;
the HOC feature determining submodule is used for constructing a color histogram based on the hue and saturation values of each pixel point to obtain HOC features;
and the comprehensive feature generation sub-module is used for carrying out normalization processing on the HOG features and the HOC features and splicing the HOG features and the HOC features to generate comprehensive features.
Optionally, the safety belt identification result includes that the target operator has a safety belt and the target operator has no safety belt, and the safety belt identification result output module includes:
the safety belt judging sub-module is used for judging whether a safety belt exists in the region image to be identified corresponding to the comprehensive characteristics according to the comprehensive characteristics through the preset SVM classification model;
the first prompting submodule is used for outputting a prompt that a target operator has a safety belt if the target operator has the safety belt;
and the second prompting sub-module is used for outputting a prompt that the target operator has no safety belt if not.
The invention also provides an electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to execute the steps of the method for identifying the safety belt of the operator according to any embodiment.
The present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the method for identifying an operator safety belt as described in any of the above embodiments.
From the above technical scheme, the invention has the following advantages:
In the embodiment of the invention, the acquired operation area image is preprocessed to generate a preprocessed image. When a target operator exists in the preprocessed image, the area image to be identified of the target operator is determined, the corresponding HOG features and HOC features are extracted from the area image to be identified and combined into comprehensive features, and the comprehensive features are input into a preset SVM classification model, which outputs a safety belt identification result indicating whether the target operator is wearing a safety belt. This solves the technical problem that safety belt identification methods in the prior art cannot automatically identify operators and are easily influenced by environmental factors, resulting in low identification accuracy; the accuracy of safety belt identification is improved, operators can be reminded in time, and safety accidents are avoided.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of steps of a method for identifying a safety belt of an operator according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating steps of a method for identifying a safety belt of an operator according to an alternative embodiment of the present invention;
FIG. 3 is a schematic diagram of an overlapping portion of a first image frame and a second image frame according to an embodiment of the present invention;
FIG. 4 is a flowchart of steps for determining an image of a region to be identified of the target operator according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a two-dimensional posture estimation result and a three-dimensional posture estimation result according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a three-dimensional pose estimation network model according to an embodiment of the present invention;
FIG. 7 is a block flow diagram of determining whether an operator is in a climbing posture in an embodiment of the present invention;
Fig. 8 is a block diagram of a safety belt recognition device for an operator according to an embodiment of the present invention.
Detailed Description
In the prior art, a pedestrian detection classifier is usually trained on directional gradient histogram (Histogram of Oriented Gradient, HOG) features with a linear support vector machine (Support Vector Machine, SVM), using positive samples with pedestrian targets and negative samples without pedestrians collected from the scene to be monitored; pedestrian targets entering the scene are detected, and the detected pedestrians are then subjected to dressing analysis and judgment. Such methods are easily influenced by environmental factors, cannot identify whether a pedestrian is an operator, and therefore have low identification accuracy. The embodiments of the invention provide a method, a device, equipment and a storage medium for identifying an operator safety belt, which are used to solve the technical problems that the safety belt identification methods in the prior art cannot automatically identify operators and are easily influenced by environmental factors, resulting in low identification accuracy.
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention are described in detail below with reference to the accompanying drawings, and it is apparent that the embodiments described below are only some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of a method for identifying a safety belt of an operator according to an embodiment of the present invention.
The invention provides a method for identifying an operator safety belt, which comprises the following steps:
Step 101, acquiring an operation area image;
in the embodiment of the invention, when whether the operator carries the safety belt or not needs to be judged, the image of the working area where the operator is located can be acquired through the camera or other image acquisition devices.
Step 102, preprocessing the operation area image to generate a preprocessed image;
in a specific implementation, the acquired operation area image may contain noise or colors that are difficult to distinguish. The operation area image can therefore be preprocessed to eliminate the noise and unify the gray values in the image, generating a preprocessed image.
Step 103, detecting whether a target operator exists in the preprocessed image;
after the preprocessed image is obtained, whether a target operator exists in it can be detected first. To reduce the load on the processor and improve safety belt recognition efficiency, the judgment of whether the target operator wears a safety belt is continued only when a target operator is present.
If no target operator is present, preprocessed images generated from other operation area images can continue to be acquired for detection.
Step 104, when a target operator exists in the preprocessed image, determining an area image to be identified of the target operator from the preprocessed image;
in one example of the invention, when a target operator exists in the preprocessed image, the preprocessed image needs to be checked for whether the target operator is wearing a safety belt. To avoid false detections, for example when the target operator is standing on the ground or is not on a ladder, the area image to be identified of the target operator is determined from the preprocessed image so as to confirm that the target operator is in a climbing operation state.
Step 105, generating comprehensive features according to the direction gradient histogram HOG features and the color histogram HOC features extracted from the region image to be identified;
once the area image to be identified is acquired, the target operator is known to be in a climbing operation state. The corresponding HOG features and HOC features are extracted from the area image to be identified so that the presence and position of the safety belt can be recognized, and, to improve the efficiency of the subsequent safety belt identification, the HOG features and HOC features are combined to generate comprehensive features.
It is worth mentioning that HOG (Histogram of Oriented Gradient, directional gradient histogram) features refer to a descriptor that can quickly describe local gradient features of an object. Firstly, dividing a window into a plurality of blocks, then dividing each block into a plurality of cells, counting a gradient direction histogram in each cell as a characteristic vector of the cell, connecting the characteristic vector of each cell as a characteristic vector of a block, and finally connecting the characteristic vectors of the blocks to obtain the HOG characteristic descriptor of the window.
The HOC (histogram of oriented color, color histogram) feature refers to a descriptor that can describe the proportion of different colors in the entire image.
Step 106, inputting the comprehensive characteristics into a preset SVM classification model, and outputting a safety belt identification result.
The support vector machine (Support Vector Machine, SVM) refers to a two-class classification model, the basic model of which is defined as the linear classifier with the largest interval on the feature space, and the learning strategy of which is the interval maximization, and can be finally converted into a solution of a convex quadratic programming problem.
In the embodiment of the invention, the acquired operation area image is preprocessed to generate a preprocessed image. When a target operator exists in the preprocessed image, the area image to be identified of the target operator is determined, the corresponding HOG features and HOC features are extracted from the area image to be identified and combined into comprehensive features, and the comprehensive features are input into a preset SVM classification model, which outputs a safety belt identification result indicating whether the target operator is wearing a safety belt. This solves the technical problem that safety belt identification methods in the prior art cannot automatically identify operators and are easily influenced by environmental factors, resulting in low identification accuracy; the accuracy of safety belt identification is improved, operators can be reminded in time, and safety accidents are avoided.
Referring to fig. 2, fig. 2 is a flowchart illustrating steps of a method for identifying a safety belt of an operator according to an alternative embodiment of the present invention.
The invention provides a method for identifying an operator safety belt, which comprises the following steps:
step 201, acquiring a working area image;
in the embodiment of the present invention, the specific implementation process of step 201 is similar to that of step 101 described above, and will not be repeated here.
Step 202, preprocessing the operation area image to generate a preprocessed image;
optionally, the step 202 may include the steps of:
detecting a plurality of color components of the work area image; the plurality of color components includes a red component, a green component, and a blue component;
calculating a gray value of the operation region image using the red component, the green component, and the blue component;
constructing a gray level image according to the gray level value of the operation area image;
and denoising the gray level image to generate a preprocessed image.
In the embodiment of the invention, the acquired operation area images are generally color images, and since human vision has different sensitivities to red, green and blue, grayscale processing is performed on the color images with a weighted average method to construct gray scale images.
Taking a YUV color space as an example, a gray image is constructed by taking a Y component in the YUV space, and the corresponding linear relation between the brightness Y and three color components of R (red component), G (green component) and B (blue component) is shown in the following formula:
Y=0.299×R+0.587×G+0.114×B
where Y represents both the luminance value and the gray value of the image, and R, G and B represent the values of the red, green and blue components in the RGB color space of the image.
YUV is a color encoding space for true color; terms such as Y'UV, YUV, YCbCr and YPbPr overlap in usage and may all be referred to as YUV. "Y" represents brightness (Luminance or Luma), i.e. the gray value, while "U" and "V" represent chrominance (Chroma), describing the color and saturation of a given pixel.
After the gray image is constructed, noise may exist in the image during the acquisition process due to environmental influences, such as camera shake, and the constructed gray image needs to be subjected to denoising processing to generate a preprocessed image.
For example, a neighborhood mean filter can be used for noise suppression. The algorithm proceeds as follows: let the image f(x, y) be an N-order square matrix and denote the smoothed image by g(x, y); the new value of each pixel is the average of the pixels in its specified neighborhood, which removes abrupt pixel values and thereby filters out certain noise. The neighborhood mean filter is defined by the following formula:

g(x, y) = (1/A) × Σ_{(i,j)∈S} f(i, j)

where S denotes the set of coordinates of the pixels in the neighborhood centered on the pixel (x, y), and A is the number of pixels in the set S.
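The two preprocessing steps above can be sketched in a few lines of Python. The following is a minimal sketch, assuming an OpenCV-style BGR input and a 3x3 neighborhood (so A = 9); the window size is an illustrative assumption, not a value fixed by the text:

    import cv2
    import numpy as np

    def preprocess(work_area_bgr: np.ndarray) -> np.ndarray:
        b, g, r = cv2.split(work_area_bgr.astype(np.float32))
        # Weighted-average grayscale: Y = 0.299*R + 0.587*G + 0.114*B
        gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
        # Neighborhood mean filter: each pixel becomes the mean of its
        # neighborhood S; with a 3x3 window, A = 9
        return cv2.blur(gray, (3, 3))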
Step 203, detecting whether a target operator exists in the preprocessed image;
further, the step 203 may include the steps of:
inputting the preprocessed image into a preset personnel detection model and a preset ladder detection model, and generating a first image frame and a second image frame on the preprocessed image; the first image frame is used for identifying the position of a person, and the second image frame is used for identifying the position of a ladder;
calculating a first ratio between an area of the overlapping portion and an area of the second image frame when there is an overlapping portion between the first image frame and the second image frame;
calculating a first distance between the bottom edge of the overlapping portion and the bottom edge of the second image frame;
calculating a second distance between a top edge of the second image frame and a bottom edge of the second image frame;
determining a second ratio between the first distance and the second distance;
and when the first proportion is greater than or equal to a first preset threshold value and the second proportion is greater than or equal to a second preset threshold value, determining that a target operator exists in the preprocessed image.
In a specific implementation, the preset personnel detection model and the preset ladder detection model can be obtained by training a target detection model such as M2Det: a personnel detector (M2Det1) and a ladder detector (M2Det2) are trained separately on a large number of images of personnel and ladders annotated with ground-truth boxes; images are then input to the personnel detector (M2Det1) and the ladder detector (M2Det2), feature vectors are computed through the network, and the models are corrected by back propagation to obtain the personnel detection model and the ladder detection model.
In the embodiment of the invention, after the first image frame and the second image frame are acquired, whether the target operator exists in the preprocessed image or not can be judged according to the position relation of the first image frame and the second image frame, wherein the target operator refers to an operator on a ladder.
Referring to FIG. 3, FIG. 3 is a schematic diagram showing the overlapping portion of a first image frame and a second image frame according to an embodiment of the present invention. The first image frame is denoted B_p and the second image frame B_l; the area of the first image frame is S_bp, the area of the second image frame is S_bl, and the area of the overlapping portion is S_i. The distance between the top edge and the bottom edge of the second image frame is H_l, and the distance between the bottom edge of the overlapping portion and the bottom edge of the second image frame is H_d.
In a specific implementation, when there is an overlap between the first image frame and the second image frame, the first ratio T_1 = S_i / S_bl between S_i and S_bl and the second ratio T_2 = H_d / H_l between H_d and H_l are calculated. When the first ratio reaches the first preset threshold and the second ratio reaches the second preset threshold, it is determined that a target operator exists in the preprocessed image.
Optionally, when there is no overlapping portion between the first image frame and the second image frame, it is determined that there is no target operator in the preprocessed image.
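Expressed in code, the target-operator test reduces to a box-overlap computation over B_p and B_l. A minimal sketch, assuming axis-aligned boxes given as (x1, y1, x2, y2) with y increasing downward; the two threshold values are illustrative assumptions:

    def is_target_operator(person_box, ladder_box, t1_min=0.3, t2_min=0.2) -> bool:
        px1, py1, px2, py2 = person_box   # first image frame B_p
        lx1, ly1, lx2, ly2 = ladder_box   # second image frame B_l
        ix1, iy1 = max(px1, lx1), max(py1, ly1)
        ix2, iy2 = min(px2, lx2), min(py2, ly2)
        if ix2 <= ix1 or iy2 <= iy1:
            return False                  # no overlap: no target operator
        s_i = (ix2 - ix1) * (iy2 - iy1)   # overlap area S_i
        s_bl = (lx2 - lx1) * (ly2 - ly1)  # ladder-box area S_bl
        t1 = s_i / s_bl                   # first ratio T_1
        h_l = ly2 - ly1                   # ladder-box height H_l
        h_d = ly2 - iy2                   # overlap bottom to ladder bottom H_d
        t2 = h_d / h_l                    # second ratio T_2
        return t1 >= t1_min and t2 >= t2_min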
Step 204, when a target operator exists in the preprocessed image, determining an area image to be identified of the target operator from the preprocessed image;
referring to FIG. 4, in one example of the invention, the step 204 may include the following steps S1-S4:
s1, when a target operator exists in the preprocessed image, capturing an image in the first image frame from the preprocessed image as a personnel area image;
s2, inputting the personnel area image into a preset two-dimensional gesture estimation network model, and determining two-dimensional joint coordinates of personnel corresponding to the personnel area image;
In a specific implementation, when a target operator exists in the preprocessed image, an image in the first image frame can be cut out from the preprocessed image and used as a person area image, and the person area image is input into a two-dimensional gesture estimation network model, so that two-dimensional joint coordinates of a person corresponding to the person area image can be determined.
It should be noted that the process of determining the two-dimensional joint coordinates through the two-dimensional pose estimation network model may be as follows:
features are extracted from the input person area image using the same structure as the first 10 layers of the VGG-19 network;
the extracted person area image features are input into a two-branch multi-stage convolutional neural network, where the first branch predicts a set of two-dimensional confidence maps of body part locations (e.g., elbows, knees, etc.) and the second branch predicts a set of part affinity two-dimensional vector fields encoding the degree of association between body part positions;
the confidence maps and the two-dimensional vector fields are parsed by greedy inference to generate the two-dimensional joint points of all people in the image.
Further description of the two-dimensional pose estimation method:
firstly, the first 10 layers of the VGG-19 network are used to extract a feature map F. F is then processed by a multi-stage network in which each stage t has two branches whose outputs are S^t (the part confidence maps) and L^t (the part affinity fields). The part affinity fields (PAFs) representation preserves the support region of the body together with its position and direction information.

Key parts of the human body are detected by repeatedly iterated CNN stages, each with two branches, CNN_S and CNN_L. The first stage differs morphologically from the later stages. The two branches of each stage compute the part confidence maps (joint points) and the part affinity fields (limb trunks), respectively. The first stage receives the feature map F and, after processing, produces S^1 and L^1. From the second stage on, the input of the stage-t network consists of three parts, S^{t-1}, L^{t-1} and F:

S^t = ρ^t(F, S^{t-1}, L^{t-1}), L^t = φ^t(F, S^{t-1}, L^{t-1}), t ≥ 2

where ρ^t and φ^t denote the two branch networks of stage t. The stages are stacked repeatedly in this way until the network converges. PAFs are then used to judge whether two detected parts d_{j1} and d_{j2} are connected: the PAF is integrated linearly along the line segment between them,

E = ∫ from 0 to 1 of L_c(p(u)) · (d_{j2} - d_{j1}) / ||d_{j2} - d_{j1}||_2 du, where p(u) = (1 - u)·d_{j1} + u·d_{j2};

if the direction of the PAF is consistent with the direction of the connection vector, the value of the linear integral E is large and the two parts are likely to form a limb trunk.

Finally, all pairings are traversed and their integral sums are computed to find the limb trunks of each body part; adjacent trunks must share joint points, so all trunks are combined through the joint points to obtain the body skeleton of each person, from which the two-dimensional joint point coordinates of each person are obtained.
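For orientation only, the two-branch, multi-stage refinement described above can be sketched structurally in PyTorch. This is a hedged sketch, not the exact network of the patent: the stage count and the channel widths (128 feature channels, 19 confidence maps, 38 PAF channels) are illustrative assumptions, and the VGG-19 backbone is omitted:

    import torch
    import torch.nn as nn

    def branch(in_ch: int, out_ch: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(in_ch, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, out_ch, 1))

    class TwoBranchPose(nn.Module):
        def __init__(self, feat_ch=128, n_parts=19, n_pafs=38, stages=3):
            super().__init__()
            self.s_branches = nn.ModuleList()   # confidence-map branches S^t
            self.l_branches = nn.ModuleList()   # part-affinity branches L^t
            for t in range(stages):
                in_ch = feat_ch if t == 0 else feat_ch + n_parts + n_pafs
                self.s_branches.append(branch(in_ch, n_parts))
                self.l_branches.append(branch(in_ch, n_pafs))

        def forward(self, f: torch.Tensor):
            s = self.s_branches[0](f)
            l = self.l_branches[0](f)
            for s_br, l_br in zip(self.s_branches[1:], self.l_branches[1:]):
                x = torch.cat([f, s, l], dim=1)   # stage-t input: (F, S^{t-1}, L^{t-1})
                s, l = s_br(x), l_br(x)
            return s, l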
S3, if the number of matches between the two-dimensional joint coordinates and the preset two-dimensional climbing gesture coordinates is greater than a first preset number threshold, determining that the gesture of the person corresponding to the person region image is a climbing gesture;
in the embodiment of the invention, a two-dimensional climbing gesture library is constructed in advance, and the two-dimensional joint point coordinates are matched with the joint point coordinates of the gestures in the climbing gesture library. If the number of matches between the two-dimensional joint point coordinates and the joint point coordinates of one of the climbing gestures in the two-dimensional climbing gesture library exceeds the first preset number threshold, the person is in a climbing gesture; otherwise, the person is not in a climbing gesture.
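A minimal sketch of this library-matching step, assuming the joint coordinates and the library templates are (N, 2) arrays normalized to the person box; the matching tolerance and the preset number threshold are illustrative assumptions:

    import numpy as np

    def is_climbing_pose(joints_2d, pose_library, tol=0.1, min_matches=10):
        for template in pose_library:
            dists = np.linalg.norm(joints_2d - template, axis=1)
            # a joint "matches" when it lies within tol of the template joint
            if int((dists < tol).sum()) > min_matches:
                return True
        return False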
S4, intercepting a preset proportion image of the personnel area image to serve as an area image to be identified of the target operating personnel.
The predetermined proportion may be set according to the type of safety belt. For example, a five-point safety belt is worn on the shoulders/chest, waist and thighs of the human body, i.e. roughly from 2/5 to 4/5 of the body height, so the 2/5 to 4/5 portion of the person area image is selected as the area image to be identified of the target operator.
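A one-line sketch of the predetermined-proportion crop for a five-point belt, taking rows from 2/5 to 4/5 of the person area image as described above:

    def crop_belt_region(person_img):
        h = person_img.shape[0]
        # keep the 2/5 .. 4/5 band of the body height
        return person_img[int(0.4 * h):int(0.8 * h), :]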
Further, the step 204 may further include the steps of:
inputting the two-dimensional joint coordinates into a preset three-dimensional gesture estimation network model, and determining the three-dimensional joint coordinates of the person corresponding to the person region image;
if the matching number of the three-dimensional joint coordinates and the preset three-dimensional climbing gesture coordinates is larger than a second preset number threshold, determining that the gesture of the person corresponding to the person region image is a climbing gesture;
and intercepting a preset proportion image of the personnel area image to be used as an area image to be identified of the target operator.
Referring to fig. 5, fig. 5 shows a schematic diagram of a two-dimensional posture estimation result and a three-dimensional posture estimation result in the embodiment of the present invention.
The two-dimensional human body joint graph is input into the three-dimensional gesture estimation network model. A SemGConv layer first maps the input two-dimensional human body joint points into a latent space to obtain human body gesture features; the two-dimensional human body gesture features are then transformed and encoded through four building blocks, and finally the encoded features are mapped to the output space through an additional SemGConv layer to obtain the three-dimensional joint point coordinates.
In the embodiment of the invention, a three-dimensional climbing gesture library is constructed in advance, and the three-dimensional joint point coordinates are matched with the joint point coordinates of the gestures in the climbing gesture library. If the number of matches between the three-dimensional joint point coordinates and the joint point coordinates of one of the climbing gestures in the three-dimensional climbing gesture library exceeds the second preset number threshold, the person is in a climbing gesture; otherwise, the person is not in a climbing gesture.
In one example of the present invention, the two-dimensional joint coordinates may be input into a three-dimensional gesture estimation network model to obtain the three-dimensional joint coordinates. The building block of the three-dimensional gesture estimation network model is a residual block; as shown schematically in FIG. 6, each block contains two SemGConv layers with 128 channels followed by a non-local layer, the other blocks are similar to the one shown and are not repeated herein, and every SemGConv layer except the last is followed by batch normalization and ReLU activation.
Step 205, generating comprehensive features according to the direction gradient histogram HOG features and the color histogram HOC features extracted from the region image to be identified;
in another example of the present invention, the step 205 may include the steps of:
removing the background of the area image to be identified and filtering to generate an image to be processed;
dividing the image to be processed into a plurality of pixel small blocks;
calculating gradient direction histograms of the pixel small blocks to obtain HOG characteristics;
loading the image to be processed in a preset HSV color space, and determining the tone and saturation values of each pixel point in the image to be processed;
Constructing a color histogram based on the hue and saturation values of each pixel point to obtain an HOC characteristic;
and carrying out normalization processing on the HOG features and the HOC features, and splicing to generate comprehensive features.
In the embodiment of the invention, the HOG feature calculation steps are as follows:
inputting images of videos randomly extracted from a real-time video sequence after background subtraction;
gradient calculation: the input image is filtered with the kernels [-1, 0, 1] and [-1, 0, 1]^T to compute the gradients in the horizontal and vertical directions, respectively; from these, the gradient magnitude m(p) and the gradient direction Θ(p) of each pixel p are calculated as shown in the following formulas:

m(p) = sqrt(v_x² + v_y²), Θ(p) = arctan(v_y / v_x)

where v_x and v_y respectively represent the horizontal and vertical components of the gradient obtained after filtering, and Θ(p) is an unsigned real number with a value range of 0° to 180°;
dividing an input image into small blocks with the same size, and combining a plurality of small blocks into a middle block;
and (3) obtaining a direction channel: equally dividing the value range of theta (p) from 0 DEG to 180 DEG into n channels;
histogram statistics: the gradient direction histogram of the pixels in each small block is counted, where the abscissa of the histogram is the n selected direction channels and the ordinate is the sum of the gradient magnitudes of the pixels belonging to each direction channel, finally yielding a group of vectors;
Normalization: normalizing the vector by taking the middle block where the corresponding pixel of the vector is positioned as a unit;
forming HOG features: all the vectors processed above are connected to form a set of vectors, i.e. HOG features.
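A hedged HOG sketch using scikit-image is given below; the cell size, block size and n = 9 direction channels are common defaults, not values fixed by the patent:

    from skimage.feature import hog

    def hog_features(gray_img):
        return hog(gray_img,
                   orientations=9,           # n direction channels over 0..180 degrees
                   pixels_per_cell=(8, 8),   # the "small blocks"
                   cells_per_block=(2, 2),   # the "middle blocks" used for normalization
                   block_norm='L2-Hys')      # per-block vector normalization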
The HOC feature calculation steps are as follows:
transforming the input image into an HSV color space;
the values of the hue and the saturation of each pixel point in the image are each evenly divided into m channels, giving a total of m² channel combinations for the two groups;
according to the method for generating the normalized HOG histogram, normalized HOC features are generated, which will not be described in detail herein.
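A minimal HOC sketch along these lines, assuming OpenCV and m = 8 channels per axis (the value of m is an assumption); the joint hue/saturation histogram of m² bins is flattened and normalized to sum to 1:

    import cv2

    def hoc_features(bgr_img, m=8):
        hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
        # m x m joint histogram over hue (0..180 in OpenCV) and saturation (0..256)
        hist = cv2.calcHist([hsv], [0, 1], None, [m, m], [0, 180, 0, 256]).flatten()
        return hist / (hist.sum() + 1e-8)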
And finally, the normalized HOG features and the HOC features are spliced in series to generate comprehensive features.
The normalization process of the HOG features and the HOC features may be as follows:
The HOG features and the HOC features are normalized using a linear transformation and the range method, and the normalized comprehensive feature vector is output. The range method normalization reference formula is as follows:

y_i = (x_i - min(x)) / (max(x) - min(x))

where x_i represents the magnitude of one of the feature values, y_i the magnitude of that feature value after normalization, and min(x) and max(x) represent the minimum and maximum values of the feature values in the feature.
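A sketch of the range-method normalization and the series splicing, assuming numpy arrays; the small epsilon guard against a zero range is an implementation assumption:

    import numpy as np

    def range_normalize(x):
        # y_i = (x_i - min(x)) / (max(x) - min(x))
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    def composite_feature(hog_vec, hoc_vec):
        # normalized HOG and HOC spliced in series into the comprehensive feature
        return np.concatenate([range_normalize(hog_vec), range_normalize(hoc_vec)])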
In a specific implementation, the safety belt identification result includes that the target operator has a safety belt and that the target operator has no safety belt, and step 106 may be replaced by the following steps 206 to 208:
step 206, judging whether a safety belt exists in the region image to be identified corresponding to the comprehensive characteristics according to the comprehensive characteristics through the preset SVM classification model;
in the embodiment of the invention, the comprehensive characteristics are input into a preset SVM classification model to judge whether the safety belt exists in the region image to be identified.
It should be noted that the preset SVM classification model may be obtained by pre-training, and the training process may be as follows: the comprehensive features computed from area images to be identified that contain a safety belt and from area images that do not are sent to an SVM classifier for training, until the maximum-margin hyperplane is obtained, i.e. the hyperplane farthest from the boundary observation points of the with-belt and without-belt classes; at that point training is complete and the preset SVM classification model is obtained.
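A hedged training sketch with scikit-learn; a linear kernel matches the maximum-margin hyperplane described above, and the feature matrix and labels are assumed to come from annotated with-belt and without-belt samples:

    from sklearn.svm import SVC

    def train_belt_classifier(features, labels):
        # features: (n_samples, d) comprehensive HOG+HOC vectors
        # labels:   1 = safety belt present, 0 = no safety belt
        clf = SVC(kernel='linear')
        clf.fit(features, labels)
        return clf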
Step 207, if yes, outputting a prompt that the target operator has a safety belt;
step 208, if not, outputting a prompt that the target operator has no safety belt.
In the specific implementation, after a preset SVM classification model is obtained, the comprehensive characteristics can be input into the preset SVM classification model for classification judgment, and if safety belts exist in the region images to be identified corresponding to the comprehensive characteristics, the prompt of the safety belts of target operators is output; and if the safety belt does not exist in the region image to be identified corresponding to the comprehensive characteristics, outputting a prompt that the target operator does not have the safety belt.
In the embodiment of the invention, the acquired operation area image is preprocessed to generate a preprocessed image. When a target operator exists in the preprocessed image, the area image to be identified of the target operator is determined, the corresponding HOG features and HOC features are extracted from the area image to be identified and combined into comprehensive features, and the comprehensive features are input into a preset SVM classification model, which outputs a safety belt identification result indicating whether the target operator is wearing a safety belt. This solves the technical problem that safety belt identification methods in the prior art cannot automatically identify operators and are easily influenced by environmental factors, resulting in low identification accuracy; the accuracy of safety belt identification is improved, operators can be reminded in time, and safety accidents are avoided.
Referring to fig. 7, fig. 7 shows a block flow diagram for determining whether an operator is in a climbing posture in an embodiment of the present invention, including:
1. receiving an input image;
2. inputting the input images to a personnel detector and a ladder detector respectively, and judging whether an operator is on a ladder (namely, whether the operator is a target operator or not);
3. inputting an input image into a two-dimensional posture estimation module to perform two-dimensional posture estimation, determining whether the input image is a climbing posture, generating two-dimensional joint coordinates and inputting the two-dimensional joint coordinates into a three-dimensional posture estimation module;
4. the three-dimensional posture estimation module carries out three-dimensional posture estimation and determines whether the three-dimensional posture is a climbing posture or not;
5. receiving the judgment results of the two-dimensional posture estimation module and the three-dimensional posture estimation module on the climbing posture;
6. if the judgment result is that the operator is in the climbing gesture, and the operator is on the ladder, outputting a target image.
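Wiring the flow above together, a hedged end-to-end sketch is given below; the detector, pose-estimator and pose-library arguments stand in for the trained models, and all helper names reuse the earlier illustrative sketches (they are assumptions, not the patent's API):

    def recognize_belt(frame_bgr, person_det, ladder_det, pose_2d, pose_lib, clf):
        gray = preprocess(frame_bgr)                    # denoised grayscale image
        person_box = person_det(gray)                   # first image frame
        ladder_box = ladder_det(gray)                   # second image frame
        if person_box is None or ladder_box is None:
            return None
        if not is_target_operator(person_box, ladder_box):
            return None                                 # no operator on a ladder
        x1, y1, x2, y2 = person_box
        joints = pose_2d(gray[y1:y2, x1:x2])
        if not is_climbing_pose(joints, pose_lib):
            return None                                 # not a climbing gesture
        roi_gray = crop_belt_region(gray[y1:y2, x1:x2])
        roi_bgr = crop_belt_region(frame_bgr[y1:y2, x1:x2])  # colour kept for HOC
        feat = composite_feature(hog_features(roi_gray), hoc_features(roi_bgr))
        return bool(clf.predict(feat.reshape(1, -1))[0])     # True: belt present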
Referring to fig. 8, fig. 8 shows a block diagram of a safety belt identification device for an operator according to an embodiment of the present invention.
The safety belt identification device for the operators provided by the embodiment of the invention comprises the following components:
the to-be-operated area image acquisition module 801 is used for acquiring an operation area image;
A preprocessing image generation module 802, configured to preprocess the job area image, and generate a preprocessing image;
a target operator detection module 803, configured to detect whether a target operator exists in the preprocessed image;
the area image to be identified determining module 804 is configured to determine an area image to be identified of a target operator from the preprocessed image when the target operator exists in the preprocessed image;
a comprehensive feature generating module 805, configured to generate a comprehensive feature according to the direction gradient histogram HOG feature and the color histogram HOC feature extracted from the region image to be identified;
the safety belt recognition result output module 806 is configured to input the comprehensive features into a preset SVM classification model and output a safety belt recognition result.
Optionally, the preprocessing image generating module 802 includes:
a color component detection sub-module for detecting a plurality of color components of the work area image; the plurality of color components includes a red component, a green component, and a blue component;
a gray value calculation sub-module for calculating a gray value of the operation area image using the red component, the green component, and the blue component;
A gray image construction sub-module for constructing a gray image according to the gray value of the operation area image;
and the preprocessing image generation sub-module is used for carrying out denoising processing on the gray level image to generate a preprocessing image.
Optionally, the target operator detection module 803 includes:
the image frame generation sub-module is used for inputting the preprocessed image into a preset personnel detection model and a preset ladder detection model, and generating a first image frame and a second image frame on the preprocessed image; the first image frame is used for identifying the position of a person, and the second image frame is used for identifying the position of a ladder;
a first proportion calculating sub-module for calculating a first proportion between an area of the overlapping portion and an area of the second image frame when there is an overlapping portion between the first image frame and the second image frame;
a first distance calculating sub-module for calculating a first distance between a bottom edge of the overlapping portion and a bottom edge of the second image frame;
a second distance calculation sub-module for calculating a second distance between a top edge of the second image frame and a bottom edge of the second image frame;
A second ratio determination sub-module for determining a second ratio between the first distance and the second distance;
and the target operator detection sub-module is used for determining that the target operator exists in the preprocessed image when the first proportion is greater than or equal to a first preset threshold value and the second proportion is greater than or equal to a second preset threshold value.
Optionally, the to-be-identified area image determining module 804 includes:
a personnel area image cropping sub-module, configured to crop the image within the first image frame from the preprocessed image as a personnel area image when a target operator exists in the preprocessed image;
the two-dimensional joint coordinate determination sub-module is used for inputting the personnel area image into a preset two-dimensional posture estimation network model and determining the two-dimensional joint coordinates of the person corresponding to the personnel area image;
the first climbing posture determining sub-module is used for determining that the posture of the person corresponding to the personnel area image is a climbing posture if the number of two-dimensional joint coordinates matching the preset two-dimensional climbing posture coordinates is greater than a first preset number threshold;
the first to-be-identified area image cropping sub-module is used for cropping a predetermined proportion of the personnel area image as the area image to be identified of the target operator.
Optionally, the to-be-identified area image determining module 804 further includes:
the three-dimensional joint coordinate determination sub-module is used for inputting the two-dimensional joint coordinates into a preset three-dimensional posture estimation network model and determining the three-dimensional joint coordinates of the person corresponding to the personnel area image;
the second climbing posture determining sub-module is used for determining that the posture of the person corresponding to the personnel area image is a climbing posture if the number of three-dimensional joint coordinates matching the preset three-dimensional climbing posture coordinates is greater than a second preset number threshold;
the second to-be-identified area image cropping sub-module is used for cropping a predetermined proportion of the personnel area image as the area image to be identified of the target operator.
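A minimal sketch of the posture-matching test shared by the 2-D and 3-D sub-modules above, assuming joint coordinates normalized to the personnel area image and that "matching" means falling within a distance tolerance of the preset pose. Both the tolerance and the preset number threshold are illustrative, as the embodiment does not fix them.

```python
import numpy as np

def is_climbing_posture(joint_coords: np.ndarray, preset_pose: np.ndarray,
                        tolerance: float = 0.1, preset_count: int = 10) -> bool:
    """Return True if enough joints match the preset climbing posture.

    Works for both the 2-D (N, 2) and 3-D (N, 3) checks; `tolerance` defines
    what counts as a match and `preset_count` stands in for the preset
    number threshold, neither of which is fixed by the embodiment.
    """
    distances = np.linalg.norm(joint_coords - preset_pose, axis=1)
    matching_number = int((distances <= tolerance).sum())
    return matching_number > preset_count
```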
Optionally, the comprehensive feature generating module 805 includes:
the image to be processed generating sub-module is used for removing the background of the area image to be identified and filtering to generate an image to be processed;
the pixel block dividing sub-module is used for dividing the image to be processed into a plurality of small pixel blocks;
the HOG feature determining sub-module is used for calculating gradient direction histograms of the small pixel blocks to obtain HOG features;
a pixel value determining submodule, configured to load the image to be processed in a preset HSV color space, and determine values of hue and saturation of each pixel point in the image to be processed;
the HOC feature determining sub-module is used for constructing a color histogram based on the hue and saturation values of each pixel point to obtain HOC features;
and the comprehensive feature generation sub-module is used for carrying out normalization processing on the HOG features and the HOC features and splicing the HOG features and the HOC features to generate comprehensive features.
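A sketch of this feature fusion, using scikit-image for the HOG features and an OpenCV hue/saturation histogram for the HOC features. The cell sizes, bin counts and L2 normalization are assumptions not fixed by the embodiment.

```python
import cv2
import numpy as np
from skimage.feature import hog

def composite_feature(region_bgr: np.ndarray) -> np.ndarray:
    """Fuse HOG and HOC features for one area image to be identified."""
    # HOG: gradient direction histograms over small pixel blocks.
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    hog_feature = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2), block_norm='L2-Hys')

    # HOC: a hue/saturation color histogram in HSV color space.
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    hoc_feature = cv2.calcHist([hsv], [0, 1], None, [30, 32],
                               [0, 180, 0, 256]).flatten()

    # Normalize both features, then splice them into one comprehensive vector.
    hog_feature = hog_feature / (np.linalg.norm(hog_feature) + 1e-8)
    hoc_feature = hoc_feature / (np.linalg.norm(hoc_feature) + 1e-8)
    return np.concatenate([hog_feature, hoc_feature])
```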
Optionally, the safety belt identification result includes that the target operator has a safety belt and that the target operator has no safety belt, and the safety belt identification result output module 806 includes:
the safety belt judging sub-module is used for judging whether a safety belt exists in the region image to be identified corresponding to the comprehensive characteristics according to the comprehensive characteristics through the preset SVM classification model;
the first prompting sub-module is used for outputting a prompt that the target operator has a safety belt if a safety belt is present;
and the second prompting sub-module is used for outputting a prompt that the target operator has no safety belt if no safety belt is present.
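For completeness, a hedged sketch of this classification step with scikit-learn. The random training data, feature length and RBF kernel are placeholders, since the embodiment does not specify how the preset SVM classification model is trained.

```python
import numpy as np
from sklearn import svm

# Hypothetical training set: each row is one comprehensive HOG+HOC vector and
# each label marks 1 = safety belt present, 0 = no safety belt. In practice
# the rows would come from composite_feature() applied to labeled images.
rng = np.random.default_rng(0)
train_features = rng.random((40, 128))
train_labels = rng.integers(0, 2, size=40)

classifier = svm.SVC(kernel="rbf")   # stands in for the preset SVM model
classifier.fit(train_features, train_labels)

# Recognition: classify one comprehensive feature and emit the prompt.
sample = rng.random((1, 128))
if classifier.predict(sample)[0] == 1:
    print("Target operator has a safety belt")
else:
    print("Warning: target operator has no safety belt")
```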
The embodiment of the invention also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to execute the steps of the operator safety belt identification method according to any one of the embodiments.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method for identifying an operator safety belt according to any one of the above embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the apparatus described above, which is not described herein again.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A method for identifying an operator seat belt, comprising:
acquiring an operation area image;
preprocessing the operation area image to generate a preprocessed image;
detecting whether a target operator exists in the preprocessed image;
when a target operator exists in the preprocessed image, determining an area image to be identified of the target operator from the preprocessed image;
generating comprehensive features according to the direction gradient histogram HOG features and the color histogram HOC features extracted from the region images to be identified;
inputting the comprehensive characteristics into a preset SVM classification model, and outputting a safety belt identification result;
wherein the step of detecting whether a target operator exists in the preprocessed image comprises the following steps:
inputting the preprocessed image into a preset personnel detection model and a preset ladder detection model, and generating a first image frame and a second image frame on the preprocessed image; the first image frame is used for identifying the position of a person, and the second image frame is used for identifying the position of a ladder;
calculating a first ratio between an area of the overlapping portion and an area of the second image frame when there is an overlapping portion between the first image frame and the second image frame;
calculating a first distance between the bottom edge of the overlapping portion and the bottom edge of the second image frame;
calculating a second distance between a top edge of the second image frame and a bottom edge of the second image frame;
determining a second ratio between the first distance and the second distance;
and when the first proportion is greater than or equal to a first preset threshold value and the second proportion is greater than or equal to a second preset threshold value, determining that a target operator exists in the preprocessed image.
2. The method of claim 1, wherein the step of preprocessing the job area image to generate a preprocessed image comprises:
Detecting a plurality of color components of the work area image; the plurality of color components includes a red component, a green component, and a blue component;
calculating a gray value of the operation region image using the red component, the green component, and the blue component;
constructing a gray level image according to the gray level value of the operation area image;
and denoising the gray level image to generate a preprocessed image.
3. The method according to claim 1, wherein the step of determining an area image to be identified of the target operator from the preprocessed image when the target operator exists in the preprocessed image comprises:
when a target operator exists in the preprocessed image, cropping the image within the first image frame from the preprocessed image as a personnel area image;
inputting the personnel area image into a preset two-dimensional posture estimation network model, and determining the two-dimensional joint coordinates of the person corresponding to the personnel area image;
if the number of two-dimensional joint coordinates matching the preset two-dimensional climbing posture coordinates is greater than a first preset number threshold, determining that the posture of the person corresponding to the personnel area image is a climbing posture;
and cropping a predetermined proportion of the personnel area image as the area image to be identified of the target operator.
4. A method according to claim 3, further comprising:
inputting the two-dimensional joint coordinates into a preset three-dimensional posture estimation network model, and determining the three-dimensional joint coordinates of the person corresponding to the personnel area image;
if the number of three-dimensional joint coordinates matching the preset three-dimensional climbing posture coordinates is greater than a second preset number threshold, determining that the posture of the person corresponding to the personnel area image is a climbing posture;
and cropping a predetermined proportion of the personnel area image as the area image to be identified of the target operator.
5. The method according to claim 1, wherein the step of generating comprehensive features from the direction gradient histogram HOG features and the color histogram HOC features extracted from the region image to be identified comprises:
removing the background of the area image to be identified and filtering to generate an image to be processed;
dividing the image to be processed into a plurality of small pixel blocks;
calculating gradient direction histograms of the small pixel blocks to obtain HOG features;
loading the image to be processed in a preset HSV color space, and determining the hue and saturation values of each pixel point in the image to be processed;
constructing a color histogram based on the hue and saturation values of each pixel point to obtain HOC features;
and carrying out normalization processing on the HOG features and the HOC features, and splicing to generate comprehensive features.
6. The method of claim 1, wherein the safety belt identification result includes that the target operator has a safety belt and that the target operator has no safety belt, and wherein the step of inputting the comprehensive characteristics into a preset SVM classification model and outputting the safety belt identification result includes:
judging whether a safety belt exists in an area image to be identified corresponding to the comprehensive characteristics according to the comprehensive characteristics through the preset SVM classification model;
if yes, outputting a prompt that the target operator has a safety belt;
if not, outputting a prompt that the target operator has no safety belt.
7. An operator seat belt identification device, comprising:
the operation area image acquisition module is used for acquiring an operation area image;
the preprocessing image generation module is used for preprocessing the operation area image to generate a preprocessing image;
The target operator detection module is used for detecting whether target operators exist in the preprocessed image;
the area image to be identified determining module is used for determining an area image to be identified of the target operator from the preprocessed image when the target operator exists in the preprocessed image;
the comprehensive feature generation module is used for generating comprehensive features according to the direction gradient histogram HOG features and the color histogram HOC features extracted from the region image to be identified;
the safety belt identification result output module is used for inputting the comprehensive characteristics into a preset SVM classification model and outputting a safety belt identification result;
the target operator detection module comprises:
the image frame generation sub-module is used for inputting the preprocessed image into a preset personnel detection model and a preset ladder detection model, and generating a first image frame and a second image frame on the preprocessed image; the first image frame is used for identifying the position of a person, and the second image frame is used for identifying the position of a ladder;
a first proportion calculating sub-module for calculating a first proportion between an area of the overlapping portion and an area of the second image frame when there is an overlapping portion between the first image frame and the second image frame;
A first distance calculating sub-module for calculating a first distance between a bottom edge of the overlapping portion and a bottom edge of the second image frame;
a second distance calculation sub-module for calculating a second distance between a top edge of the second image frame and a bottom edge of the second image frame;
a second proportion determining sub-module for determining a second proportion between the first distance and the second distance;
and the target operator detection sub-module is used for determining that the target operator exists in the preprocessed image when the first proportion is greater than or equal to a first preset threshold value and the second proportion is greater than or equal to a second preset threshold value.
8. An electronic device comprising a memory and a processor, wherein the memory stores a computer program that, when executed by the processor, causes the processor to perform the steps of the method for identifying a safety belt for an operator according to any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the operator seat belt identification method according to any one of claims 1-6.
CN202011002640.8A 2020-09-22 2020-09-22 Method, device, equipment and storage medium for identifying safety belt of operator Active CN112101260B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011002640.8A CN112101260B (en) 2020-09-22 2020-09-22 Method, device, equipment and storage medium for identifying safety belt of operator

Publications (2)

Publication Number Publication Date
CN112101260A 2020-12-18
CN112101260B 2023-09-26

Family

ID=73755832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011002640.8A Active CN112101260B (en) 2020-09-22 2020-09-22 Method, device, equipment and storage medium for identifying safety belt of operator

Country Status (1)

Country Link
CN (1) CN112101260B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613452B (en) * 2020-12-29 2023-10-27 广东电网有限责任公司清远供电局 Personnel line-crossing identification method, device, equipment and storage medium
CN112991211A (en) * 2021-03-12 2021-06-18 中国大恒(集团)有限公司北京图像视觉技术分公司 Dark corner correction method for industrial camera

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105957107A (en) * 2016-04-27 2016-09-21 北京博瑞空间科技发展有限公司 Pedestrian detecting and tracking method and device
CN108416289A (en) * 2018-03-06 2018-08-17 陕西中联电科电子有限公司 A kind of working at height personnel safety band wears detection device and detection method for early warning
CN109635758A (en) * 2018-12-18 2019-04-16 武汉市蓝领英才科技有限公司 Wisdom building site detection method is dressed based on the high altitude operation personnel safety band of video
CN110046557A (en) * 2019-03-27 2019-07-23 北京好运达智创科技有限公司 Safety cap, Safe belt detection method based on deep neural network differentiation
CN111144263A (en) * 2019-12-20 2020-05-12 山东大学 Construction worker high-fall accident early warning method and device

Also Published As

Publication number Publication date
CN112101260A (en) 2020-12-18

Legal Events

Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 501-503, annex building, Huaye building, No.1-3 Chuimao new street, Xihua Road, Yuexiu District, Guangzhou City, Guangdong Province 510000

Applicant after: China Southern Power Grid Power Technology Co.,Ltd.

Address before: Room 501-503, annex building, Huaye building, No.1-3 Chuimao new street, Xihua Road, Yuexiu District, Guangzhou City, Guangdong Province 510000

Applicant before: GUANGDONG DIANKEYUAN ENERGY TECHNOLOGY Co.,Ltd.

GR01 Patent grant