CN115830719A - Construction site dangerous behavior identification method based on image processing - Google Patents


Info

Publication number
CN115830719A
CN115830719A (application CN202310119396.0A; granted as CN115830719B)
Authority
CN
China
Prior art keywords: value, image, pixel point, head position, acquiring
Prior art date
Legal status
Granted
Application number
CN202310119396.0A
Other languages
Chinese (zh)
Other versions
CN115830719B (en)
Inventor
黄鹏
丁飞雪
李海霞
孟召朋
刘婧艳
Current Assignee
Qingdao Xuhua Construction Group Co ltd
Original Assignee
Qingdao Xuhua Construction Group Co ltd
Priority date
Filing date
Publication date
Application filed by Qingdao Xuhua Construction Group Co ltd filed Critical Qingdao Xuhua Construction Group Co ltd
Priority to CN202310119396.0A
Publication of CN115830719A
Application granted
Publication of CN115830719B
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30: Computing systems specially adapted for manufacturing

Abstract

The invention relates to the technical field of image processing, and in particular to a construction site dangerous behavior identification method based on image processing. The method acquires a head position image of each construction worker in a key frame image of a construction site; acquires a corrected gradient value and corresponding gradient angle for each pixel point in the head position image so as to determine initial feature pixel points in the head position image; acquires key points among the initial feature pixel points; acquires a standard head image of each construction worker correctly wearing a safety helmet; compares the key points in the head position image with the pixel points at the same positions in the corresponding worker's standard head image to obtain a helmet-wearing state value for that worker; and identifies dangerous behaviors on the construction site according to the helmet-wearing state value. The invention improves the accuracy of identifying the dangerous behavior of construction workers who do not correctly wear safety helmets.

Description

Construction site dangerous behavior identification method based on image processing
Technical Field
The invention relates to the technical field of image processing, in particular to a construction site dangerous behavior identification method based on image processing.
Background
A construction site differs from a natural scene: it involves complex factors such as high personnel mobility and frequent, high-intensity construction operations, and safety management has always been a key link in construction site management. However, because the construction environment is complex and personnel mobility is high, routine safety inspection cannot detect workers' abnormal behavior in time, and safety accidents are easily induced while large quantities of building materials are being processed.
A common dangerous abnormal behavior on construction sites is that workers, out of complacency or for physical comfort, do not follow the helmet-wearing requirements and process or carry construction materials without a correctly worn safety helmet. This easily causes construction site safety accidents and seriously threatens the life safety of construction workers.
At present, methods for identifying the dangerous behavior of construction workers who wear helmets incorrectly usually collect images of the construction site to obtain the head region of each worker, extract feature points from the head region with the traditional FAST feature extraction algorithm, and identify incorrect helmet wearing by comparing the gray level differences between the feature points and the pixel points at the same positions in a template image. However, the traditional FAST algorithm characterizes feature points using gray values alone, so the extracted feature points are not expressive enough, and errors therefore arise in identifying the dangerous behavior of incorrectly wearing a safety helmet.
Disclosure of Invention
In order to solve the problem that the feature points extracted by the traditional FAST feature algorithm cause errors in identifying the dangerous behavior of incorrectly worn safety helmets, the invention provides a construction site dangerous behavior identification method based on image processing. The adopted technical scheme is as follows:
one embodiment of the invention provides a construction site dangerous behavior identification method based on image processing, which comprises the following steps:
acquiring a key frame image of a construction site to obtain a head position image of each construction worker in the key frame image;
acquiring a correction gradient value and a corresponding gradient angle of a corresponding pixel point according to the gray value difference between each pixel point in the head position image and the surrounding pixel points; determining initial characteristic pixel points in the head position image according to the corrected gradient value difference and the corresponding gradient angle difference between each pixel point in the head position image and other pixel points in the set neighborhood;
acquiring key points in the initial characteristic pixel points according to the coordinates of each initial characteristic pixel point in the head position image;
acquiring a standard head image of each construction worker who correctly wears the safety helmet; obtaining a state value of the corresponding construction worker for wearing the safety helmet according to the color difference and the correction gradient value difference between the key point in the head position image and the pixel point at the same position in the standard head image of the corresponding construction worker;
and acquiring the state value of the safety helmet worn by each construction worker, and identifying dangerous behaviors of the construction site according to the state value of the safety helmet worn.
Further, the determining initial feature pixel points in the head position image includes:
forming a corrected gradient feature vector of a corresponding pixel point by the corrected gradient value of each pixel point in the head position image and the corresponding gradient angle;
taking any pixel point in the head position image as a target pixel point, and taking the target pixel point as the center of a circle to obtain a circle corresponding to a preset radius; calculating the cosine similarity of the modified gradient feature vectors between the target pixel point and each pixel point on the circle;
calculating a cosine similarity mean value according to the cosine similarity corresponding to each pixel point on the circle; and when the cosine similarity corresponding to the pixel points on the circles with the preset number is smaller than the cosine similarity mean value, determining that the target pixel point is an initial characteristic pixel point.
Further, the obtaining of the key point in the initial feature pixel point includes:
constructing a Gaussian distribution model corresponding to an abscissa according to the abscissa value of each initial characteristic pixel point in the head position image; constructing a Gaussian distribution model corresponding to a vertical coordinate according to the vertical coordinate value of each initial characteristic pixel point in the head position image;
and taking the initial characteristic pixel point corresponding to the Gaussian distribution model with the abscissa value and the ordinate value meeting the requirement as a key point.
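The passage does not spell out what "meeting the requirement" means for the two Gaussian distribution models; the sketch below assumes the criterion is that a candidate's abscissa and ordinate both fall within mean plus or minus n standard deviations of the respective fitted one-dimensional Gaussians. The function name `select_key_points` and the one-sigma default are illustrative, not from the source.

```python
import numpy as np

def select_key_points(candidates, n_sigma=1.0):
    """Screen initial feature pixels with per-axis Gaussian models.

    candidates : (N, 2) array-like of (x, y) coordinates of initial
    feature pixel points. A point is kept only when both its abscissa
    and its ordinate fall inside mean +/- n_sigma * std of the fitted
    per-axis Gaussians (assumed reading of "meeting the requirement").
    """
    pts = np.asarray(candidates, dtype=float)
    mu = pts.mean(axis=0)            # (mean_x, mean_y)
    sigma = pts.std(axis=0) + 1e-9   # avoid division issues for degenerate axes
    inside = np.abs(pts - mu) <= n_sigma * sigma  # (N, 2) boolean mask
    return pts[inside.all(axis=1)]   # keep points passing on both axes
```

Screening this way drops coordinate outliers, which matches the stated goal of removing redundant initial feature pixel points before the key-point stage.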
Further, the obtaining of the state value of the helmet worn by the corresponding construction worker includes:
acquiring pixel points at the same coordinate position in a standard head image corresponding to a construction worker according to the coordinates of the key points in the head position image, and recording the pixel points as matching pixel points;
converting the head position image from RGB color space to HSV color space to obtain the H channel value of each pixel point in the head position image, and converting the standard head image corresponding to the construction worker from RGB color space to HSV color space to obtain the H channel value of each pixel point in the standard head image; calculating the difference absolute value of the H channel value between the key point and the corresponding matched pixel point and the difference absolute value of the corrected gradient value to obtain the product of the difference absolute value of the H channel value and the difference absolute value of the corrected gradient value;
and adding products corresponding to all key points in the head position image, wherein the added result is used as a state value of the corresponding helmet worn by the construction worker.
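A minimal sketch of this state value computation. It assumes the H-channel maps of the head position image and the standard head image have already been obtained (e.g. via the RGB-to-HSV conversion described above) and that the corrected gradient value maps are available; the function name and argument order are illustrative.

```python
def helmet_state_value(h_head, h_tmpl, grad_head, grad_tmpl, key_points):
    """Sum, over all key points, of |H difference| * |corrected gradient
    difference| between the head position image and the standard head image.

    h_head, h_tmpl       : H-channel maps of head image and standard image.
    grad_head, grad_tmpl : corrected gradient value maps of both images.
    key_points           : iterable of (row, col) key-point coordinates;
                           the matching pixel is the same coordinate in
                           the standard head image.
    A larger state value indicates a worse match with the correctly-worn
    helmet template.
    """
    state = 0.0
    for r, c in key_points:
        d_h = abs(float(h_head[r, c]) - float(h_tmpl[r, c]))
        d_g = abs(float(grad_head[r, c]) - float(grad_tmpl[r, c]))
        state += d_h * d_g
    return state
```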
Further, the identification of dangerous behaviors of the construction site according to the state values of the safety helmet includes:
and setting a safety helmet wearing state threshold, and confirming that the corresponding construction worker is in a dangerous state without wearing a safety helmet when the safety helmet wearing state value is greater than the safety helmet wearing state threshold.
Further, the obtaining of the correction gradient value includes:
acquiring neighborhood pixels in four neighborhoods of any pixel in the head position image, acquiring gray value difference absolute values between the pixel and two neighborhood pixels in the horizontal direction respectively, acquiring a mean value of the gray value difference absolute values in the horizontal direction, and recording the mean value as a first value; acquiring gray value difference absolute values between the pixel points and two adjacent pixel points in the vertical direction respectively, acquiring a mean value of the gray value difference absolute values in the vertical direction, and recording the mean value as a second value;
and performing quadratic evolution on an addition result between the square result of the first value and the square result of the second value, and taking an obtained result as a corrected gradient value of the pixel point.
Further, the obtaining of the gradient angle includes:
and calculating the ratio of the first value to the second value, adding the result obtained by taking the ratio as an independent variable of the arc tangent function and a preset translation factor, and taking the added result as a gradient angle.
Further, the acquiring a key frame image of a construction site includes:
acquiring construction work video data of construction workers based on a preset sampling frequency; calculating the sum of gray values of each frame image from the gray values of its pixel points; calculating the absolute value of the difference between the gray value sums of two adjacent frame images; and, if the absolute difference is greater than an empirical difference threshold, retaining the frame with the larger gray value sum of the two adjacent frames, the retained image frames being the key frame images of the construction site.
Further, the obtaining the head position image of each construction worker in the key frame image comprises:
and acquiring the minimum bounding rectangle of the head position of each construction worker in the key frame image by using the YoloV5 model as the head position image of the corresponding construction worker.
The invention has the following beneficial effects:
Because the feature points extracted by the traditional FAST feature algorithm are characterized by gray values alone, their expressiveness is weak, which introduces errors into the identification of the dangerous behavior of incorrectly worn safety helmets. When a construction worker is in the two different states of wearing and not wearing a safety helmet, the gradient features change in different ways; acquiring the corrected gradient value and corresponding gradient angle of each pixel point in the head position image therefore improves the extraction of feature pixel points and provides more accurate support for the subsequent identification of construction site dangerous behavior.
To compute and extract the difference feature, the traditional FAST feature extraction algorithm compares gray level images; but because it characterizes feature points by gray value alone, the extracted feature points are not expressive enough and are not tailored to the head appearance of construction workers on a construction site. The initial feature pixel points in the head position image are therefore determined from the corrected gradient value differences and corresponding gradient angle differences between each pixel point and the other pixel points in a set neighborhood, completing the preliminary feature point acquisition.
Considering that the differences between adjacent pixel points are small, the extracted initial feature pixel points may have overlapping semantic representations, and redundant initial feature pixel points would increase the computational cost of the subsequent identification of construction workers' dangerous behavior. The initial feature pixel points are therefore screened to obtain key points better suited to the scene of analyzing a construction worker's head. The helmet-wearing state value of each construction worker is then obtained from the color differences and corrected gradient value differences between the key points in the head position image and the pixel points at the same positions in that worker's standard head image, so that template matching detects the helmet-wearing state more quickly and accurately, and the accuracy of identifying construction site dangerous behavior from the state value is improved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flowchart illustrating steps of a construction site hazardous behavior identification method based on image processing according to an embodiment of the present invention;
fig. 2 is a schematic distribution diagram of other pixel points on a circle formed by the pixel point i according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve its intended objects and their effects, the structure, features and effects of the method for identifying dangerous behaviors of a construction site based on image processing according to the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the construction site dangerous behavior identification method based on image processing in detail with reference to the accompanying drawings.
Referring to fig. 1, a method for identifying dangerous behaviors of a construction site based on image processing according to an embodiment of the present invention is shown, where the method includes:
and S001, acquiring a key frame image of the construction site to obtain a head position image of each construction worker in the key frame image.
Specifically, in order to acquire the real-time state of construction workers on the site and avoid, as far as possible, construction site safety accidents caused by workers taking chances, high-resolution CCD capture equipment is installed at the locations where safety accidents frequently occur according to common safety accident reports, so as to obtain construction work video data of the construction workers. The acquired video data can be regarded as image data captured at short intervals; its frame rate is kept at 60 FPS, consistent with the frame rate at which the human eye perceives no visual abnormality, i.e. 60 images of the construction site are captured per second as one piece of construction work video data.
Because a construction site has a complex environment and high personnel mobility, directly processing and computing on every frame of the construction work video data may lose the target of a construction worker in some frame; meanwhile, computing on every frame would be too expensive to detect and identify dangerous abnormal behavior in real time. Therefore, for the captured construction work video data, the sum of gray values of each frame image is calculated from the gray values of its pixel points, and the absolute value of the difference between the gray value sums of two adjacent frames is computed, with an empirical difference threshold of 95. If the absolute difference is greater than this threshold, the two adjacent frames are considered to differ greatly, the construction worker may be moving quickly, and the target may be lost; the frame with the smaller gray value sum of the two is eliminated, and the frame with the larger sum is retained. The retained frames serve as the key frame images of the construction site in the construction work video data, i.e. the key frame images in which the construction workers appear.
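The selection rule above can be sketched as follows on grayscale frames. The sketch assumes that, for each adjacent pair whose gray-sum difference exceeds the threshold, only the larger-sum frame of that pair is retained; the function name and the choice to return frame indices are illustrative, while the empirical threshold of 95 comes from the text.

```python
import numpy as np

def select_key_frames(gray_frames, diff_threshold=95):
    """Select key frames by comparing gray-value sums of adjacent frames.

    For each adjacent pair whose sums differ by more than diff_threshold
    (empirical value 95 in the text), the frame with the larger sum is
    retained; retained frame indices are returned in order, de-duplicated.
    """
    sums = [int(np.asarray(f, dtype=np.int64).sum()) for f in gray_frames]
    kept = []
    for a in range(len(sums) - 1):
        b = a + 1
        if abs(sums[a] - sums[b]) > diff_threshold:
            idx = a if sums[a] > sums[b] else b  # keep the larger-sum frame
            if idx not in kept:
                kept.append(idx)
    return kept
```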
Meanwhile, in consideration of the possibility of construction of a construction site at night, brightness adjustment and enhancement need to be carried out on the shot and collected images, and the Retinex algorithm (MSRCR) with color recovery is used for image enhancement on all key frame images, so that the influence that the appearance shape of a construction worker is difficult to obtain due to less illumination in the images shot at night is eliminated. The Retinex algorithm with color recovery is a well-known technique and is not described herein.
Because the construction working environment of the construction site is complex, in order to reduce the calculation influence of invalid background pixel points on the dangerous behaviors of subsequent construction workers, the YoloV5 model is used for acquiring the minimum circumscribed rectangle of the head position of each construction worker in the key frame image as the head position image of the corresponding construction worker.
As an example, the key frame image is passed through the YoloV5 model to detect and identify the position of each construction worker's head in the key frame image. The YoloV5 model is a commonly used multi-target detection algorithm, and its training process is well known to those skilled in the art and is not described in detail here. Taking the key frame image as the input of the YoloV5 model, the position of each construction worker's head in the key frame image is obtained, with the minimum bounding rectangle as the return parameter; the minimum bounding rectangle of each construction worker's head position in the key frame image is the head position image of the corresponding construction worker.
S002, acquiring a correction gradient value and a corresponding gradient angle of a corresponding pixel point according to the gray value difference between each pixel point in the head position image and the surrounding pixel points; and determining initial characteristic pixel points in the head position image according to the difference of the corrected gradient values and the corresponding gradient angle difference between each pixel point in the head position image and other pixel points in the set neighborhood.
Specifically, for ease of understanding, the following embodiments use the head position image of the t-th construction worker in the key frame image as an example for analysis. In the conventional gradient calculation process, the gray values of two adjacent pixel points are usually used to obtain the gradient value at a pixel point. However, because the actual working environment of construction workers in the scene of this embodiment is complex, directly computing the gradient from the gray values of two pixel points in the traditional way cannot accurately capture the feature information corresponding to each pixel point. To avoid this influence and improve the accuracy of identifying construction workers' dangerous behavior on the site, the embodiment of the invention obtains the corrected gradient value and corresponding gradient angle of each pixel point from the gray value differences between the pixel point and its surrounding pixel points.
The method for acquiring the correction gradient value and the corresponding gradient angle of the pixel point comprises the following steps: acquiring neighborhood pixels in four neighborhoods of any pixel in the head position image, acquiring gray value difference absolute values between the pixel and two neighborhood pixels in the horizontal direction respectively, acquiring a mean value of the gray value difference absolute values in the horizontal direction, and recording the mean value as a first value; acquiring gray value difference absolute values between the pixel points and two adjacent pixel points in the vertical direction respectively, acquiring a mean value of the gray value difference absolute values in the vertical direction, and recording the mean value as a second value; performing quadratic evolution on an addition result between the square result of the first value and the square result of the second value, and taking an obtained result as a correction gradient value of the pixel point; and calculating the ratio of the first value to the second value, adding the result obtained by taking the ratio as an independent variable of the arc tangent function and a preset translation factor, and taking the added result as a gradient angle.
As an example, take the head position image $I_t$ of the $t$-th construction worker in the key frame image. For the pixel at coordinates $(x,y)$ in $I_t$, the calculation formula of the corrected gradient value is:

$$G_h(x,y)=\frac{\left|g(x,y)-g(x-1,y)\right|+\left|g(x,y)-g(x+1,y)\right|}{2},\qquad G_v(x,y)=\frac{\left|g(x,y)-g(x,y-1)\right|+\left|g(x,y)-g(x,y+1)\right|}{2}$$

$$T(x,y)=\sqrt{G_h(x,y)^{2}+G_v(x,y)^{2}}$$

wherein $g(x,y)$ is the gray value of the pixel at coordinates $(x,y)$ in the head position image $I_t$; $g(x-1,y)$, $g(x+1,y)$, $g(x,y-1)$ and $g(x,y+1)$ are the gray values of its neighborhood pixel points in the four-neighborhood; $G_h(x,y)$ is the mean of the absolute gray value differences between the pixel and its two neighborhood pixel points in the horizontal direction, i.e. the first value; $G_v(x,y)$ is the mean of the absolute gray value differences between the pixel and its two neighborhood pixel points in the vertical direction, i.e. the second value; $\left|\cdot\right|$ is the absolute value sign; $T(x,y)$ is the corrected gradient value of the pixel.
It should be noted that when a construction worker is in the two different states of wearing and not wearing a safety helmet, the gradient features change in different ways. Therefore, by introducing the gray values of the central pixel point and of the pixel points immediately before and after it in the horizontal and vertical directions, the gradient feature of the central pixel point is extracted from the gray value differences. This improves the extraction of feature pixel points and provides more accurate support for the subsequent identification of construction site dangerous behavior.
The calculation formula of the gradient angle of the pixel point is:

$$\theta(x,y)=\arctan\!\left(\frac{G_h(x,y)}{G_v(x,y)}\right)+\frac{\pi}{2}$$

wherein $\theta(x,y)$ is the gradient angle of the pixel at coordinates $(x,y)$ in the head position image $I_t$; $\arctan(\cdot)$ is the arc tangent function; $G_h(x,y)/G_v(x,y)$ is the ratio of the first value to the second value; $\pi/2$ is the preset translation factor, an empirical value.

It should be noted that adding the translation factor $\pi/2$ keeps the gradient angle in a non-negative interval, which avoids the calculation cost of handling positive and negative signs and improves the overall real-time performance of the algorithm to a certain extent.

The corrected gradient value and corresponding gradient angle of every pixel point in the head position image of each construction worker are calculated through the formulas for the corrected gradient value and the gradient angle, and the corrected gradient value and corresponding gradient angle of each pixel point then form the corrected gradient feature vector of that pixel point:

$$v(x,y)=\left(T(x,y),\,\theta(x,y)\right)$$

where $T(x,y)$ is the corrected gradient value and $\theta(x,y)$ is the gradient angle.
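A minimal NumPy sketch of the corrected gradient value and gradient angle described above, computed for interior pixels only. The function name is illustrative; `np.arctan2` is used so that a zero second value does not divide by zero, and it agrees with $\arctan(G_h/G_v)$ since both values are non-negative.

```python
import numpy as np

def corrected_gradient(gray):
    """Corrected gradient value T and gradient angle theta per interior pixel.

    G_h / G_v: mean absolute gray difference to the two horizontal /
    vertical 4-neighbours; T = sqrt(G_h^2 + G_v^2);
    theta = arctan(G_h / G_v) + pi/2, with pi/2 the preset translation
    factor described in the text.
    """
    g = np.asarray(gray, dtype=float)
    gh = 0.5 * (np.abs(g[1:-1, 1:-1] - g[1:-1, :-2])
                + np.abs(g[1:-1, 1:-1] - g[1:-1, 2:]))   # first value
    gv = 0.5 * (np.abs(g[1:-1, 1:-1] - g[:-2, 1:-1])
                + np.abs(g[1:-1, 1:-1] - g[2:, 1:-1]))   # second value
    t = np.sqrt(gh ** 2 + gv ** 2)                       # corrected gradient
    theta = np.arctan2(gh, gv) + np.pi / 2               # gradient angle
    return t, theta
```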
In order to calculate and extract the difference characteristic, the traditional FAST characteristic extraction algorithm is used for comparing and obtaining the gray level image, but the characteristic points extracted by the traditional FAST characteristic extraction algorithm are only calculated and characterized by using the gray level value, so that the expressiveness obtained by extracting the characteristic points is not strong, and the head model of the construction worker on the construction site cannot be aimed at.
The method for acquiring the initial characteristic pixel points in the head position image comprises the following steps: taking any pixel point in the head position image as a target pixel point, and taking the target pixel point as the center of a circle to obtain a circle corresponding to a preset radius; calculating cosine similarity of the modified gradient feature vectors between the target pixel point and each pixel point on the circle; calculating a cosine similarity mean value according to the cosine similarity corresponding to each pixel point on the circle; and when the cosine similarity corresponding to the pixel points on the circles with the preset number is smaller than the cosine similarity mean value, determining the target pixel point as an initial characteristic pixel point.
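A compact sketch of this voting test, assuming the 16 circle pixels are the standard radius-3 Bresenham circle used by FAST and that "a preset number" means half of the 16 points, as stated later in the text. The offset table, function name and vote threshold are illustrative.

```python
import numpy as np

# 16-point radius-3 circle offsets (as in FAST), ordered clockwise from the
# upper-left box -- an assumption matching the description of Fig. 2.
CIRCLE16 = [(-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3),
            (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3), (0, -3),
            (-1, -3), (-2, -2)]

def is_initial_feature(t_map, theta_map, r, c, min_votes=8):
    """Vote on pixel (r, c) using cosine similarity of corrected-gradient
    feature vectors (T, theta) against the 16 circle pixels.

    The pixel is an initial feature pixel point when at least min_votes
    similarities fall below their own mean (8, i.e. half of 16, per the text).
    """
    v0 = np.array([t_map[r, c], theta_map[r, c]], dtype=float)
    sims = []
    for dr, dc in CIRCLE16:
        v = np.array([t_map[r + dr, c + dc], theta_map[r + dr, c + dc]], float)
        denom = np.linalg.norm(v0) * np.linalg.norm(v) + 1e-12  # avoid /0
        sims.append(float(v0 @ v) / denom)
    sims = np.asarray(sims)
    return int((sims < sims.mean()).sum()) >= min_votes
```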
As an example, take the head position image $I_t$ of the $t$-th construction worker in the key frame image. For pixel point $i$ in $I_t$, a circle of preset radius 3 is drawn with pixel point $i$ as the center. As shown in fig. 2, the small black dot in the figure is the circle center (pixel point $i$), and the circle passes through 16 other pixel points (black boxes); the other pixel point corresponding to the black box in the upper-left corner is numbered 1, and the black boxes are numbered clockwise in sequence up to 16. The cosine similarity of the corrected gradient feature vectors between pixel point $i$ and each other pixel point on the circle is calculated as:

$$s_{i,j}=\frac{v_i\cdot v_j}{\left\|v_i\right\|\left\|v_j\right\|}$$

wherein $s_{i,j}$ is the cosine similarity of the corrected gradient feature vectors between pixel point $i$ and the $j$-th other pixel point on the corresponding circle; $v_i$ is the corrected gradient feature vector of pixel point $i$; $v_j$ is the corrected gradient feature vector of the $j$-th other pixel point on the circle; $\left\|\cdot\right\|$ takes the modulus length of the vector.
It should be noted that the larger the difference between the corrected gradient feature vectors of two pixel points, the smaller the corresponding cosine similarity.
According to the cosine similarity formula, the cosine similarity between pixel point i and each pixel point on its circle is calculated, and pixel point i is then voted on according to the cosine similarities of the pixel points at different positions on the circle to judge whether it is an initial characteristic pixel point.
In conventional voting, judging whether pixel point i is an initial characteristic pixel point depends on a manually set empirical threshold and is therefore subject to subjective factors. To avoid this, the cosine similarity mean value is calculated from the cosine similarities of the pixel points at different positions on the circle and used as the judgment threshold: when more than half of the cosine similarities of the pixel points on the circle are smaller than the mean value, pixel point i is confirmed as an initial characteristic pixel point of a construction worker's head. The cosine similarity mean value is calculated as:

$$\overline{\cos}_i=\frac{1}{N}\sum_{k=1}^{N}\cos_{i,k}$$

wherein \(\overline{\cos}_i\) is the cosine similarity mean value corresponding to pixel point i; \(\cos_{i,k}\) is the cosine similarity of the corrected gradient feature vectors between pixel point i and the k-th pixel point on the corresponding circle; and \(N\) is the number of pixel points on the circle corresponding to pixel point i. In the embodiment of the present invention, N = 16.
All initial characteristic pixel points in the head position image of the t-th construction worker are then acquired by applying the above judgment method for pixel point i to every pixel point in the image.
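As a rough illustration only (not the patent's own code), the voting scheme above can be sketched in Python with NumPy. The exact ordering of the 16 radius-3 circle offsets, the feature-array layout of (corrected gradient value, gradient angle), and the more-than-half voting rule are assumptions drawn from the description:

```python
import numpy as np

# Offsets of the 16 pixels a radius-3 circle passes through (FAST-style),
# listed clockwise starting near the upper-left, per the Fig. 2 description.
# The exact ordering is an illustrative assumption.
CIRCLE_OFFSETS = [(-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3),
                  (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2),
                  (1, -3), (0, -3), (-1, -3), (-2, -2)]

def cosine_similarity(u, v):
    """Cosine similarity of two corrected-gradient feature vectors."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / denom) if denom > 0 else 1.0

def initial_feature_pixels(features):
    """features: H x W x 2 array holding (corrected gradient value,
    gradient angle) per pixel. Returns a boolean mask marking the
    initial characteristic pixel points."""
    h, w, _ = features.shape
    mask = np.zeros((h, w), dtype=bool)
    for r in range(3, h - 3):
        for c in range(3, w - 3):
            sims = [cosine_similarity(features[r, c], features[r + dr, c + dc])
                    for dr, dc in CIRCLE_OFFSETS]
            mean_sim = sum(sims) / len(sims)    # adaptive threshold, no manual value
            votes = sum(s < mean_sim for s in sims)
            mask[r, c] = votes > len(sims) / 2  # more than half dissimilar
    return mask
```

Using the mean similarity of the circle itself as the vote threshold replaces the manually tuned empirical threshold of conventional voting, which is the point the description emphasizes.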
And S003, acquiring key points in the initial characteristic pixel points according to the coordinates of each initial characteristic pixel point in the head position image.
Specifically, all initial characteristic pixel points in the head position image of the t-th construction worker are obtained according to step S002; these are also the head key points of the construction worker. Because the difference between adjacent pixel points is small, the extracted initial characteristic pixel points may carry overlapping semantic representations, and redundant initial characteristic pixel points increase the computational cost of subsequent dangerous behavior identification and harm real-time performance. To avoid this defect, the distribution of the extracted initial characteristic pixel points is optimized with Gaussian distributions to obtain the key points among them.
Wherein, obtaining the key points in the initial characteristic pixel points comprises: constructing a Gaussian distribution model for the abscissa according to the abscissa value of each initial characteristic pixel point in the head position image; constructing a Gaussian distribution model for the ordinate according to the ordinate value of each initial characteristic pixel point in the head position image; and taking the initial characteristic pixel points whose abscissa and ordinate values both conform to the corresponding Gaussian distribution models as key points.
As an example, Gaussian fitting is performed on the abscissa values of all initial characteristic pixel points to obtain the Gaussian distribution model for the abscissa: the mean \(\mu_x\) and variance \(\sigma_x^2\) of the abscissa values are calculated, taken as the expectation and variance of the model, and the model is recorded as \(N(\mu_x,\sigma_x^2)\). Similarly, Gaussian fitting on the ordinate values of all initial characteristic pixel points yields the Gaussian distribution model for the ordinate: the mean \(\mu_y\) and variance \(\sigma_y^2\) of the ordinate values are calculated and the model is recorded as \(N(\mu_y,\sigma_y^2)\).

Gaussian distribution models are thus constructed from the coordinate information of all initial characteristic pixel points in a construction worker's head position image, and the initial characteristic pixel points whose abscissa and ordinate values both conform to the corresponding models are taken as key points; that is, the abscissa value \(x_k\) of a key point k obeys \(N(\mu_x,\sigma_x^2)\) and its ordinate value \(y_k\) obeys \(N(\mu_y,\sigma_y^2)\). Requiring the coordinate information of the key points to obey the corresponding Gaussian distribution models avoids the influence of overlapping feature expression among closely spaced key points.
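A minimal sketch of this key-point screening follows. The patent does not state numerically what it means for a coordinate to "obey" the fitted Gaussian model, so keeping points within one standard deviation of each axis mean is an illustrative assumption, as are the function and parameter names:

```python
import numpy as np

def select_key_points(points, n_sigma=1.0):
    """points: (K, 2) array of (x, y) coordinates of initial characteristic
    pixel points. Per-axis Gaussians N(mu, sigma^2) are fitted, and a point
    is kept only if both coordinates fall within n_sigma standard deviations
    of the respective mean. The patent only requires that coordinates 'obey'
    the fitted models; the 1-sigma window is an illustrative assumption."""
    pts = np.asarray(points, dtype=float)
    mu = pts.mean(axis=0)        # (mu_x, mu_y): expectations of the models
    sigma = pts.std(axis=0)      # (sigma_x, sigma_y): standard deviations
    keep = np.all(np.abs(pts - mu) <= n_sigma * sigma, axis=1)
    return pts[keep]
```

Outliers far from the cluster of head feature points fall outside the window on at least one axis and are dropped, which is exactly the redundancy-pruning effect the description aims for.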
Step S004, acquiring a standard head image of each construction worker correctly wearing a safety helmet; obtaining the helmet-wearing state value of the corresponding construction worker according to the color difference and corrected gradient value difference between the key points in the head position image and the pixel points at the same positions in the corresponding standard head image; and acquiring the helmet-wearing state value of each construction worker and identifying dangerous behaviors on the construction site according to these state values.
Specifically, to quickly and efficiently identify and judge whether a construction worker is currently wearing a safety helmet, a head position image of each construction worker correctly wearing a safety helmet is photographed in advance as a standard head image. The standard head images are acquired as follows: a standard head image library is established for all construction workers on the site, with one standard head image of correct helmet wearing per worker; each construction worker has an identity ID, and the standard head image of each construction worker appearing in the key frame image is retrieved from the library by that ID. In this way, inconsistencies caused by the differently sized head regions of different construction workers are avoided in the subsequent template matching comparison.
The embodiment of the invention compares the real-time head position image of a construction worker with the standard head image of a correctly worn safety helmet to detect the wearing condition, specifically as follows: acquiring the pixel point at the same coordinate position in the corresponding standard head image for each key point of the head position image, recorded as the matching pixel point; converting the head position image from RGB color space to HSV color space to obtain the H channel value of each of its pixel points, and likewise converting the corresponding standard head image to obtain the H channel value of each of its pixel points; calculating the absolute difference of H channel values and the absolute difference of corrected gradient values between each key point and its matching pixel point, and taking their product; and summing the products over all key points in the head position image, the sum being the helmet-wearing state value of the corresponding construction worker.
As an example, to quickly obtain the color feature information of a construction worker's head position, the head position image is converted from RGB color space to HSV color space to obtain the H channel value \(H_a\) of each pixel point; similarly, the corresponding standard head image is converted from RGB to HSV to obtain the H channel value \(H'_a\) of each of its pixel points. The corrected gradient value \(G'_a\) of each pixel point in the standard head image of the t-th construction worker is obtained by the same method as the corrected gradient values of the head position image. The helmet-wearing state value of the t-th construction worker is then obtained from the color difference and corrected gradient value difference between each key point in the head position image and the pixel point at the same position in the standard head image:

$$Z_t=\sum_{a=1}^{M}\left|H_a-H'_a\right|\cdot\left|G_a-G'_a\right|$$

wherein \(Z_t\) is the helmet-wearing state value of the t-th construction worker; \(M\) is the total number of key points in the head position image of the t-th construction worker; \(H_a\) is the H channel value of the a-th key point in the head position image; \(H'_a\) is the H channel value of the pixel point with the same coordinates in the standard head image, i.e. of the matching pixel point of the a-th key point; \(G_a\) is the corrected gradient value of the a-th key point; \(G'_a\) is the corrected gradient value of the matching pixel point; \(|\cdot|\) is the absolute value function; \(|H_a-H'_a|\) is the absolute difference of H channel values; and \(|G_a-G'_a|\) is the absolute difference of corrected gradient values.
It should be noted that, because the safety helmets worn on a construction site have a specific color, the greater the color difference between a key point in the head position image and the pixel point at the same position in the standard head image, the larger the absolute difference of H channel values, the larger the deviation from a correctly worn helmet, and the larger the corresponding helmet-wearing state value, indicating that the construction worker is not wearing the helmet correctly. Likewise, since the key points in the head position image differ markedly from their surrounding pixel points, the corrected gradient value of each key point is compared with that of the same-position pixel point in the standard head image: the larger the absolute difference of corrected gradient values, the greater the deviation from the gradient features of a correctly worn helmet, and again the larger the corresponding helmet-wearing state value.
Based on the calculation formula of the helmet-wearing state value of the t-th construction worker, the helmet-wearing state value of each construction worker in the key frame image is acquired, and dangerous behaviors on the construction site are then identified from these values: a helmet-wearing state threshold is set, and when a helmet-wearing state value is greater than this threshold, the corresponding construction worker is confirmed to be in the dangerous state of not wearing a safety helmet.
As an example, the helmet-wearing state values are mapped to the interval [0,1] with min-max normalization, and the helmet-wearing state threshold is set to 0.65. When a normalized state value exceeds this threshold, the corresponding construction worker is performing construction work without wearing a safety helmet and is in a high-risk state; the worker at that position should be promptly reminded, avoiding construction safety accidents caused by hidden dangerous behaviors on the site.
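The state-value computation and threshold check might be sketched as follows. The hue is computed directly from the standard HSV definition rather than a library call, and the key-point list, corrected-gradient maps, and function names are assumptions for illustration:

```python
import numpy as np

def hue_channel(img_rgb):
    """H channel (degrees in [0, 360)) from the standard HSV definition."""
    rgb = img_rgb.astype(float)
    if rgb.max() > 1.0:
        rgb = rgb / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    d = mx - mn
    h = np.zeros_like(mx)
    nz = d > 0                       # gray pixels keep hue 0
    rmax = nz & (mx == r)
    gmax = nz & (mx == g) & ~rmax
    bmax = nz & ~rmax & ~gmax
    h[rmax] = (60.0 * (g - b)[rmax] / d[rmax]) % 360.0
    h[gmax] = 60.0 * (b - r)[gmax] / d[gmax] + 120.0
    h[bmax] = 60.0 * (r - g)[bmax] / d[bmax] + 240.0
    return h

def helmet_state_value(head_rgb, std_rgb, key_points, grad_head, grad_std):
    """Sum over key points of |H difference| * |corrected gradient difference|
    against the standard (correctly worn) head image. Images must be the same
    size; grad_* are precomputed corrected-gradient maps (names assumed)."""
    h_head, h_std = hue_channel(head_rgb), hue_channel(std_rgb)
    return sum(abs(h_head[r, c] - h_std[r, c]) * abs(grad_head[r, c] - grad_std[r, c])
               for r, c in key_points)

def is_dangerous(state_values, threshold=0.65):
    """Min-max normalize the state values of all workers and flag those
    above the 0.65 threshold from the embodiment."""
    v = np.asarray(state_values, dtype=float)
    span = v.max() - v.min()
    norm = (v - v.min()) / span if span > 0 else np.zeros_like(v)
    return norm > threshold
```

Note that min-max normalization is computed over the batch of workers in the key frame, so a single worker whose state value towers over the rest is flagged even if absolute values drift with lighting.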
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
All the embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that are within the spirit of the present invention are intended to be included therein.

Claims (9)

1. The construction site dangerous behavior identification method based on image processing is characterized by comprising the following steps of:
acquiring a key frame image of a construction site to obtain a head position image of each construction worker in the key frame image;
acquiring a correction gradient value and a corresponding gradient angle of a corresponding pixel point according to the gray value difference between each pixel point in the head position image and the surrounding pixel points; determining initial characteristic pixel points in the head position image according to the difference of the corrected gradient values and the corresponding gradient angle difference between each pixel point in the head position image and other pixel points in the set neighborhood;
acquiring key points in the initial characteristic pixel points according to the coordinates of each initial characteristic pixel point in the head position image;
acquiring a standard head image of each construction worker who correctly wears the safety helmet; obtaining a state value of the corresponding construction worker for wearing the safety helmet according to the color difference and the correction gradient value difference between the key point in the head position image and the pixel point at the same position in the standard head image of the corresponding construction worker;
and acquiring the state value of the helmet worn by each building worker, and identifying dangerous behaviors of the building site according to the state value of the helmet worn.
2. The image processing-based construction site dangerous behavior recognition method of claim 1, wherein the determining of initial characteristic pixel points in the head position image comprises:
forming a corrected gradient feature vector of a corresponding pixel point by the corrected gradient value of each pixel point in the head position image and the corresponding gradient angle;
taking any pixel point in the head position image as a target pixel point, and taking the target pixel point as the center of a circle to obtain a circle corresponding to a preset radius; calculating the cosine similarity of the modified gradient feature vectors between the target pixel point and each pixel point on the circle;
calculating a cosine similarity mean value according to the cosine similarities corresponding to the pixel points on the circle; and when the cosine similarities of at least a preset number of pixel points on the circle are smaller than the cosine similarity mean value, determining the target pixel point as an initial characteristic pixel point.
3. The image processing-based construction site dangerous behavior identification method according to claim 1, wherein the obtaining of the key points in the initial feature pixel points comprises:
constructing a Gaussian distribution model corresponding to the abscissa according to the abscissa value of each initial characteristic pixel point in the head position image; constructing a Gaussian distribution model corresponding to a vertical coordinate according to the vertical coordinate value of each initial characteristic pixel point in the head position image;
and taking the initial characteristic pixel point corresponding to the Gaussian distribution model with the abscissa value and the ordinate value meeting the requirement as a key point.
4. The image processing-based construction site dangerous behavior recognition method according to claim 1, wherein said obtaining a helmet wearing state value corresponding to a construction worker comprises:
acquiring pixel points at the same coordinate position in a standard head image corresponding to a construction worker according to the coordinates of the key points in the head position image, and recording the pixel points as matching pixel points;
converting the head position image from RGB color space to HSV color space to obtain the H channel value of each pixel point in the head position image, and converting the standard head image corresponding to the construction worker from RGB color space to HSV color space to obtain the H channel value of each pixel point in the standard head image;
calculating the difference absolute value of the H channel value between the key point and the corresponding matched pixel point and the difference absolute value of the corrected gradient value to obtain the product of the difference absolute value of the H channel value and the difference absolute value of the corrected gradient value;
and adding products corresponding to all key points in the head position image, wherein the added result is used as a state value of the corresponding helmet worn by the construction worker.
5. The image processing-based construction site dangerous behavior recognition method according to claim 1, wherein the recognition of construction site dangerous behavior according to the state value of the helmet includes:
and setting a safety helmet wearing state threshold, and confirming that the corresponding construction worker is in a dangerous state without wearing a safety helmet when the safety helmet wearing state value is greater than the safety helmet wearing state threshold.
6. The image processing-based construction site dangerous behavior recognition method of claim 1, wherein the obtaining of the correction gradient value comprises:
acquiring neighborhood pixels in four neighborhoods of any pixel in the head position image, acquiring gray value difference absolute values between the pixel and two neighborhood pixels in the horizontal direction respectively, acquiring a mean value of the gray value difference absolute values in the horizontal direction, and recording the mean value as a first value; acquiring gray value difference absolute values between the pixel points and two adjacent pixel points in the vertical direction respectively, acquiring a mean value of the gray value difference absolute values in the vertical direction, and recording the mean value as a second value;
and performing quadratic evolution on an addition result between the square result of the first value and the square result of the second value, and taking an obtained result as a correction gradient value of the pixel point.
7. The image processing-based construction site dangerous behavior recognition method of claim 6, wherein the obtaining of the gradient angle comprises:
and calculating the ratio of the first value to the second value, adding the result obtained by taking the ratio as an independent variable of the arc tangent function and a preset translation factor, and taking the added result as a gradient angle.
8. The image processing-based construction site dangerous behavior recognition method of claim 1, wherein the acquiring of the key frame image of the construction site comprises:
acquiring construction work video data of construction workers at a preset sampling frequency; calculating the sum of gray values of each frame image in the construction work video data from the gray values of its pixel points; calculating the absolute difference of the gray-value sums of two adjacent frame images; and if the absolute difference is larger than an empirical difference threshold, retaining the image frame with the larger gray-value sum of the two adjacent frames as a key frame image of the construction site.
9. The image processing-based construction site dangerous behavior recognition method of claim 1, wherein the obtaining of the head position image of each construction worker in the key frame image comprises:
and acquiring the minimum bounding rectangle of the head position of each construction worker in the key frame image by using the YoloV5 model as the head position image of the corresponding construction worker.
CN202310119396.0A 2023-02-16 2023-02-16 Building site dangerous behavior identification method based on image processing Active CN115830719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310119396.0A CN115830719B (en) 2023-02-16 2023-02-16 Building site dangerous behavior identification method based on image processing

Publications (2)

Publication Number Publication Date
CN115830719A true CN115830719A (en) 2023-03-21
CN115830719B CN115830719B (en) 2023-04-28

Family

ID=85521529

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310119396.0A Active CN115830719B (en) 2023-02-16 2023-02-16 Building site dangerous behavior identification method based on image processing

Country Status (1)

Country Link
CN (1) CN115830719B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116777845A (en) * 2023-05-26 2023-09-19 浙江嘉宇工程管理有限公司 Building site safety risk intelligent assessment method and system based on artificial intelligence

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103617A (en) * 2017-03-27 2017-08-29 国机智能科技有限公司 The recognition methods of safety cap wearing state and system based on optical flow method
CN108549873A (en) * 2018-04-19 2018-09-18 北京华捷艾米科技有限公司 Three-dimensional face identification method and three-dimensional face recognition system
WO2019237520A1 (en) * 2018-06-11 2019-12-19 平安科技(深圳)有限公司 Image matching method and apparatus, computer device, and storage medium
WO2021000702A1 (en) * 2019-06-29 2021-01-07 华为技术有限公司 Image detection method, device, and system
CN115294533A (en) * 2022-09-30 2022-11-04 南通羿云智联信息科技有限公司 Building construction state monitoring method based on data processing
CN115471874A (en) * 2022-10-28 2022-12-13 山东新众通信息科技有限公司 Construction site dangerous behavior identification method based on monitoring video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
S.A.M. AL-SUMAIDAEE: "Multi-gradient features and elongated quinary pattern encoding for image-based facial expression recognition" *
阮晓虎; 李卫军; 覃鸿; 董肖莉; 张丽萍: "A face registration judgment method based on feature matching" (in Chinese) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116777845A (en) * 2023-05-26 2023-09-19 浙江嘉宇工程管理有限公司 Building site safety risk intelligent assessment method and system based on artificial intelligence
CN116777845B (en) * 2023-05-26 2024-02-13 浙江嘉宇工程管理有限公司 Building site safety risk intelligent assessment method and system based on artificial intelligence

Also Published As

Publication number Publication date
CN115830719B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
JP7113657B2 (en) Information processing device, information processing method, and program
CN102704215B (en) Automatic cutting method of embroidery cloth based on combination of DST file parsing and machine vision
CN106339702A (en) Multi-feature fusion based face identification method
CN102592288B (en) Method for matching pursuit of pedestrian target under illumination environment change condition
CN110232389A (en) A kind of stereoscopic vision air navigation aid based on green crop feature extraction invariance
CN105469069A (en) Safety helmet video detection method for production line data acquisition terminal
CN107066969A (en) A kind of face identification method
CN109506628A (en) Object distance measuring method under a kind of truck environment based on deep learning
CN115830719A (en) Construction site dangerous behavior identification method based on image processing
CN115115841B (en) Shadow spot image processing and analyzing method and system
CN112001244A (en) Computer-aided iris comparison method and device
CN115035088A (en) Helmet wearing detection method based on yolov5 and posture estimation
CN114049589A (en) Transformer substation monitoring system based on artificial intelligence
CN103533332B (en) A kind of 2D video turns the image processing method of 3D video
CN114241542A (en) Face recognition method based on image stitching
CN113435280A (en) Testimony verification method
CN112241695A (en) Method for recognizing portrait without safety helmet and with face recognition function
CN107220612B (en) Fuzzy face discrimination method taking high-frequency analysis of local neighborhood of key points as core
CN106846609A (en) It is a kind of based on perceiving the bank note face amount of Hash towards recognition methods
CN101777127B (en) Human body head detection method
CN104615985A (en) Identification method for person-face similarity
CN115273150A (en) Novel identification method and system for wearing safety helmet based on human body posture estimation
Yi et al. Face detection method based on skin color segmentation and facial component localization
CN103093195A (en) Number and image area clone recognition technology based on boundary energy
CN111626150A (en) Commodity identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant