CN114708544A - Intelligent violation monitoring helmet based on edge calculation and monitoring method thereof - Google Patents


Info

Publication number
CN114708544A
CN114708544A (application CN202210248908.9A)
Authority
CN
China
Prior art keywords: violation, scene, image, layer, similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210248908.9A
Other languages
Chinese (zh)
Inventor
杨海平
陈涛
陈梦月
唐思琪
苟先太
钱照国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Jiaoda Prestressed Engineering Testing Technology Co ltd
Cscec Southwest Consulting Co ltd
Southwest Jiaotong University
Original Assignee
Sichuan Jiaoda Prestressed Engineering Testing Technology Co ltd
Cscec Southwest Consulting Co ltd
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Jiaoda Prestressed Engineering Testing Technology Co ltd, Cscec Southwest Consulting Co ltd, Southwest Jiaotong University
Priority to CN202210248908.9A
Publication of CN114708544A
Legal status: Pending

Classifications

    • G06F18/22: Physics > Computing > Electric digital data processing > Pattern recognition > Analysing > Matching criteria, e.g. proximity measures
    • G06F18/24: Physics > Computing > Electric digital data processing > Pattern recognition > Analysing > Classification techniques
    • G06N3/045: Physics > Computing > Computing arrangements based on biological models > Neural networks > Architecture, e.g. interconnection topology > Combinations of networks
    • H04N7/18: Electricity > Electric communication technique > Pictorial communication, e.g. television > Television systems > Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G06F2218/12: Physics > Computing > Electric digital data processing > Aspects of pattern recognition specially adapted for signal processing > Classification; Matching

Abstract

The invention discloses an intelligent violation monitoring helmet based on edge computing and a monitoring method thereof. The monitoring method effectively realizes abnormal-behavior monitoring: when a violation occurs on a construction site, the helmet's acousto-optic (audible-and-visual) alarm is triggered automatically to alert personnel, serving the goal of safe production. Violations can be recognized even when the network connection is poor, the computational load on the helmet-borne edge device is reduced, and the real-time requirement is met. Because violation recognition moves with the wearer, the monitoring range is widened while cost and operator workload are reduced.

Description

Intelligent violation monitoring helmet based on edge calculation and monitoring method thereof
Technical Field
The invention belongs to the technical field of violation monitoring, and particularly relates to an intelligent violation monitoring helmet based on edge computing and a monitoring method thereof.
Background
Safe production is a perennially serious topic for construction sites, and to guarantee that construction proceeds in an orderly fashion, violations on construction sites must be accurately identified and warned against. At present, fixed cameras installed in public places collect video, which is then transmitted to a background server that automatically identifies abnormal behaviors in the footage. However, this approach has the following disadvantages:
1) when the frame rate of the acquired video is high, transmission delay prevents abnormal behaviors in the video from being identified in real time;
2) fixed cameras cover only a narrow monitoring range, and enlarging the range requires more cameras, which increases cost;
3) when the network connection is poor, behavior recognition cannot be performed at all.
In conclusion, the fixed-camera approach cannot meet the requirements of construction-site violation monitoring.
Disclosure of Invention
To address these defects in the prior art, the present invention provides an edge-computing-based intelligent violation monitoring method that solves the problems of recognition delay, narrow monitoring range, high cost, and dependence on network conditions found in existing behavior-monitoring methods.
To achieve the purpose of the invention, the invention adopts the following technical scheme: an intelligent violation monitoring helmet based on edge computing, comprising:
the scene camera is used for acquiring construction site images in different scenes;
the scene detection module is used for detecting whether scene change occurs according to the construction site image collected by the scene camera;
the 4G communication module is used for sending the scene image corresponding to the current scene detection result to the remote server;
the helmet-borne edge device, used for calling the violation recognition model built into the intelligent monitoring helmet to perform violation recognition on the current scene image according to the scene detection result;
and the remote server is used for calling the violation identification model to identify the violation according to the received image corresponding to the current scene detection result.
Further, the scene camera comprises an infrared camera and a visible light camera;
the infrared camera is used for collecting images of a construction site at night, and the visible light camera is used for collecting images of the construction site at daytime;
when the signal of the 4G communication module is weak or the scene changes, calling a built-in violation identification model through the edge device of the helmet to identify the violation;
when the signal of the 4G communication module is strong and the scene is not changed, the current scene image is sent to the remote server through the 4G communication module, and a built-in violation identification model is called to identify the violation.
An intelligent monitoring method for the intelligent violation monitoring helmet based on edge computing comprises the following steps:
s1, collecting scene images of a construction site;
s2, determining the signal strength of the current 4G communication module;
when the signal is weak, the process proceeds to step S4;
when the signal is strong, the process proceeds to step S3;
s3, judging whether the scene changes according to the collected construction site images;
if yes, go to step S4;
if not, go to step S5;
s4, directly transmitting the current scene image to the helmet-borne edge device, and proceeding to step S6;
s5, sending the current scene image to a remote server through a 4G communication module, and entering the step S6;
and S6, recognizing violations in the current scene image through the violation recognition model built into the helmet-borne edge device or the remote server, and obtaining the violation recognition result.
Further, the step S3 is specifically:
s31, extracting characteristic points of the image of the construction site;
s32, matching the feature points of the two adjacent images;
s33, judging, based on the feature-point matching result, whether

b/a < k

holds;
if yes, the scene has changed, and the process goes to step S4;
if not, the scene has not changed, and the process goes to step S5;
where b is the number of feature points matched between the two successive images, a is the number of feature points of the preceding image, and k is the set scene-change threshold.
Further, the violation recognition model in step S6 comprises an image preprocessing layer, a convolutional network layer, a pooling layer, a fully connected layer and a regression classification layer connected in sequence;
the input-output relation of the fully connected layer is:

a_i = Σ_j W_ij · x_j + b_i

where a_i is the output of the fully connected layer, W_ij is the weight applied to input x_j, b_i is a bias parameter, and x_j is the input to the fully connected layer.
Further, in step S6, the method for recognizing violations in the current scene image with the violation recognition model specifically comprises:
a1, extracting a candidate region of the current scene image by an image preprocessing layer by adopting a region merging algorithm;
a2, extracting a characteristic region of the current scene image through the convolution layer;
a3, mapping the extracted quadruple coordinates corresponding to the candidate region to the feature region through a convolution network;
a4, inputting the mapped feature areas into a pooling layer to obtain a corresponding target feature map;
and A5, sequentially inputting the target feature map into the fully connected layer and the regression classification layer to obtain the result identification box in the target feature map, i.e., the violation recognition result for the current scene image.
Further, the step a1 is specifically:
a11, generating an area set containing more than two areas based on the current scene image;
a12, calculating the similarity of two adjacent areas in the area set;
a13, combining two areas with the highest similarity into one area;
a14, judging whether the number of the areas in the current area set is more than 1;
if yes, returning to the step A12;
if not, go to step A15;
and A15, taking the area in the current area set as a candidate area in the current scene area.
Further, the similarity S(r_i, r_j) calculated in step A12 is:

S(r_i, r_j) = a_1·S_color(r_i, r_j) + a_2·S_texture(r_i, r_j) + a_3·S_size(r_i, r_j) + a_4·S_fill(r_i, r_j)

where S_color(r_i, r_j) is the color similarity, S_texture(r_i, r_j) the texture similarity, S_size(r_i, r_j) the size similarity, S_fill(r_i, r_j) the overlap similarity, a_1, a_2, a_3 and a_4 are the corresponding preset weights, and a_1 + a_2 + a_3 + a_4 = 1;

the color similarity S_color(r_i, r_j) is:

S_color(r_i, r_j) = Σ_{k=1..n} min(c_i^k, c_j^k)

the texture similarity S_texture(r_i, r_j) is:

S_texture(r_i, r_j) = Σ_{k=1..n} min(t_i^k, t_j^k)

the size similarity S_size(r_i, r_j) is:

S_size(r_i, r_j) = 1 - (size(r_i) + size(r_j)) / size(im)

the overlap similarity S_fill(r_i, r_j) is:

S_fill(r_i, r_j) = 1 - (size(BB_ij) - size(r_i) - size(r_j)) / size(im)

where r_i and r_j are two adjacent regions in the region set, min(·) is the minimum function, n is the number of histogram bins, c_i^k and c_j^k are the color histograms of regions r_i and r_j, t_i^k and t_j^k are their texture histograms, size(r_i) and size(r_j) are the sizes of regions r_i and r_j, size(im) is the size of the current scene image, and size(BB_ij) is the size of the bounding box enclosing the merged r_i and r_j.
Further, in step A5, the regression classification layer processes the input samples with a detection-classification probability algorithm and a detection-box regression algorithm;

the classification output probability S_j obtained by the detection-classification probability algorithm is:

S_j = e^{a_j} / Σ_{k=1..T} e^{a_k}

where a_j is the j-th value in the input sample vector, a_k is the k-th value of the input sample vector, and T is the number of classes;

the loss of the classification output obtained by the detection-classification probability algorithm in the regression classification layer is L = -log S_j;

when the detection-box regression algorithm processes an input sample, the loss function L_loc(t, t*) of the output is:

L_loc(t, t*) = Σ_{i∈{x,y,w,h}} smooth_L1(t_i - t_i*)

where t_i is the predicted identification-box position output by the regression classification layer, t_i* is the position of the corresponding ground-truth target box, and smooth_L1(x) is the loss-function curve:

smooth_L1(x) = 0.5·x², if |x| < 1; |x| - 0.5, otherwise.
the invention has the beneficial effects that:
(1) the intelligent illegal behavior monitoring method provided by the invention effectively realizes abnormal behavior monitoring, and when the illegal behavior occurs in a construction site, the sound and light alarm device of the helmet can be automatically triggered to identify personnel, so that the aim of safe production is fulfilled;
(2) the intelligent helmet provided by the invention is provided with a behavior recognition model, so that the illegal behavior can be recognized under the condition of unsmooth network;
(3) the method can finish the illegal behavior identification along with the movement of personnel, thereby improving the monitoring range and reducing the cost;
(4) the monitoring method can selectively identify the violation behavior according to the scene change, thereby not only reducing the calculated amount of the edge-carried equipment of the helmet, but also meeting the requirement of real-time property;
(5) the method has the characteristic of high automatic processing level, and greatly reduces the workload of operators.
Drawings
Fig. 1 is a structural block diagram of the edge-computing-based intelligent violation monitoring helmet of the present invention.
Fig. 2 is a flowchart of the monitoring method of the edge-computing-based intelligent violation monitoring helmet of the present invention.
Detailed Description
The following description of embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. To those of ordinary skill in the art, various changes are possible without departing from the spirit and scope of the invention as defined by the appended claims, and everything made using the inventive concept falls under protection.
Example 1:
The basic idea of the invention is to perform violation monitoring, including irregular wearing of safety helmets, personnel entering dangerous areas, and failure to wear protective equipment during electric-welding operations, from the image data acquired by the helmet; the monitoring has two modes, one using the helmet-borne edge computing device and one using a remote server. Specifically, as shown in fig. 1, the edge-computing-based intelligent violation monitoring helmet of this embodiment comprises:
the scene camera is used for acquiring construction site images in different scenes;
the scene detection module is used for detecting whether scene change occurs according to the construction site image collected by the scene camera;
the 4G communication module is used for sending the scene image corresponding to the current scene detection result to the remote server;
the helmet-borne edge device, used for calling the violation recognition model built into the intelligent monitoring helmet to perform violation recognition on the current scene image according to the scene detection result;
and the remote server is used for calling the violation identification model to identify the violation according to the received image corresponding to the current scene detection result.
Specifically, the scene camera in this embodiment includes an infrared camera and a visible-light camera: the infrared camera is used for collecting images of the construction site at night, and the visible-light camera is used for collecting images of the construction site in the daytime;
when the signal of the 4G communication module is weak or the scene changes, calling a built-in violation identification model through the edge device of the helmet to identify the violation;
when the signal of the 4G communication module is strong and the scene is not changed, the current scene image is sent to the remote server through the 4G communication module, and a built-in violation identification model is called to identify the violation.
Example 2:
This embodiment provides a method for realizing intelligent violation monitoring based on the above intelligent monitoring helmet, as shown in fig. 2, comprising the following steps:
s1, collecting scene images of a construction site;
s2, determining the signal strength of the current 4G communication module;
when the signal is weak, the process proceeds to step S4;
when the signal is strong, the process proceeds to step S3;
s3, judging whether the scene changes according to the collected construction site images;
if yes, go to step S4;
if not, go to step S5;
s4, directly transmitting the current scene image to the helmet-borne edge device, and proceeding to step S6;
s5, sending the current scene image to a remote server through a 4G communication module, and entering the step S6;
and S6, recognizing violations in the current scene image through the violation recognition model built into the helmet-borne edge device or the remote server, and obtaining the violation recognition result.
The scene image of the construction site in step S1 of the present embodiment includes a night construction site scene image captured by the infrared camera and a day construction site scene image captured by the visible light camera.
In step S2 of this embodiment, the signal strength of the 4G communication module determines which violation recognition path is taken, which improves the timeliness and accuracy of violation monitoring: when the 4G signal strength lies in the range of -75 dBm to -85 dBm the signal is considered strong; otherwise it is considered weak.
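By way of illustration only, the dispatch logic of steps S2 to S5 can be sketched as follows (Python; the threshold band and the function names run_edge_model and send_to_server are hypothetical placeholders, not part of the patent):

```python
# Sketch of the S2-S5 dispatch logic described above (all names hypothetical).
STRONG_MIN_DBM = -85   # assumed lower bound of the "strong" band
STRONG_MAX_DBM = -75   # assumed upper bound of the "strong" band

def dispatch(image, rssi_dbm, scene_changed, run_edge_model, send_to_server):
    """Route a scene image to the helmet edge device or the remote server.

    rssi_dbm      -- current 4G signal strength in dBm
    scene_changed -- result of the S3 scene-change test (b/a < k)
    """
    signal_strong = STRONG_MIN_DBM <= rssi_dbm <= STRONG_MAX_DBM
    if not signal_strong or scene_changed:
        # S4: weak signal or changed scene -> local edge inference
        return run_edge_model(image)
    # S5: strong signal and unchanged scene -> remote server inference
    return send_to_server(image)
```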
Step S3 of this embodiment specifically includes:
s31, extracting characteristic points of the image of the construction site;
s32, matching the feature points of the two adjacent images;
s33, judging, based on the feature-point matching result, whether

b/a < k

holds;
if yes, the scene has changed, and the process goes to step S4;
if not, the scene has not changed, and the process goes to step S5;
where b is the number of feature points matched between the two successive images, a is the number of feature points of the preceding image, and k is the set scene-change threshold.
The above step S31 includes the following steps:
(1) constructing the scale space:

L(x, y, σ) = G(x, y, σ) * I(x, y)

G(x, y, σ) = (1 / (2π·σ²)) · e^(-(x² + y²) / (2σ²))

D(x, y, σ) = (G(x, y, kσ) - G(x, y, σ)) * I(x, y)

where (x, y) are spatial coordinates, L(x, y, σ) is the scale space of the two-dimensional image, G(x, y, σ) is the variable-scale Gaussian function, D(x, y, σ) is the difference-of-Gaussian scale space, I(x, y) is the actual image, k ∈ [1, 3] selects the scale, and σ controls the degree of smoothing of the image (the larger σ is, the more blurred the image); here σ = 0.5.
(2) determining the location and scale of the feature points:
after the difference-of-Gaussian scale space is obtained, each pixel of every layer is compared with its neighbours in the image domain and the scale domain, in the same layer and in the layers above and below, so that candidate feature points (local extrema) are found;

D(X) = D + (∂D/∂X)ᵀ·X + (1/2)·Xᵀ·(∂²D/∂X²)·X

where D(X) is the fitting function; substituting the difference-of-Gaussian scale space D(x, y, σ) into the above expression and setting its derivative to 0 yields the precise location and scale of each feature point;
(3) describing the feature points:
the above process determines the scale and orientation of each feature point; the coordinate axes are rotated to the feature-point orientation, gradient information in 8 directions is computed in a 4 × 4 window at the feature point's scale, and each feature point is represented by a 128-dimensional feature vector.
In step S32 above, since the feature points are represented by 128-dimensional feature vectors, matching is computed from the distance between vectors, with the formulas:

d(u, v) = sqrt( Σ_{i=1..128} (u_i - v_i)² )

γ = d_1 / d_2

where u and v are feature points of the preceding and succeeding images respectively, u_i and v_i are the i-th components of the two feature vectors, d(u, v) is the distance between a feature point and a candidate matching feature point in the other image, and d_1 and d_2 are the minimum and second-minimum distances among the d(u, v). When γ is smaller than a set threshold, the two feature points match; otherwise they do not.
The scene-change threshold k set in step S33 is 0.3.
After the scene-change value b/a has been computed: if the scene does not change within the set number of frames, the image data for that period is transmitted to the remote server for processing via the on-board 4G communication module, and the number of video frames used to compute the scene-change value in the next round is increased by 5, up to a maximum of 20; if the scene has changed, the helmet's edge computing device is invoked directly to process the images locally and perform violation recognition.
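For illustration, steps S31 to S33 can be sketched with OpenCV's SIFT implementation as follows; the patent does not name a library, so the use of OpenCV, the ratio-test threshold of 0.8 and the helper name scene_changed are assumptions:

```python
import cv2

def scene_changed(prev_gray, curr_gray, k=0.3, ratio=0.8):
    """Return True if the scene changed between two grayscale frames,
    i.e. whether b/a < k from step S33 holds."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)  # S31: feature points
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None or len(kp1) == 0:
        return True  # no usable features: treat as a change (assumption)
    b = 0
    for pair in cv2.BFMatcher().knnMatch(des1, des2, k=2):  # S32: two nearest
        # gamma = d1/d2 nearest-neighbour ratio test from step S32
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            b += 1
    a = len(kp1)          # feature points of the preceding image
    return (b / a) < k    # S33: scene-change criterion
```

In this sketch b counts the matches that pass the γ = d_1/d_2 ratio test of step S32, and a is the feature-point count of the preceding frame.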
In this embodiment, the violation recognition model in step S6 comprises an image preprocessing layer, a convolutional network layer, a pooling layer, a fully connected layer and a regression classification layer connected in sequence;
the image preprocessing layer and the regression classification layer in the recognition model are abstract concepts, corresponding respectively to A1 and A5 in the subsequent steps; the input-output relation of the fully connected layer is:

a_i = Σ_j W_ij · x_j + b_i

where a_i is the output of the fully connected layer, W_ij is the weight applied to input x_j, b_i is a bias parameter, and x_j is the input to the fully connected layer.
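As a minimal numerical illustration of this relation (NumPy; the example values are arbitrary):

```python
import numpy as np

# a_i = sum_j W_ij * x_j + b_i, for the whole layer at once.
def fully_connected(W, x, b):
    return W @ x + b

W = np.array([[0.2, -0.5], [0.1, 0.3]])  # example weights W_ij
x = np.array([1.0, 2.0])                 # layer input x_j
b = np.array([0.05, -0.1])               # bias b_i
print(fully_connected(W, x, b))          # layer output a_i -> [-0.75  0.6 ]
```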
Specifically, based on the above model structure, in step S6 the method for recognizing violations in the current scene image with the violation recognition model is as follows:
a1, extracting a candidate region of the current scene image by an image preprocessing layer by adopting a region merging algorithm;
a2, extracting a characteristic region of the current scene image through the convolution layer;
a3, mapping the extracted quadruple coordinates corresponding to the candidate region to the feature region through a convolution network;
a4, inputting the mapped feature region into a pooling layer to obtain a corresponding target feature map;
and A5, sequentially inputting the target feature map into the fully connected layer and the regression classification layer to obtain the result identification box in the target feature map, i.e., the violation recognition result for the current scene image.
The step a1 is specifically:
a11, generating an area set containing more than two areas based on the current scene image;
a12, calculating the similarity of two adjacent areas in the area set;
The similarity S(r_i, r_j) in this step is:

S(r_i, r_j) = a_1·S_color(r_i, r_j) + a_2·S_texture(r_i, r_j) + a_3·S_size(r_i, r_j) + a_4·S_fill(r_i, r_j)

where S_color(r_i, r_j) is the color similarity, S_texture(r_i, r_j) the texture similarity, S_size(r_i, r_j) the size similarity, S_fill(r_i, r_j) the overlap similarity, a_1, a_2, a_3 and a_4 are the corresponding preset weights, and a_1 + a_2 + a_3 + a_4 = 1;

the color similarity S_color(r_i, r_j) is:

S_color(r_i, r_j) = Σ_{k=1..n} min(c_i^k, c_j^k)

the texture similarity S_texture(r_i, r_j) is:

S_texture(r_i, r_j) = Σ_{k=1..n} min(t_i^k, t_j^k)

the size similarity S_size(r_i, r_j) is:

S_size(r_i, r_j) = 1 - (size(r_i) + size(r_j)) / size(im)

the overlap similarity S_fill(r_i, r_j) is:

S_fill(r_i, r_j) = 1 - (size(BB_ij) - size(r_i) - size(r_j)) / size(im)

where r_i and r_j are two adjacent regions in the region set, min(·) is the minimum function, n is the number of histogram bins, c_i^k and c_j^k are the color histograms of regions r_i and r_j, t_i^k and t_j^k are their texture histograms, size(r_i) and size(r_j) are the sizes of regions r_i and r_j, size(im) is the size of the current scene image, and size(BB_ij) is the size of the bounding box enclosing the merged r_i and r_j;
a13, combining the two areas with the highest similarity into one area;
a14, judging whether the number of the areas in the current area set is more than 1;
if yes, returning to the step A12;
if not, go to step A15;
and A15, taking the area in the current area set as a candidate area in the current scene area.
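A compact sketch of the A11 to A15 merging loop is given below (Python/NumPy); the region representation, the equal weights a_1 = a_2 = a_3 = a_4 = 0.25, and the choice to also keep intermediate regions as candidates (in the spirit of selective search) are assumptions beyond the literal wording:

```python
import numpy as np

def bbox_union_area(b1, b2):
    """Area of the box enclosing (x0, y0, x1, y1) boxes b1 and b2."""
    x0, y0 = min(b1[0], b2[0]), min(b1[1], b2[1])
    x1, y1 = max(b1[2], b2[2]), max(b1[3], b2[3])
    return (x1 - x0) * (y1 - y0)

def similarity(ri, rj, im_size, w=(0.25, 0.25, 0.25, 0.25)):
    """S(ri, rj) from step A12; the equal weights w are assumed."""
    s_color = np.minimum(ri["color"], rj["color"]).sum()        # S_color
    s_texture = np.minimum(ri["texture"], rj["texture"]).sum()  # S_texture
    s_size = 1 - (ri["size"] + rj["size"]) / im_size            # S_size
    bb = bbox_union_area(ri["bbox"], rj["bbox"])                # size(BB_ij)
    s_fill = 1 - (bb - ri["size"] - rj["size"]) / im_size       # S_fill
    return w[0]*s_color + w[1]*s_texture + w[2]*s_size + w[3]*s_fill

def merge(ri, rj):
    """A13: merge two regions (size-weighted histogram average, assumed)."""
    s = ri["size"] + rj["size"]
    bbox = (min(ri["bbox"][0], rj["bbox"][0]), min(ri["bbox"][1], rj["bbox"][1]),
            max(ri["bbox"][2], rj["bbox"][2]), max(ri["bbox"][3], rj["bbox"][3]))
    return {"color": (ri["color"]*ri["size"] + rj["color"]*rj["size"]) / s,
            "texture": (ri["texture"]*ri["size"] + rj["texture"]*rj["size"]) / s,
            "size": s, "bbox": bbox}

def candidate_regions(regions, im_size):
    """A11-A15 merging loop; every pair is treated as adjacent for brevity."""
    candidates = list(regions)
    while len(regions) > 1:                                     # A14
        pairs = [(i, j) for i in range(len(regions))
                 for j in range(i + 1, len(regions))]
        i, j = max(pairs, key=lambda p: similarity(regions[p[0]],
                                                   regions[p[1]], im_size))
        merged = merge(regions[i], regions[j])                  # A13
        regions = [r for k, r in enumerate(regions)
                   if k not in (i, j)] + [merged]
        candidates.append(merged)
    return candidates                                           # A15
```

Each region here is a plain dict with precomputed "color" and "texture" histograms (NumPy arrays), a pixel "size" and a "bbox"; how those are produced in A11 is left open, as in the patent.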
In step A2, the feature regions are extracted by the convolutional layers, each of which has its own convolution kernels: the kernels of the first layer are convolved with the input image to obtain the first-layer feature region, the kernels of the next layer are convolved with the previous layer's feature region, and so on, yielding multi-layer feature regions.
The mapping in step A3 proceeds as follows: the coordinate system of the original image is mapped onto the feature region extracted by the convolutional network. Because the image is downscaled as it passes through the convolutional layers, the quadruple coordinates of each candidate region must be mapped correspondingly so that candidate regions and feature regions can be used together; for example, if the generated feature region is reduced to 1/n the size of the original image, the quadruple coordinates of the candidate region are divided by n.
In step A4, the pooling layer divides the mapped feature region into equal-sized parts with a grid; when the grid is k × k, max pooling in the pooling layer yields a target feature map of size k × k.
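A minimal NumPy sketch of the A3 coordinate mapping and the A4 k × k max pooling; the downscaling factor n = 16 and k = 7 are illustrative assumptions:

```python
import numpy as np

def roi_max_pool(feature_map, box, n=16, k=7):
    """Map a candidate box from image coordinates onto the feature map (A3)
    and max-pool it onto a fixed k x k grid (A4).

    feature_map -- 2-D array (one channel, for simplicity)
    box         -- (x0, y0, x1, y1) in original-image coordinates
    n           -- downscaling factor of the convolutional stack (assumed)
    """
    x0, y0, x1, y1 = (int(c / n) for c in box)      # A3: divide coords by n
    region = feature_map[y0:y1 + 1, x0:x1 + 1]
    h, w = region.shape
    ys = np.linspace(0, h, k + 1, dtype=int)        # k x k grid edges
    xs = np.linspace(0, w, k + 1, dtype=int)
    out = np.zeros((k, k), dtype=feature_map.dtype)
    for i in range(k):
        for j in range(k):
            cell = region[ys[i]:max(ys[i + 1], ys[i] + 1),
                          xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = cell.max()                  # max pooling per cell
    return out

# Example: a 56x56 feature map and a box from a nominally 896x896 image.
fmap = np.random.rand(56, 56)
print(roi_max_pool(fmap, (100, 120, 700, 650)).shape)  # -> (7, 7)
```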
In step A5, the regression classification layer processes the input samples with a detection-classification probability algorithm and a detection-box regression algorithm;

the classification output probability S_j obtained by the detection-classification probability algorithm is:

S_j = e^{a_j} / Σ_{k=1..T} e^{a_k}

where a_j is the j-th value in the input sample vector, a_k is the k-th value of the input sample vector, and T is the number of classes;

the loss of the classification output obtained by the detection-classification probability algorithm in the regression classification layer is L = -log S_j;

when the detection-box regression algorithm processes an input sample, the loss function L_loc(t, t*) of the output is:

L_loc(t, t*) = Σ_{i∈{x,y,w,h}} smooth_L1(t_i - t_i*)

where t_i is the predicted identification-box position output by the regression classification layer, t_i* is the position of the corresponding ground-truth target box, and smooth_L1(x) is the loss-function curve:

smooth_L1(x) = 0.5·x², if |x| < 1; |x| - 0.5, otherwise.
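The two losses of the regression classification layer can be rendered in NumPy as follows, assuming the smooth-L1 form reconstructed above (the example values are arbitrary):

```python
import numpy as np

def softmax_loss(a, j):
    """Classification probability S_j and loss L = -log S_j for class j."""
    e = np.exp(a - a.max())          # shifted for numerical stability
    s = e / e.sum()                  # S_j = e^{a_j} / sum_k e^{a_k}
    return s[j], -np.log(s[j])

def smooth_l1(x):
    """Loss curve: 0.5 x^2 if |x| < 1, else |x| - 0.5."""
    x = np.abs(x)
    return np.where(x < 1, 0.5 * x**2, x - 0.5)

def box_regression_loss(t, t_star):
    """L_loc(t, t*) = sum over (x, y, w, h) of smooth_L1(t_i - t_i*)."""
    return smooth_l1(np.asarray(t) - np.asarray(t_star)).sum()

scores = np.array([1.2, 0.3, -0.8, 2.1])  # example layer outputs a_k, T = 4
print(softmax_loss(scores, j=3))          # probability and loss of class 3
print(box_regression_loss([0.5, 0.2, 1.8, 0.1], [0.4, 0.0, 0.3, 0.1]))
```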

Claims (9)

1. An intelligent violation monitoring helmet based on edge computing, comprising:
the scene camera is used for acquiring construction site images in different scenes;
the scene detection module is used for detecting whether scene change occurs according to the construction site image collected by the scene camera;
the 4G communication module is used for sending the scene image corresponding to the current scene detection result to the remote server;
the helmet-borne edge device, used for calling the violation recognition model built into the intelligent monitoring helmet to perform violation recognition on the current scene image according to the scene detection result;
and the remote server is used for calling the violation identification model to identify the violation according to the received image corresponding to the current scene detection result.
2. The intelligent violation monitoring helmet based on edge computing according to claim 1, wherein the scene camera comprises an infrared camera and a visible-light camera;
the infrared camera is used for collecting images of a construction site at night, and the visible light camera is used for collecting images of a construction site at daytime;
when the signal of the 4G communication module is weak or the scene changes, calling a built-in violation identification model through the edge device of the helmet to identify the violation;
when the signal of the 4G communication module is strong and the scene is not changed, the current scene image is sent to the remote server through the 4G communication module, and a built-in violation identification model is called to identify the violation.
3. An intelligent monitoring method using the intelligent violation monitoring helmet based on edge computing, characterized by comprising the following steps:
s1, collecting scene images of a construction site;
s2, determining the signal strength of the current 4G communication module;
when the signal is weak, the process proceeds to step S4;
when the signal is strong, the process proceeds to step S3;
s3, judging whether the scene changes according to the collected construction site images;
if yes, go to step S4;
if not, go to step S5;
s4, directly transmitting the current scene image to the helmet-borne edge device, and proceeding to step S6;
s5, sending the current scene image to a remote server through a 4G communication module, and entering the step S6;
and S6, recognizing violations in the current scene image through the violation recognition model built into the helmet-borne edge device or the remote server, and obtaining the violation recognition result.
4. The intelligent violation monitoring method based on edge computing according to claim 3, wherein step S3 specifically comprises:
s31, extracting characteristic points of the image of the construction site;
s32, matching the feature points of the two adjacent images;
s33, judging, based on the feature-point matching result, whether

b/a < k

holds;
if yes, the scene has changed, and the process goes to step S4;
if not, the scene has not changed, and the process goes to step S5;
where b is the number of feature points matched between the two successive images, a is the number of feature points of the preceding image, and k is the set scene-change threshold.
5. The intelligent violation monitoring method based on edge computing according to claim 4, wherein the violation recognition model in step S6 comprises an image preprocessing layer, a convolutional network layer, a pooling layer, a fully connected layer and a regression classification layer connected in sequence;
the input-output relation of the fully connected layer is:

a_i = Σ_j W_ij · x_j + b_i

where a_i is the output of the fully connected layer, W_ij is the weight applied to input x_j, b_i is a bias parameter, and x_j is the input to the fully connected layer.
6. The intelligent violation monitoring method based on edge computing according to claim 5, wherein in step S6 the method for recognizing violations in the current scene image with the violation recognition model specifically comprises:
a1, extracting a candidate region of the current scene image by an image preprocessing layer by adopting a region merging algorithm;
a2, extracting the characteristic area of the current scene image through the convolution layer;
a3, mapping the extracted quadruple coordinates corresponding to the candidate region to the feature region through a convolution network;
a4, inputting the mapped feature areas into a pooling layer to obtain a corresponding target feature map;
and A5, sequentially inputting the target feature map into the fully connected layer and the regression classification layer to obtain the result identification box in the target feature map, i.e., the violation recognition result for the current scene image.
7. The intelligent violation monitoring method based on edge computing according to claim 6, wherein step A1 specifically comprises:
a11, generating an area set containing more than two areas based on the current scene image;
a12, calculating the similarity of two adjacent areas in the area set;
a13, combining two areas with the highest similarity into one area;
a14, judging whether the number of the areas in the current area set is more than 1;
if yes, returning to the step A12;
if not, go to step A15;
and A15, taking the area in the current area set as a candidate area in the current scene area.
8. The intelligent violation monitoring method based on edge computing according to claim 7, wherein the similarity S(r_i, r_j) calculated in step A12 is:

S(r_i, r_j) = a_1·S_color(r_i, r_j) + a_2·S_texture(r_i, r_j) + a_3·S_size(r_i, r_j) + a_4·S_fill(r_i, r_j)

where S_color(r_i, r_j) is the color similarity, S_texture(r_i, r_j) the texture similarity, S_size(r_i, r_j) the size similarity, S_fill(r_i, r_j) the overlap similarity, a_1, a_2, a_3 and a_4 are the corresponding preset weights, and a_1 + a_2 + a_3 + a_4 = 1;

the color similarity S_color(r_i, r_j) is:

S_color(r_i, r_j) = Σ_{k=1..n} min(c_i^k, c_j^k)

the texture similarity S_texture(r_i, r_j) is:

S_texture(r_i, r_j) = Σ_{k=1..n} min(t_i^k, t_j^k)

the size similarity S_size(r_i, r_j) is:

S_size(r_i, r_j) = 1 - (size(r_i) + size(r_j)) / size(im)

the overlap similarity S_fill(r_i, r_j) is:

S_fill(r_i, r_j) = 1 - (size(BB_ij) - size(r_i) - size(r_j)) / size(im)

where r_i and r_j are two adjacent regions in the region set, min(·) is the minimum function, n is the number of histogram bins, c_i^k and c_j^k are the color histograms of regions r_i and r_j, t_i^k and t_j^k are their texture histograms, size(r_i) and size(r_j) are the sizes of regions r_i and r_j, size(im) is the size of the current scene image, and size(BB_ij) is the size of the bounding box enclosing the merged r_i and r_j.
9. The intelligent violation monitoring method based on edge computing according to claim 6, wherein in step A5 the regression classification layer processes the input samples with a detection-classification probability algorithm and a detection-box regression algorithm;

the classification output probability S_j obtained by the detection-classification probability algorithm is:

S_j = e^{a_j} / Σ_{k=1..T} e^{a_k}

where a_j is the j-th value in the input sample vector, a_k is the k-th value of the input sample vector, and T is the number of classes;

the loss of the classification output obtained by the detection-classification probability algorithm in the regression classification layer is L = -log S_j;

when the detection-box regression algorithm processes an input sample, the loss function L_loc(t, t*) of the output is:

L_loc(t, t*) = Σ_{i∈{x,y,w,h}} smooth_L1(t_i - t_i*)

where t_i is the predicted identification-box position output by the regression classification layer, t_i* is the position of the corresponding ground-truth target box, and smooth_L1(x) is the loss-function curve:

smooth_L1(x) = 0.5·x², if |x| < 1; |x| - 0.5, otherwise.
CN202210248908.9A 2022-03-14 2022-03-14 Intelligent violation monitoring helmet based on edge calculation and monitoring method thereof Pending CN114708544A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210248908.9A CN114708544A (en) 2022-03-14 2022-03-14 Intelligent violation monitoring helmet based on edge calculation and monitoring method thereof


Publications (1)

Publication Number Publication Date
CN114708544A true CN114708544A (en) 2022-07-05

Family

ID=82167920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210248908.9A Pending CN114708544A (en) 2022-03-14 2022-03-14 Intelligent violation monitoring helmet based on edge calculation and monitoring method thereof

Country Status (1)

Country Link
CN (1) CN114708544A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116562824A (en) * 2023-05-25 2023-08-08 闽通数智安全顾问(杭州)有限公司 Highway engineering full life cycle project management method and system
CN116562824B (en) * 2023-05-25 2023-11-24 闽通数智安全顾问(杭州)有限公司 Highway engineering full life cycle project management method and system

Similar Documents

Publication Publication Date Title
CN110717414B (en) Target detection tracking method, device and equipment
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
CN113052876B (en) Video relay tracking method and system based on deep learning
CN102254394A (en) Antitheft monitoring method for poles and towers in power transmission line based on video difference analysis
CN112183472A (en) Method for detecting whether test field personnel wear work clothes or not based on improved RetinaNet
CN113034378B (en) Method for distinguishing electric automobile from fuel automobile
CN111881749A (en) Bidirectional pedestrian flow statistical method based on RGB-D multi-modal data
CN115797736B (en) Training method, device, equipment and medium for target detection model and target detection method, device, equipment and medium
CN112287823A (en) Facial mask identification method based on video monitoring
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN111460917B (en) Airport abnormal behavior detection system and method based on multi-mode information fusion
CN115171022A (en) Method and system for detecting wearing of safety helmet in construction scene
CN114648748A (en) Motor vehicle illegal parking intelligent identification method and system based on deep learning
CN101320477B (en) Human body tracing method and equipment thereof
CN115346155A (en) Ship image track extraction method for visual feature discontinuous interference
CN115049954A (en) Target identification method, device, electronic equipment and medium
CN114708544A (en) Intelligent violation monitoring helmet based on edge calculation and monitoring method thereof
CN116110006B (en) Scenic spot tourist abnormal behavior identification method for intelligent tourism system
CN112017213A (en) Target object position updating method and system
CN116862832A (en) Three-dimensional live-action model-based operator positioning method
Liu et al. STCN-Net: A novel multi-feature stream fusion visibility estimation approach
CN114912536A (en) Target identification method based on radar and double photoelectricity
CN112541403B (en) Indoor personnel falling detection method by utilizing infrared camera
CN113076825A (en) Transformer substation worker climbing safety monitoring method
CN111985331A (en) Detection method and device for preventing secret of business from being stolen

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination