CN113361502B - Garden perimeter intelligent early warning method based on edge group calculation - Google Patents
Garden perimeter intelligent early warning method based on edge group calculation
- Publication number
- CN113361502B (application CN202110911265.7A)
- Authority
- CN
- China
- Prior art keywords
- camera
- image
- attention mechanism
- cameras
- early warning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an intelligent early warning method for a garden perimeter based on edge group calculation. Cameras are deployed around the designated danger area, and a Yolov5 network with a dual attention mechanism performs face detection on the images they capture. The detection result for the image captured by the master camera is then judged against set thresholds, and a result judged as suspected triggers enhancement judgment by the slave cameras. During enhancement judgment, the mapping of the suspected face pixels into the image captured by each slave camera is first obtained; if the mapped coordinates fall within that slave camera's danger area, the camera is added to the candidate set. Finally, each camera in the candidate set performs face detection, the results are scored with a weighted score-and-vote system, and an early warning is issued when preset conditions are met.
Description
Technical Field
The invention relates to an intelligent early warning method for a garden perimeter based on edge group calculation, and belongs to the fields of edge group calculation and intelligent garden safety.
Background
The intelligent garden fuses emerging information technologies such as the Internet of Things, intelligent terminals, and big-data cloud computing with the modern ecological garden to achieve mutual sensing, mutual awareness, and interaction between people and nature. Many areas in a garden need to be guarded, such as lakes and waters and important plant-protection areas; these areas lie in open gardens, and it is difficult to prevent parents and children from intruding unintentionally.
Existing human-body detection methods include infrared thermal imaging, image processing, and deep learning. The heat-radiation characteristics of the human body differ greatly from those of background objects, so the corresponding infrared thermal imaging regions and the environment show different gray-level appearances; and because infrared thermal imaging is little affected by visible light, the human body can be detected both by day and at night. However, infrared thermal imaging also has many disadvantages: image edges are blurry and features are hard to extract, and interfering targets such as animals and street lamps resemble the human target and have high brightness, so they are easily confused with the human body. In addition, infrared thermal imaging is limited by imaging technology and shooting equipment and is not well suited to intelligent garden scenes.
Disclosure of Invention
The traditional early warning mode is implemented manually, which is time-consuming, labor-intensive, and causes unnecessary waste of human resources. Aiming at these problems, the invention provides an intelligent early warning method for the garden perimeter based on edge group calculation, used to prevent accidents in gardens and to meet the safety and intelligence requirements of smart gardens.
In order to achieve the purpose, the invention adopts the following technical scheme:
an intelligent garden perimeter early warning method based on edge group calculation, specifically comprising the following steps:
S1, deploying M cameras around the designated danger area, where the M cameras satisfy a discrete distribution, and anchoring the danger area for each of the M cameras; at the same time, the camera that captures the danger area with complete information and uniform distribution is selected as the master camera, and the rest serve as slave cameras.
S2, improving Yolov5 by adding a dual attention mechanism: a pixel attention mechanism is introduced in front of the Yolov5 network to attenuate the pixel values of non-target areas in the intelligent garden scene and highlight the pixels of the target area; a spatial adaptive attention mechanism for the intelligent garden scene is introduced behind the backbone network of Yolov5 to extract the dynamic foreground and detect face features. Whether enhancement judgment is needed is then determined from two set detection-rate thresholds: if the detection rate is higher than 0.9, the target is judged to be a person and an early warning is issued; if it is lower than 0.4, the target is judged to be non-human; if it falls between the two thresholds, the other cameras are called to perform enhancement judgment (see the sketch after step S4 below).
S3, obtaining the mapping of the face pixel coordinates of the master camera into each slave camera according to the spatial relationship between the master camera and each slave camera, so as to determine the camera candidate set for enhancement judgment.
S4, the cameras in the candidate set respectively performing face detection on the selected regions, making enhancement judgment according to the weighted score-and-vote system, and determining whether to issue an early warning.
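A minimal Python sketch of the two-threshold decision in S2, assuming p is the detection probability output by the dual-attention Yolov5 for the master camera's image; the thresholds 0.9 and 0.4 are the ones named above:

def decide(p):
    # p: detection probability from the dual-attention Yolov5
    if p > 0.9:
        return "person: issue early warning"
    if p < 0.4:
        return "non-person: no warning"
    return "suspected: call slave cameras for enhancement judgment"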
Further, the anchoring of the danger area in each camera in S1 specifically uses two OpenCV tools, cv2.rectangle() and cv2.putText(), with the following usage:
cv2.rectangle(img, (a, b), (a+w, b+h), (B, G, R), thickness)
cv2.putText(img, text, (a, b), font, size, (B, G, R), thickness)
where img is the picture taken by the camera, (a, b) are the coordinates of the top-left corner of the manually anchored danger zone, (a+w, b+h) are the coordinates of its bottom-right corner, (B, G, R) is the color of the drawn rectangle or text, thickness is the line width, text is the added label text, font is the font, and size is the font size.
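As an illustration, a minimal OpenCV sketch of this anchoring step, assuming a frame grabbed from one camera and a hand-picked rectangle (a, b, w, h); the file names and coordinates are placeholders, not values from the invention:

import cv2

img = cv2.imread("camera_frame.jpg")              # picture taken by the camera
a, b, w, h = 120, 80, 300, 200                    # manually chosen danger-zone rectangle
color, thickness = (0, 0, 255), 2                 # (B, G, R) color and line width

cv2.rectangle(img, (a, b), (a + w, b + h), color, thickness)
cv2.putText(img, "danger zone", (a, b - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.8, color, thickness)
cv2.imwrite("camera_frame_anchored.jpg", img)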
Furthermore, because there are many green interferents in the smart garden scene, the background area is complex, and the target occupies only a small part of the whole image during face detection, preprocessing is performed with a pixel attention mechanism: it is introduced in front of the Yolov5 network to attenuate the pixel values of the non-target area of the original image. The generator of the pixel attention mechanism outputs a generated foreground map; its structure consists of an encoder and a decoder. The encoder is composed of CPL network blocks, each comprising a convolution layer, a max-pooling layer, and a Leaky ReLU activation function; the decoder is composed of TBL and CTBM network blocks, where a TBL consists of a transposed convolution, BN, and Leaky ReLU, and a CTBM consists of a concatenation layer, a transposed convolution, BN, and a Mish activation function.
The discriminator of the pixel attention mechanism receives two inputs: one is the concatenation of the original image and the real foreground image, the other is the concatenation of the original image and the generated foreground image; its output is the discrimination foreground image. The discriminator takes a CM network block, composed of a convolution layer and a Mish activation function, as its input block; CBM blocks, each composed of a convolution layer, BN, and a Mish activation function, form the intermediate modules; at the output, a fully connected layer reduces the feature-map dimension to 1, and finally a Sigmoid function outputs the discrimination result.
A green threshold is defined, and pixel values within the threshold range are set to 0; the threshold range is:
green_min = [0, 150, 0]
green_max = [100, 255, 100]
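A minimal sketch of this suppression step, assuming the image is loaded by OpenCV; since the first and third bounds are equal, the [0,150,0]..[100,255,100] range reads the same in RGB and BGR channel order:

import cv2
import numpy as np

img = cv2.imread("camera_frame.jpg")
green_min = np.array([0, 150, 0])                 # lower bound of the green range
green_max = np.array([100, 255, 100])             # upper bound of the green range

mask = cv2.inRange(img, green_min, green_max)     # 255 where a pixel falls in the range
img[mask > 0] = 0                                 # set those pixel values to 0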
When the discriminator of the pixel attention mechanism produces the discrimination foreground image, the output must be converted to a gray image and then processed with a threshold switching function to improve the quality of the generated foreground image.
The graying function is:
Gray(R, G, B) = (R×0.299 + G×0.587 + B×0.144)/255
The switching function is:
Img = Switch(R, G, B) = 1 when Gray(R, G, B) > α
Img = Switch(R, G, B) = ε when Gray(R, G, B) ≤ α
where α is the threshold and ε is the forgetting factor used to prune the pixel values of the non-target area.
After the final foreground picture Img is obtained, each channel of the original image is multiplied by the foreground image to obtain the output x_I of the pixel attention mechanism, namely:
x_I[i, j, k] = x[i, j, k] · Img[i, j, k]
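A minimal sketch chaining the graying function, the threshold switch, and the channel-wise multiplication, assuming fg is the discrimination foreground image (H×W×3, uint8) and x the original image as a float array; α and ε are free parameters chosen here only for illustration, and the blue coefficient is kept at 0.144 as written above (the common luma constant is 0.114):

import numpy as np

def pixel_attention_output(x, fg, alpha=0.3, eps=0.05):
    r, g, b = (fg[..., c].astype(float) for c in range(3))
    gray = (r * 0.299 + g * 0.587 + b * 0.144) / 255.0   # graying function
    img_mask = np.where(gray > alpha, 1.0, eps)          # threshold switching function
    return x * img_mask[..., None]                       # multiply every channel by Img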
Further, a spatial adaptive attention mechanism is introduced into the backbone network of Yolov5. The spatial adaptive attention mechanism described in the invention is basically the mask-adding mechanism of a soft attention mechanism; the difference is that the attention mechanism in the invention borrows the idea of residual networks and considers the influence of different deformations on the features. A mask is not only added according to the information of the current network layer; the information of the previous layer is also passed on after the deformations of the different cameras are superimposed. This prevents the problem that the network layers cannot be stacked deeply because too little information survives the mask. The spatial adaptive attention mask provided by the invention can be regarded as a weight on each feature element; by finding the corresponding attention weight for each feature element, an attention mechanism over the spatial domain and the channel domain is formed simultaneously.
The spatial adaptive attention mechanism provided by the invention is:
H_i,d(x) = [10·V_i + M_i,c(x)] × F_i,c(x)
V_i = [arctan(W_imax / H_i) / arctan(W_imin / H_i) − 1]^2
where x is the feature vector of the image captured by the camera, H_i,d(x) is the output of the attention module, V_i is the deformation degree, W_imax is the actual distance from the camera to the real-scene point imaged at the top edge of the captured image, W_imin is the actual distance from the camera to the real-scene point imaged at the bottom edge, and H_i is the height at which the camera is mounted; M_i,c(x) is the attention parameter of the soft mask, and F_i,c(x) is the picture tensor feature of the previous layer. The F function yields different attention-domain results with different activation functions; in the channel domain the invention uses an activation function that takes the mean of the picture feature tensor.
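A minimal sketch of the deformation degree V_i from the formula above, assuming W_imax and W_imin are the ground distances (e.g., in meters) from the camera to the scene points imaged at the top and bottom edges and H_i is the mounting height; the numbers are illustrative only:

import math

def deformation_degree(w_max, w_min, h):
    # V_i = [arctan(W_imax/H_i) / arctan(W_imin/H_i) - 1]^2
    return (math.atan(w_max / h) / math.atan(w_min / h) - 1.0) ** 2

v_i = deformation_degree(w_max=40.0, w_min=5.0, h=6.0)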
Further, in S3, according to the spatial relationship between the master camera and each slave camera, the mapping of the face pixel coordinates detected in the image captured by the master camera into each slave camera is calculated by the following formula, and the camera candidate set for enhancement judgment is selected:
[u_m v_m 1]^T = F_m K F_0^-1 [u v 1]^T
where [u v] are the face pixel coordinates detected in the image captured by the master camera, [u_m v_m] is the mapping result of [u v] in the image captured by the slave camera, F_0 is the parameter matrix of the master camera, F_m is the parameter matrix of the m-th slave camera, and K is the spatial transformation matrix from the master camera to the slave camera.
The new pixel coordinates of the detected face pixels in each slave camera are calculated by the above formula; if the new pixel coordinates lie within the danger area defined in S1 for that slave camera, the slave camera is added to the candidate set, as sketched below.
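A minimal sketch of the mapping and the danger-area test, assuming F0, Fm, and K are given as 3×3 NumPy arrays; the rectangle test reuses the (a, b, w, h) anchoring convention from S1:

import numpy as np

def map_to_slave(u, v, F0, Fm, K):
    # [u_m v_m 1]^T = Fm · K · F0^-1 · [u v 1]^T
    q = Fm @ K @ np.linalg.inv(F0) @ np.array([u, v, 1.0])
    q /= q[2]                                     # renormalize homogeneous coordinate
    return q[0], q[1]

def in_danger_zone(u, v, a, b, w, h):
    # true if the mapped pixel lies inside the anchored rectangle
    return a <= u <= a + w and b <= v <= b + h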
Further, in S4, the cameras in the candidate set each perform face detection on the selected area, enhancement judgment is made according to the weighted score-and-vote system, and whether to issue an early warning is determined. The score S_j of the face detected in the image captured by the j-th camera in the candidate set is calculated by:
S_j = (V_Jmax / V_j) · P_j
where V_Jmax is the maximum deformation degree among the J cameras in the candidate set, V_j is the deformation degree of the j-th camera, and P_j is the probability output by the dual-attention Yolov5 when face detection is performed on the image captured by the j-th camera.
After the scores of all cameras are calculated, the mean S_mea of the scores of the J cameras in the candidate set is computed; if S_mea satisfies the following condition, the suspected face image is judged to be a person, otherwise it is judged to be a foreign object:
S_mea > V_Jmax / (5·V_Jmin)
where V_Jmin is the minimum deformation degree among the J cameras in the candidate set.
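A minimal sketch of the weighted score-and-vote rule, assuming detections pairs each candidate camera's deformation degree V_j with its Yolov5 output probability P_j; the numbers are made up:

def vote(detections):
    v_max = max(v for v, _ in detections)
    v_min = min(v for v, _ in detections)
    scores = [(v_max / v) * p for v, p in detections]    # S_j = (V_Jmax / V_j) * P_j
    s_mea = sum(scores) / len(scores)
    return s_mea > v_max / (5.0 * v_min)                 # True -> person, issue warning

alarm = vote([(0.8, 0.72), (1.5, 0.55), (2.1, 0.48)])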
The beneficial effects of the invention are as follows: compared with the traditional manual early warning method, the method uses deep learning to detect human intrusion into the danger area, saving considerable manpower, material, and financial resources and better meeting the modernization and intelligence goals of smart gardens; compared with a generic neural-network detection scheme, the method adopts the swarm-intelligence idea and performs hierarchical detection with master and slave cameras, which greatly improves the accuracy, real-time performance, and efficiency of detection, detects intruders more quickly, better prevents accidents, effectively reduces the accident rate in the intelligent garden, and has broad application prospects.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of an enhanced judgment according to the present invention;
FIG. 3 is a structure diagram of the pixel attention mechanism generator;
FIG. 4 is a structure diagram of the pixel attention mechanism discriminator.
Detailed Description
The invention provides an intelligent early warning method for a garden perimeter based on edge group calculation. Its basic steps are anchoring the danger area, master-camera detection, slave-camera enhanced detection, score calculation, and judgment and early warning. As shown in fig. 1, the method specifically comprises the following steps:
Step 1: deploy M cameras around the defined danger area so that they satisfy a discrete distribution, select a master camera, and anchor the danger area in each camera. Anchoring uses two OpenCV tools, cv2.rectangle() and cv2.putText(), with the following usage:
cv2.rectangle(img, (a, b), (a+w, b+h), (B, G, R), thickness)
cv2.putText(img, text, (a, b), font, size, (B, G, R), thickness).
Step 2: improve Yolov5 by adding a dual attention mechanism. A pixel attention mechanism is introduced in front of the network: because there are many green interferents in the smart garden scene, the background area is complex, and the target occupies only a small part of the whole image during face detection, preprocessing with the pixel attention mechanism attenuates the pixel values of the non-target area of the original image.
The generator of the pixel attention mechanism outputs a generated foreground map; its structure consists of an encoder and a decoder. The encoder is composed of CPL network blocks, each comprising a convolution layer, a max-pooling layer, and a Leaky ReLU activation function; the decoder is composed of TBL and CTBM network blocks, where a TBL consists of a transposed convolution, BN, and Leaky ReLU, and a CTBM consists of a concatenation layer, a transposed convolution, BN, and a Mish activation function. The generator structure is shown in fig. 3.
The discriminator of the pixel attention mechanism receives two inputs: one is the concatenation of the original image and the real foreground image, the other is the concatenation of the original image and the generated foreground image; its output is the discrimination foreground image. The discriminator takes a CM network block, composed of a convolution layer and a Mish activation function, as its input block; CBM blocks, each composed of a convolution layer, BN, and a Mish activation function, form the intermediate modules; at the output, a fully connected layer reduces the feature-map dimension to 1, and finally a Sigmoid function outputs the discrimination result. The discriminator structure is shown in fig. 4.
A green threshold is defined, and pixel values within the threshold range are set to 0; the threshold range is:
green_min = [0, 150, 0] (1)
green_max = [100, 255, 100] (2)
When the discriminator of the pixel attention mechanism produces the discrimination foreground image, the original output must be converted to a gray image and then processed with the threshold switching function, improving the quality of the discrimination foreground image and yielding the final foreground image. The graying function is:
Gray(R, G, B) = (R×0.299 + G×0.587 + B×0.144)/255 (3)
The switching function is:
Img = Switch(R, G, B) = 1 when Gray(R, G, B) > α
Img = Switch(R, G, B) = ε when Gray(R, G, B) ≤ α (4)
where α is the threshold and ε is the forgetting factor used to prune the pixel values of the non-target area.
After the final foreground image Img is obtained, each channel of the original image is multiplied by the final foreground image to obtain the output x_I of the pixel attention mechanism, namely:
x_I[i, j, k] = x[i, j, k] · Img[i, j, k] (5)
A spatial adaptive attention mechanism is introduced into the backbone network of Yolov5. The spatial adaptive attention mechanism described in the invention is basically the mask-adding mechanism of a soft attention mechanism; the difference is that the attention mechanism in the invention borrows the idea of residual networks and considers the influence of different deformations on the features. A mask is not only added according to the information of the current network layer; the information of the previous layer is also passed on after the deformations of the different cameras are superimposed. This prevents the problem that the network layers cannot be stacked deeply because too little information survives the mask. The spatial adaptive attention mask provided by the invention can be regarded as a weight on each feature element; by finding the corresponding attention weight for each feature element, an attention mechanism over the spatial domain and the channel domain is formed simultaneously.
The spatial adaptive attention mechanism provided by the invention is:
H_i,d(x) = [10·V_i + M_i,c(x)] × F_i,c(x) (6)
V_i = [arctan(W_imax / H_i) / arctan(W_imin / H_i) − 1]^2 (7)
where H_i,d(x) is the output of the attention module, V_i is the deformation degree, W_imax is the actual distance from the i-th camera to the real-scene point imaged at the top edge of its image, W_imin is the actual distance from the i-th camera to the real-scene point imaged at the bottom edge, and H_i is the mounting height of the i-th camera; M_i,c(x) is the attention parameter of the soft mask, and F_i,c(x) is the picture tensor feature of the previous layer. The F function yields different attention-domain results with different activation functions; in the channel domain the invention uses an activation function that takes the mean of the picture feature tensor.
Step 3: according to the spatial relationship between the master camera and each slave camera, the mapping of the suspected face pixel coordinates of the master camera into each slave camera is calculated by the following formula, and the camera candidate set for enhancement judgment is selected:
[u_m v_m 1]^T = F_m K F_0^-1 [u v 1]^T (8)
where [u v] are the face pixel coordinates detected in the image captured by the master camera, and [u_m v_m] is the mapping result of [u v] in the image captured by the slave camera. F_0 is the parameter matrix of the master camera, in which f_x = 1/d_x and f_y = 1/d_y, where d_x and d_y are the unit-pixel sizes in the x and y directions, and (c_x, c_y) is the center of the detection frame output by the dual-attention Yolov5 for the master camera's image, expressed in the master camera's pixel coordinate system; F_m is the parameter matrix of the m-th slave camera; K is the spatial transformation matrix from the master camera to the slave cameras, whose entries x, y, z, w are the terms of a quaternion representing the angular transformation between the master and slave cameras.
After the new pixel coordinates of the detected face pixels are calculated in each slave camera by the above formula, whether they lie within the danger area defined in step 1 is judged; if they lie within a slave camera's danger area, that slave camera is added to the candidate set, as shown in fig. 2. A sketch of assembling the parameter matrix from these quantities follows.
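A minimal sketch of assembling a camera parameter matrix from the quantities named above, using the pinhole-style form implied by f_x = 1/d_x and f_y = 1/d_y and taking (c_x, c_y) as the detection-frame center described above; all numeric values are placeholders, not values from the invention:

import numpy as np

dx, dy = 0.01, 0.01            # unit-pixel sizes in the x and y directions
cx, cy = 640.0, 360.0          # detection-frame center in the pixel coordinate system
F0 = np.array([[1.0 / dx, 0.0,      cx],
               [0.0,      1.0 / dy, cy],
               [0.0,      0.0,      1.0]])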
Step 4: the cameras in the candidate set each perform face detection on the selected area, enhancement judgment is made according to the weighted score-and-vote system, and whether to issue an early warning is determined. If there are J cameras in the candidate set, the score S_j of the face detected by each camera is calculated by:
S_j = (V_Jmax / V_j) · P_j (9)
where V_Jmax is the maximum deformation degree among the J cameras in the candidate set, V_j is the deformation degree of the j-th camera, and P_j is the probability output by the dual-attention Yolov5 when face detection is performed on the image captured by the j-th camera.
After the scores of all cameras are calculated, the mean S_mea of the scores is computed; if S_mea satisfies the following condition, the suspected face image is judged to be a person, otherwise it is judged to be a foreign object:
S_mea > V_Jmax / (5·V_Jmin) (10)
where V_Jmin is the minimum deformation degree among the J cameras in the candidate set.
The traditional early warning mode is implemented manually, which is time-consuming and labor-intensive and causes unnecessary waste of human resources. In this method, I cameras are first deployed around a danger area in the garden scene, one master camera is selected, and the danger area captured by each camera is manually anchored. The images are then input into a Yolov5 network with a dual attention mechanism: a pixel attention mechanism added in front of the network attenuates background pixel values, and a spatial adaptive attention mechanism introduced into the backbone network enhances face features with a small deformation degree. If the detection rate is greater than 0.9, the target is judged to be a person and an early warning is issued; a detection rate between 0.4 and 0.9 is judged as suspected and enhancement judgment by the slave cameras is introduced; no early warning is issued when the detection rate is less than 0.4. During enhancement judgment, the mapping of the suspected face pixels of the master camera into each slave camera is first obtained, and whether the mapped coordinates lie within that slave camera's danger area is judged; if so, the slave camera is included in the candidate set. Finally, each camera in the candidate set performs face detection, a weighted score-and-vote system is used for scoring, and an early warning is issued when preset conditions are met, thereby realizing perimeter early warning of the danger area. This hierarchical early warning system based on edge calculation is more modern, convenient, and intelligent.
It should be noted that the above description of the embodiments is only for the purpose of assisting understanding of the method of the present application and the core idea thereof, and that those skilled in the art can make several improvements and modifications to the present application without departing from the principle of the present application, and these improvements and modifications are also within the protection scope of the claims of the present application.
Claims (8)
1. A face detection method for intelligent garden scenes, characterized in that I cameras are arranged in an intelligent garden and face detection is performed on the image captured by each camera using a Yolov5 with a dual attention mechanism, wherein the Yolov5 with the dual attention mechanism introduces a pixel attention mechanism module before the input end of Yolov5 and a spatial adaptive attention mechanism module after the backbone network of Yolov5;
inputting the image to be detected into the generator in the pixel attention mechanism module, the generator outputting a generated foreground image; the generator is a cascade of an encoder and a decoder, the encoder being a cascade of n CPL network blocks, each CPL network block composed of a convolution layer, a max-pooling layer, and a Leaky ReLU activation function; the decoder is a cascade of 1 TBL network block and n−1 CTBM network blocks, the TBL network block being composed of a transposed convolution layer, a normalization (BN) layer, and a Leaky ReLU activation function, and each CTBM network block being composed of a concatenation (splicing) layer, a transposed convolution layer, a BN layer, and a Mish activation function; the 1st, 2nd, …, n-th CPL network blocks are respectively connected with the 1st, 2nd, …, (n−1)th CTBM network blocks and the TBL network block;
the image to be detected is concatenated separately with the real foreground image and with the generated foreground image, forming the two inputs of the discriminator in the pixel attention mechanism module, the output of the discriminator being the discrimination foreground image; the discriminator is a cascade of a CM network block, m CBM network blocks, a fully connected layer, and a Sigmoid function layer;
the discrimination foreground image is converted to gray levels and then processed with the threshold switching function to obtain the final foreground image;
multiplying each channel of the image to be detected by the final foreground image to obtain the output of the pixel attention mechanism module;
the spatial adaptive attention mechanism module adopts a mask-adding mechanism: a mask is added according to the information of the current network layer, and at the same time the information of the previous layer is passed on after the deformation of the camera capturing the image to be detected is superimposed; the spatial adaptive attention mechanism corresponding to the image captured by the i-th camera is:
H_i,d(x) = [10·V_i + M_i,c(x)] × F_i,c(x)
V_i = [arctan(W_imax / H_i) / arctan(W_imin / H_i) − 1]^2
where x is the feature vector of the image captured by the camera, H_i,d(x) is the output of the spatial adaptive attention mechanism module for that image, V_i is the deformation degree of the camera, W_imax is the actual distance from the camera to the real-scene point imaged at the top edge of the captured image, W_imin is the actual distance from the camera to the real-scene point imaged at the bottom edge, H_i is the height at which the camera is mounted, M_i,c(x) is the attention parameter of the soft mask, and F_i,c(x) is the picture tensor feature of the previous layer.
2. The method as claimed in claim 1, wherein the real foreground image is generated by defining a green threshold and setting to 0 the pixel values of the pixel points in the image to be detected that fall within the threshold range, the lower limit of the range being green_min = [0, 150, 0] and the upper limit green_max = [100, 255, 100].
3. The method of claim 1, wherein the graying function is Gray(R, G, B) = (R×0.299 + G×0.587 + B×0.144)/255, and the threshold switching function is: Switch(R, G, B) = 1 when Gray(R, G, B) > α, and Switch(R, G, B) = ε when Gray(R, G, B) ≤ α, where (R, G, B) are the color component values of a pixel point in the discrimination foreground image, α is the threshold, and ε is the forgetting factor.
4. An intelligent garden perimeter early warning method based on edge population calculation, characterized by comprising the following steps:
S1, deploying M cameras around the designated danger area of the garden, where the M cameras satisfy a discrete distribution, and anchoring the danger area for the M cameras; meanwhile, selecting as the master camera the camera that captures the danger area with complete information and uniform distribution, the remaining cameras serving as slave cameras;
S2, performing face detection on the image captured by the master camera based on the face detection method of any one of claims 1 to 3; if the detection result is higher than a first threshold, judging it to be a person and issuing an early warning; if it is lower than a second threshold, judging it to be non-person; if it lies between the first threshold and the second threshold, executing S3; wherein the first threshold is greater than the second threshold;
S3, mapping the pixel coordinates of the face detected in the image captured by the master camera into the image captured by each slave camera according to the spatial relationship between the master camera and each slave camera, and adding a slave camera to the candidate set if the mapping result in its image lies within the danger area;
S4, performing face detection on the images captured by the cameras in the candidate set based on the face detection method of any one of claims 1 to 3, making enhancement judgment with the weighted score-and-vote system, and determining whether to issue an early warning.
5. The intelligent garden perimeter early warning method based on edge population calculation as claimed in claim 4, wherein in S1 two OpenCV tools, cv2.rectangle() and cv2.putText(), are used for anchoring the danger area in each camera, with the following usage:
cv2.rectangle(img, (a, b), (a+w, b+h), (B, G, R), thickness)
cv2.putText(img, text, (a, b), font, size, (B, G, R), thickness)
where img is the picture taken by the camera, (a, b) are the coordinates of the top-left corner of the manually anchored danger zone, (a+w, b+h) are the coordinates of its bottom-right corner, (B, G, R) is the color of the drawn rectangle or text, thickness is the line width, text is the added label text, font is the font, and size is the font size.
6. The intelligent garden perimeter early warning method based on edge population calculation as claimed in claim 4, wherein in S3, according to the spatial relationship between the master camera and each slave camera, the face pixel coordinates detected in the image captured by the master camera are mapped into the image captured by each slave camera by the following formula:
[u_m v_m 1]^T = F_m K F_0^-1 [u v 1]^T
where [u v] are the face pixel coordinates detected in the image captured by the master camera, [u_m v_m] is the mapping result of [u v] in the image captured by the slave camera, F_0 is the parameter matrix of the master camera, F_m is the parameter matrix of the m-th slave camera, and K is the spatial transformation matrix from the master camera to the slave camera.
7. The intelligent garden perimeter early warning method based on edge population calculation as claimed in claim 4, wherein in S4 the score S_j of the face detected in the image captured by the j-th camera in the candidate set is calculated by:
S_j = (V_Jmax / V_j) · P_j
where V_Jmax is the maximum deformation degree among the J cameras in the candidate set, V_j is the deformation degree of the j-th camera, and P_j is the probability output by the dual-attention Yolov5 when face detection is performed on the image captured by the j-th camera.
8. The intelligent garden perimeter early warning method based on edge population calculation as claimed in claim 7, wherein the mean S_mea of the scores of the faces detected by the J cameras in the candidate set is calculated; if S_mea satisfies the following condition, the detection result of the image captured by the master camera is judged to be a person and early warning is performed, otherwise it is judged to be non-person:
S_mea > V_Jmax / (5·V_Jmin)
where V_Jmin is the minimum deformation degree among the J cameras in the candidate set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110911265.7A CN113361502B (en) | 2021-08-10 | 2021-08-10 | Garden perimeter intelligent early warning method based on edge group calculation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110911265.7A CN113361502B (en) | 2021-08-10 | 2021-08-10 | Garden perimeter intelligent early warning method based on edge group calculation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113361502A CN113361502A (en) | 2021-09-07 |
CN113361502B (en) | 2021-11-02
Family
ID=77540772
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110911265.7A Active CN113361502B (en) | 2021-08-10 | 2021-08-10 | Garden perimeter intelligent early warning method based on edge group calculation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113361502B (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104463191A (en) * | 2014-10-30 | 2015-03-25 | 华南理工大学 | Robot visual processing method based on attention mechanism |
CN111860393A (en) * | 2020-07-28 | 2020-10-30 | 浙江工业大学 | Face detection and recognition method on security system |
CN112396115B (en) * | 2020-11-23 | 2023-12-22 | 平安科技(深圳)有限公司 | Attention mechanism-based target detection method and device and computer equipment |
CN112149761B (en) * | 2020-11-24 | 2021-06-22 | 江苏电力信息技术有限公司 | Electric power intelligent construction site violation detection method based on YOLOv4 improved algorithm |
CN113361326B (en) * | 2021-04-30 | 2022-08-05 | 国能浙江宁海发电有限公司 | Wisdom power plant management and control system based on computer vision target detection |
2021-08-10: application CN202110911265.7A granted as patent CN113361502B (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN113361502A (en) | 2021-09-07 |
Similar Documents
Publication | Title
---|---
CN110688987B | Pedestrian position detection and tracking method and system
CN110543846B | Multi-pose face image obverse method based on generation countermeasure network
CN113537099B | Dynamic detection method for fire smoke in highway tunnel
CN109919026B | Surface unmanned ship local path planning method
CN114399734A | Forest fire early warning method based on visual information
CN110956158A | Pedestrian shielding re-identification method based on teacher and student learning frame
CN114387195A | Infrared image and visible light image fusion method based on non-global pre-enhancement
CN111582074A | Monitoring video leaf occlusion detection method based on scene depth information perception
CN113128481A | Face living body detection method, device, equipment and storage medium
CN106600613B | Improvement LBP infrared target detection method based on embedded gpu
CN115331141A | High-altitude smoke and fire detection method based on improved YOLO v5
CN113361502B | Garden perimeter intelligent early warning method based on edge group calculation
CN114626439A | Transmission line peripheral smoke and fire detection method based on improved YOLOv4
WO2022044369A1 | Machine learning device and image processing device
CN117523437A | Real-time risk identification method for substation near-electricity operation site
CN113158963A | High-altitude parabolic detection method and device
Foedisch et al. | Adaptive road detection through continuous environment learning
CN114387484B | Improved mask wearing detection method and system based on yolov4
El Rai et al. | Integrating deep learning with active contour models in remote sensing image segmentation
CN115578664A | Video monitoring-based emergency event judgment method and device
CN115620121A | Photoelectric target high-precision detection method based on digital twinning
CN108898573A | Infrared small target rapid extracting method based on multi-direction annular gradient method
CN110956153B | Traffic signal lamp detection method and system for unmanned vehicle
CN114332682A | Marine panoramic defogging target identification method
CN118072146B | Unmanned aerial vehicle aerial photography small target detection method based on multi-level feature fusion
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |