CN115511879A - Detection system and method for detecting an uncovered muck truck based on computer vision - Google Patents

Detection system and method for detecting an uncovered muck truck based on computer vision

Info

Publication number
CN115511879A
CN115511879A
Authority
CN
China
Prior art keywords
muck
car
detection
module
hopper
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211382753.4A
Other languages
Chinese (zh)
Inventor
郑艳伟
高杨
孙钦平
于东晓
马嘉林
崔方剑
张春雨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Hisense Information Technology Co ltd
Shandong University
Original Assignee
Qingdao Hisense Information Technology Co ltd
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Hisense Information Technology Co ltd, Shandong University filed Critical Qingdao Hisense Information Technology Co ltd
Priority to CN202211382753.4A
Publication of CN115511879A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a detection system and method, based on computer vision, for detecting an uncovered muck truck. The detection system comprises a dual-model training module, a muck truck detection module, a muck truck hopper area detection module, an uncovered-truck judgment module, a risk reporting and recording module, and a log module. The muck truck detection model locates muck trucks in an image; the hopper area detection model then subdivides the hopper into its parts, whose areas are used to compute a muck uncovered rate; if this rate exceeds a given threshold, the truck is judged to be uncovered. By combining object detection with uncovered-truck inspection, splitting the task across two models, subdividing the hopper, and quantifying the judgment criterion with the muck uncovered rate, the invention improves recognition accuracy and greatly reduces the false-alarm rate, enabling effective supervision of muck trucks and reducing environmental pollution and safety hazards.

Description

Detection system and method for detecting an uncovered muck truck based on computer vision
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a detection system and method for detecting an uncovered muck truck based on computer vision.
Background
To maintain and improve the urban living environment and reduce dust pollution during muck transport, relevant departments have adopted measures that strengthen management and increase penalties; however, because factors such as a muck truck's running time and route cannot be controlled, supervision of muck trucks remains problematic.
Detecting uncovered muck trucks manually is unstable and subjective, makes all-angle, around-the-clock monitoring difficult, and consumes considerable manpower and material resources.
At present, with the rapid development of computer vision and deep learning, object detection is increasingly applied in traditional fields; comprehensive, multi-level learning systems built from convolutional neural networks, deep belief networks and other neural networks allow the accuracy, efficiency and convenience of artificial intelligence to be exploited. Existing approaches to detecting uncovered muck trucks feed an image into a single model that directly outputs whether the truck is covered, and therefore suffer a high false-alarm rate.
Disclosure of Invention
To solve the above technical problems, the invention provides a detection system and method for detecting an uncovered muck truck based on computer vision, with the aims of improving recognition accuracy, reducing the false-alarm rate, strengthening supervision of muck trucks, and reducing environmental pollution and safety hazards.
To achieve this purpose, the technical scheme of the invention is as follows:
a muck car non-covering detection system based on computer vision comprises a double-model training module, a muck car detection module, a muck car hopper area detection module, a muck car non-covering judgment module, a risk reporting and recording module and a log module;
The dual-model training module is responsible for training the muck truck detection model and the muck truck hopper area detection model;
The video frame acquisition module is responsible for connecting to the corresponding camera groups through a polling algorithm and for motion detection on the video streams of the different cameras; if a video picture contains motion, frames are extracted at a fixed time interval and sent to an inference queue;
The muck truck detection module contains a muck truck detection model based on an improved YOLOv5x model; it detects the various vehicles in a picture, frames the outline of each detected muck truck with a rectangular box, crops these boxes from the original image, and sends them to the muck truck hopper area detection module;
The muck truck hopper area detection module contains a muck truck hopper area detection model based on an improved YOLOv5s model; it detects the hopper part of a muck truck, yielding a muck area, an empty-hopper area and a cover area;
The uncovered-truck judgment module is responsible for computing the muck uncovered rate from the detection result for the hopper, so as to judge whether the muck truck is covered;
the risk reporting and recording module is responsible for uploading risk information and storing risk pictures;
The log module is responsible for recording errors and warnings during system operation, facilitating later maintenance and modification.
A detection method for detecting an uncovered muck truck based on computer vision, using the above detection system, comprises the following steps:
Model training stage: the video frame acquisition module collects pictures from a video stream at a fixed time interval, performs initial labeling, and produces an initial data set; the dual-model training module trains the muck truck detection model on this initial data set; using the trained model, the rectangular boxes whose detected class is muck truck are cropped from the pictures and stored separately, forming the data set for the muck truck hopper area detection model; the dual-model training module then trains the muck truck hopper area detection model on that data set;
Detection stage: the video frame acquisition module collects pictures from the video stream under inspection at a fixed time interval and feeds them into the trained muck truck detection module; muck trucks appearing in a picture are detected by the muck truck detection model and outlined with rectangular boxes; each outlined muck truck is cropped from the original picture and passed to the hopper area detection module, whose model detects the truck's hopper area and divides it into three regions, corresponding to three detection classes: the muck part, the empty-hopper part, and the cover part; the uncovered-truck judgment module computes the areas of the three regions from the hopper detection result to obtain the truck's muck uncovered rate; if this rate exceeds a set threshold, the truck is judged uncovered, otherwise it is judged covered; the risk reporting and recording module stores the picture information of uncovered muck trucks together with the hopper detection result.
In this scheme, during the model training stage, the muck truck detection model is trained as follows:
(1) Extract frames at a fixed time interval from cameras on road sections with heavy traffic, store the video frames, and screen out the pictures containing muck trucks;
(2) Label the vehicles in each picture to obtain m reference frames σ_i = (x_i, y_i, w_i, h_i, l_i), i = 1, 2, …, m, whose five components are the abscissa and ordinate of the reference frame's upper-left corner, its width, its height, and its label; label l_i = 0 denotes a muck truck, l_i = 1 a large truck, l_i = 2 a van, l_i = 3 a small open truck, l_i = 4 a car, l_i = 5 a bus, l_i = 6 a tank truck or concrete truck, and l_i = 7 other vehicles, including excavators, flatbeds and trucks loaded with animals;
(3) Translate, rotate and scale the labeled pictures to enlarge the data set;
(4) Apply Mosaic data enhancement;
(5) Compute adaptive anchor boxes;
(6) Apply adaptive picture scaling;
(7) Train with the YOLOv5x network, mark each detected vehicle target with a solid rectangular box, construct a binary cross-entropy loss consisting of bounding-box regression loss, confidence prediction loss and class prediction loss, and back-propagate.
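The reference frames above are stored as upper-left corner, width, height and label. As a minimal illustration (not part of the patent), converting such an annotation into the normalized, center-based label line commonly used for YOLOv5 training could look like this; image size and example values are assumptions:

```python
def to_yolo_label(x, y, w, h, label, img_w, img_h):
    """Convert a top-left (x, y, w, h) reference frame into a
    YOLO-style normalized "class cx cy w h" label line."""
    cx = (x + w / 2) / img_w   # box center, normalized by image width
    cy = (y + h / 2) / img_h   # box center, normalized by image height
    return f"{label} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}"

# Example: a muck truck (label 0) at (100, 200), sized 300x150, in a 1920x1080 frame
print(to_yolo_label(100, 200, 300, 150, 0, 1920, 1080))
```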
In this scheme, during the model training stage, the muck truck hopper area detection model is trained as follows:
(1) Using the trained muck truck detection model, crop the rectangular boxes whose detected class is muck truck from the pictures and store them separately, forming the data set for the hopper area detection model;
(2) Label the hopper part in each picture to obtain n reference frames ρ_i = (x_i, y_i, w_i, h_i, t_i), i = 1, 2, …, n, whose five components are the abscissa and ordinate of the reference frame's upper-left corner, its width, its height, and its label; label t_i = 0 denotes the muck part, t_i = 1 the cover part, and t_i = 2 the empty-hopper part;
(3) Translate, rotate and scale the labeled pictures to enlarge the data set;
(4) Apply Mosaic data enhancement;
(5) Compute adaptive anchor boxes;
(6) Apply adaptive picture scaling;
(7) Train with the YOLOv5s network, mark the detected muck part, cover part and empty-hopper part with solid rectangular boxes, construct a binary cross-entropy loss consisting of bounding-box regression loss, confidence prediction loss and class prediction loss, and back-propagate.
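The three-part composite loss named in step (7) can be sketched as follows. This is a toy illustration, not YOLOv5's actual implementation: the box term here is a plain squared error rather than the CIoU loss YOLOv5 uses, and the weights are illustrative:

```python
import math

def bce(p, y):
    """Binary cross-entropy for one predicted probability p and target y."""
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)   # clamp to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def yolo_style_loss(pred, target, lambda_box=0.05, lambda_obj=1.0, lambda_cls=0.5):
    """Toy composite of the three parts named above: bounding-box regression
    (simple squared error here), objectness/confidence BCE, and per-class BCE."""
    box_loss = sum((p - t) ** 2 for p, t in zip(pred["box"], target["box"]))
    obj_loss = bce(pred["obj"], target["obj"])
    cls_loss = sum(bce(p, t) for p, t in zip(pred["cls"], target["cls"]))
    return lambda_box * box_loss + lambda_obj * obj_loss + lambda_cls * cls_loss
```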
In this scheme, the video frame acquisition module loads camera information from a configuration file and connects to the corresponding camera groups through a set polling algorithm based on RTSP (Real Time Streaming Protocol); it pulls the stream from each successfully connected camera and performs motion detection with the three-frame difference method;
Frames are taken at a fixed time interval from the video streams that pass motion detection; each video frame is given a unique timestamp, and the frame, its timestamp and the camera picture queue are packaged into an element; when the number of elements fills a batch, that batch is handed to the muck truck detection module for inference, and when a batch is still incomplete after a given time threshold, the system force-pushes the remaining elements to the muck truck detection module.
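The batch-or-timeout hand-off described above can be sketched as follows; the batch size, timeout value, class name and element layout are assumptions, since the patent does not specify them:

```python
import time
from collections import deque

BATCH_SIZE = 8        # assumed batch size
FLUSH_TIMEOUT = 2.0   # assumed seconds before a partial batch is force-pushed

class InferenceQueue:
    """Collects (frame, timestamp, camera_id) elements and releases them in
    batches; a partial batch is flushed once FLUSH_TIMEOUT has elapsed."""
    def __init__(self):
        self.buffer = deque()
        self.first_ts = None

    def push(self, frame, camera_id):
        ts = time.time()                 # unique timestamp for this frame
        if not self.buffer:
            self.first_ts = ts
        self.buffer.append((frame, ts, camera_id))
        return self._maybe_flush(ts)

    def _maybe_flush(self, now):
        if len(self.buffer) >= BATCH_SIZE or (
                self.buffer and now - self.first_ts >= FLUSH_TIMEOUT):
            batch = list(self.buffer)    # hand off to the detection module
            self.buffer.clear()
            return batch
        return None
```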
In this scheme, the detection process of the muck truck detection module is as follows:
(1) Feed the pictures into the muck truck detection model for inference, obtaining the prediction result for each picture: n prediction frames
ω_i = (x_i, y_i, w_i, h_i, z_i, p_i), i = 1, 2, …, n
where z_i is the predicted class: z_i = 0 denotes a muck truck, z_i = 1 a large truck, z_i = 2 a van, z_i = 3 a small open truck, z_i = 4 a car, z_i = 5 a bus, z_i = 6 a tank truck or concrete truck, and z_i = 7 other vehicles, including excavators, flatbeds and trucks carrying animals; p_i is the probability of the predicted class, 0 < p_i < 1;
(2) Compute the intersection-over-union IoU(ω_i, ω_j) of any two prediction frames.
Intersection of the two prediction boxes:
Inter(ω_i, ω_j) = max(min(x_i + w_i, x_j + w_j) - max(x_i, x_j) + 1, 0) × max(min(y_i + h_i, y_j + h_j) - max(y_i, y_j) + 1, 0)
Intersection-over-union:
IoU(ω_i, ω_j) = Inter(ω_i, ω_j) / (w_i · h_i + w_j · h_j - Inter(ω_i, ω_j))
(3) Set the threshold τ = 0.25; if IoU(ω_i, ω_j) ≥ τ and z_i = z_j, compare p_i and p_j and delete the prediction box with the lower probability;
(4) Draw the prediction frames with z_i = 0, i.e., the prediction frames of muck trucks, in the original image, and crop and store each such frame from the original image according to its (x, y, w, h) information.
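Steps (1)-(4) amount to class-wise non-maximum suppression followed by selecting the muck-truck class. A sketch in Python, using the +1-pixel intersection convention from the formulas above and w·h as the box area, as in the document; function names are illustrative:

```python
def iou(b1, b2):
    """Intersection-over-union of two (x, y, w, h) boxes (top-left corner)."""
    ix = max(min(b1[0] + b1[2], b2[0] + b2[2]) - max(b1[0], b2[0]) + 1, 0)
    iy = max(min(b1[1] + b1[3], b2[1] + b2[3]) - max(b1[1], b2[1]) + 1, 0)
    inter = ix * iy
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / union

def filter_predictions(preds, tau=0.25):
    """preds: list of (x, y, w, h, cls, prob). For any same-class pair with
    IoU >= tau, drop the lower-probability box; then keep class 0 (muck truck)."""
    keep = sorted(preds, key=lambda p: p[5], reverse=True)
    result = []
    for p in keep:
        if all(q[4] != p[4] or iou(p[:4], q[:4]) < tau for q in result):
            result.append(p)
    return [p for p in result if p[4] == 0]
```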
In this scheme, the detection process of the muck truck hopper area detection module is as follows:
(1) Feed each picture containing a muck truck into the hopper area detection model for inference, obtaining the prediction result: n prediction frames ω_i = (x_i, y_i, w_i, h_i, a_i, p_i), i = 1, 2, …, n, where a_i is the predicted class: a_i = 0 denotes the muck part, a_i = 1 the cover part, and a_i = 2 the empty-hopper part; p_i is the probability of the predicted class, 0 < p_i < 1;
(2) Compute the intersection-over-union IoU(ω_i, ω_j) of any two prediction frames.
Intersection of the two prediction boxes:
Inter(ω_i, ω_j) = max(min(x_i + w_i, x_j + w_j) - max(x_i, x_j) + 1, 0) × max(min(y_i + h_i, y_j + h_j) - max(y_i, y_j) + 1, 0)
Intersection-over-union:
IoU(ω_i, ω_j) = Inter(ω_i, ω_j) / (w_i · h_i + w_j · h_j - Inter(ω_i, ω_j))
(3) Set the threshold τ = 0.25; if IoU(ω_i, ω_j) ≥ τ and a_i = a_j, compare p_i and p_j and delete the prediction box with the lower probability;
(4) Draw the prediction frames with a_i = 0, 1, 2 in the original image; these comprise the muck part, the cover part and the empty-hopper part of the muck truck's hopper, the three parts that together make up the complete hopper; then compute the areas of the three predicted rectangular frames.
Let the muck part have area S_dirt, width W_dirt and height H_dirt; the cover part area S_cover, width W_cover and height H_cover; and the empty-hopper part area S_empty, width W_empty and height H_empty. The areas are computed as:
S_dirt = W_dirt × H_dirt
S_cover = W_cover × H_cover
S_empty = W_empty × H_empty
In this scheme, the judgment process of the uncovered-truck judgment module is as follows:
When a prediction frame with z = 0 is present, i.e., a muck truck has been detected, compute the muck uncovered rate r_uncover of its hopper; if r_uncover > 0.5, the muck truck is judged to be uncovered. The calculation is:
(1) If the muck part of the hopper is detected,
r_uncover = S_dirt / (S_dirt + S_cover + S_empty)
where the area of any undetected part is 0;
(2) If no muck part of the hopper is detected, r_uncover = 0.
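The judgment above reduces to a few lines of code. A sketch, with the 0.5 threshold from the text; function names are illustrative:

```python
def uncover_rate(s_dirt, s_cover, s_empty):
    """Muck uncovered rate r_uncover; an undetected part contributes area 0,
    and the rate is 0 when no hopper part was detected at all."""
    total = s_dirt + s_cover + s_empty
    if total == 0:
        return 0.0
    return s_dirt / total

def is_uncovered(w_dirt, h_dirt, w_cover, h_cover, w_empty, h_empty,
                 threshold=0.5):
    """Apply S = W * H per part, then compare r_uncover with the threshold."""
    r = uncover_rate(w_dirt * h_dirt, w_cover * h_cover, w_empty * h_empty)
    return r > threshold
```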
In the above scheme, the risk reporting and recording module is implemented as follows:
If an uncovered muck truck is detected, the corresponding picture is stored and uploaded to a MinIO server; meanwhile, the hopper detection information, including the muck part area, the cover part area, the empty-hopper area and the muck uncovered rate, is uploaded to Kafka by a producer.
Through the above technical scheme, the detection system and method for detecting an uncovered muck truck based on computer vision have the following beneficial effects:
The invention combines deep learning with object detection: a first object detection model detects the muck truck in a video frame; the truck is cropped from the original image and sent to a second object detection model, which detects the hopper area; the muck uncovered rate is then computed to judge whether the truck is uncovered. This avoids the low accuracy and frequent false alarms of a single object detection model. Compared with conventional approaches, the dual detection models subdivide the hopper area, and the introduced muck uncovered rate gives the covering check a definite quantitative criterion, avoiding ambiguous model-only judgments, strengthening supervision of muck trucks, and reducing manpower and material costs.
Drawings
To illustrate the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in their description are briefly introduced below.
FIG. 1 is a schematic diagram of the system for detecting an uncovered muck truck based on computer vision disclosed in an embodiment of the invention;
FIG. 2 is a schematic flow chart of the method for detecting an uncovered muck truck based on computer vision disclosed in an embodiment of the invention.
Detailed Description
The technical solution in the embodiments of the invention is described clearly and completely below with reference to the accompanying drawings.
As shown in FIG. 1, the invention provides a computer-vision-based detection system for uncovered muck trucks, comprising a dual-model training module, a video frame acquisition module, a muck truck detection module, a muck truck hopper area detection module, an uncovered-truck judgment module, a risk reporting and recording module, and a log module.
The dual-model training module is responsible for training the muck truck detection model and the muck truck hopper area detection model;
The video frame acquisition module is responsible for connecting to the corresponding camera groups through a polling algorithm and for motion detection on the video streams of the different cameras; if a video picture contains motion, frames are extracted at a fixed time interval and sent to an inference queue;
The muck truck detection module contains a muck truck detection model based on an improved YOLOv5x model; it detects the various vehicles in a picture, frames the outline of each detected muck truck with a rectangular box, crops these boxes from the original image, and sends them to the muck truck hopper area detection module;
The muck truck hopper area detection module contains a muck truck hopper area detection model based on an improved YOLOv5s model; it detects the hopper part of a muck truck, yielding a muck area, an empty-hopper area and a cover area;
The uncovered-truck judgment module is responsible for computing the muck uncovered rate from the detection result for the hopper, so as to judge whether the muck truck is covered;
The risk reporting and recording module is responsible for uploading risk information and storing risk pictures;
The log module is responsible for recording errors and warnings during system operation, facilitating later maintenance and modification.
A method for detecting an uncovered muck truck based on computer vision, using the above detection system, comprises the following steps, as shown in FIG. 2:
1. Model training stage
The video frame acquisition module collects pictures from a video stream at a fixed time interval, performs initial labeling, and produces an initial data set; the dual-model training module trains the muck truck detection model on this initial data set; using the trained model, the rectangular boxes whose detected class is muck truck are cropped from the pictures and stored separately, forming the data set for the muck truck hopper area detection model; the dual-model training module then trains the muck truck hopper area detection model on that data set.
1. Video frame acquisition module
Camera information is loaded from the configuration file, and the corresponding camera groups are connected through a set polling algorithm based on RTSP (Real Time Streaming Protocol); the stream is pulled from each successfully connected camera and motion detection is performed with the three-frame difference method. Frames are taken at a fixed time interval from the streams that pass motion detection; each frame is given a unique timestamp, and the frame, its timestamp and the camera picture queue are packaged into an element; when the number of elements fills a batch, the batch is handed to the muck truck detection module for inference, and when a batch is still incomplete after a given time threshold, the system force-pushes the remaining elements to the muck truck detection module.
The specific process is as follows:
(1) Connect to the corresponding camera through the set polling algorithm, based on RTSP;
(2) Perform motion detection on the video stream. Because the motion is fast, the three-frame difference method is used to detect whether the video picture contains motion, and GPU inference is invoked only when it does, saving resources when nothing moves; the saving can reach a factor of N (N > 5), while the motion detection itself occupies part of the CPU. The concrete implementation is as follows:
(2.1) Denote the (n+1)-th, n-th and (n-1)-th frames of the video sequence by f_{n+1}, f_n and f_{n-1}, and the gray values of their corresponding pixels by f_{n+1}(x, y), f_n(x, y) and f_{n-1}(x, y). Subtract the gray values of corresponding pixels of adjacent frames and take the absolute value to obtain the difference images D_{n+1} and D_n:
D_{n+1}(x, y) = |f_{n+1}(x, y) - f_n(x, y)|
D_n(x, y) = |f_n(x, y) - f_{n-1}(x, y)|
(2.2) Take the intersection of the two difference images:
D′_n(x, y) = |f_{n+1}(x, y) - f_n(x, y)| ∩ |f_n(x, y) - f_{n-1}(x, y)|
(2.3) Apply thresholding and connectivity analysis, and finally extract the moving target. If the threshold T is too small, noise in the difference image cannot be suppressed; if it is too large, part of the target information in the image is masked; moreover, a fixed threshold T cannot adapt to lighting changes in the scene. The invention therefore adds to the judgment condition an additive term sensitive to the overall illumination, modifying the condition to:
D′_n(x, y) > T + λ · (1 / N_A) · Σ_{(x,y)∈A} |f_n(x, y) - f_{n-1}(x, y)|
where N_A is the number of pixels in region A, λ is the illumination suppression coefficient, and A can be taken as the whole frame. The additive term
λ · (1 / N_A) · Σ_{(x,y)∈A} |f_n(x, y) - f_{n-1}(x, y)|
expresses the illumination change over the whole frame. If the illumination change in the scene is small, the term tends to 0; if the illumination changes significantly, the term grows, the right-hand side of the condition increases adaptively, and the final judgment is that there is no moving target, effectively suppressing the influence of lighting changes on motion detection.
(3) Take frames at 0.2 fps from video streams whose picture contains motion.
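The three-frame difference with the illumination-sensitive additive term can be sketched with NumPy as follows. The threshold T, coefficient λ and minimum pixel count are assumed values, region A is the whole frame as the text suggests, and taking the element-wise minimum is one common reading of the ∩ in the formula above:

```python
import numpy as np

def three_frame_motion(f_prev, f_cur, f_next, T=15, lam=2.0):
    """Three-frame difference motion mask with an illumination term.
    f_prev, f_cur, f_next: grayscale frames as uint8 arrays."""
    d1 = np.abs(f_next.astype(np.int32) - f_cur.astype(np.int32))
    d2 = np.abs(f_cur.astype(np.int32) - f_prev.astype(np.int32))
    diff = np.minimum(d1, d2)        # "intersection" of the two differences
    illum = lam * d2.mean()          # additive term over region A = whole frame
    return diff > (T + illum)

def has_motion(f_prev, f_cur, f_next, min_pixels=50):
    """Declare motion when enough pixels exceed the adaptive threshold."""
    return int(three_frame_motion(f_prev, f_cur, f_next).sum()) >= min_pixels
```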
2. Training of the muck truck detection model
(1) Extract frames at a fixed time interval from cameras on road sections with heavy traffic, store the video frames, and screen out the pictures containing muck trucks; the training set needs roughly 2000-3000 frames;
(2) Label the vehicles in each picture to obtain m reference frames σ_i = (x_i, y_i, w_i, h_i, l_i), i = 1, 2, …, m, whose five components are the abscissa and ordinate of the reference frame's upper-left corner, its width, its height, and its label; label l_i = 0 denotes a muck truck, l_i = 1 a large truck, l_i = 2 a van, l_i = 3 a small open truck, l_i = 4 a car, l_i = 5 a bus, l_i = 6 a tank truck or concrete truck, and l_i = 7 other vehicles, including excavators, flatbeds and trucks loaded with animals;
(3) Translate, rotate and scale the labeled pictures to enlarge the data set;
To increase data diversity and improve the model's behavior on test data, the content of the limited initial data set is enlarged by a series of means such as translation, rotation and scaling. The concrete implementation is as follows:
(3.1) Rotation and scaling:
The rotation-scaling matrix for angle θ and scale s is:
S = | s·cos θ  -s·sin θ |
    | s·sin θ   s·cos θ |
The rotation-scaling formula: dst(x, y) = src(x, y) × S
where src is the original picture, dst is the picture after the rotation-scaling transform, and x, y are the horizontal and vertical coordinates.
(3.2) Translation:
The translation matrix for offsets (t_x, t_y), in homogeneous coordinates:
T = | 1  0  t_x |
    | 0  1  t_y |
    | 0  0  1   |
The translation formula: dst(x, y) = src(x, y) × T
where src is the original picture, dst is the picture after the translation transform, and x, y are the horizontal and vertical coordinates.
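Applied to the labeled reference frames rather than to pixels, the two transforms can be sketched as follows; re-fitting an axis-aligned box to the transformed corners is one common convention and is an assumption here, as are the helper names:

```python
import numpy as np

def rotation_scale_matrix(theta, s):
    """2x3 affine matrix for rotation by theta (radians) and uniform scale s."""
    c, si = s * np.cos(theta), s * np.sin(theta)
    return np.array([[c, -si, 0.0],
                     [si,  c, 0.0]])

def translation_matrix(tx, ty):
    """2x3 affine matrix for a pure translation."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty]])

def transform_points(pts, M):
    """Apply a 2x3 affine matrix to an (n, 2) array of (x, y) points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coords
    return pts_h @ M.T

def transform_box(x, y, w, h, M):
    """Map a reference frame's four corners and re-fit an axis-aligned box,
    so labels follow their image under augmentation."""
    corners = np.array([[x, y], [x + w, y], [x, y + h], [x + w, y + h]], float)
    out = transform_points(corners, M)
    x0, y0 = out.min(axis=0)
    x1, y1 = out.max(axis=0)
    return x0, y0, x1 - x0, y1 - y0
```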
(4) Apply Mosaic data enhancement;
Four pictures are stitched together by random scaling, random cropping and random arrangement. Each picture carries its own boxes, so stitching yields a new picture together with the boxes belonging to it; feeding this new picture to the neural network for learning is therefore equivalent to feeding four pictures at once. The concrete implementation is as follows:
(4.1) Create a new mosaic canvas and randomly generate a point (x_c, y_c) on it;
(4.2) Place 4 tiles around the random point (x_c, y_c).
The canvas placement area of the top-left tile is (x_1a, y_1a, x_2a, y_2a). Two situations arise: if the picture does not overrun the canvas, the placement area is (x_c - w, y_c - h, x_c, y_c); if it does, the placement area is clipped to (0, 0, x_c, y_c). Combining the two situations, the canvas area is:
(x_1a, y_1a, x_2a, y_2a) = (max(x_c - w, 0), max(y_c - h, 0), x_c, y_c)
The picture area of the top-left tile is (x_1b, y_1b, x_2b, y_2b). Again two situations: if the picture does not overrun the canvas, no cropping is needed and the picture area is (0, 0, w, h); if it does, the overrunning part is cropped away, leaving the area (w - x_c, h - y_c, w, h). Combining the two situations, the picture area is:
(x_1b, y_1b, x_2b, y_2b) = (w - (x_2a - x_1a), h - (y_2a - y_1a), w, h)
The canvas placement area of the top-right tile is (x_1a, y_1a, x_2a, y_2a). If the picture does not overrun the canvas, the placement area is (x_c, y_c - h, x_c + w, y_c); if it does, the placement area is (x_c, 0, s_mosaic, y_c). Combining the two situations, the canvas area is:
(x_1a, y_1a, x_2a, y_2a) = (x_c, max(y_c - h, 0), min(x_c + w, s_mosaic), y_c)
The picture area of the top-right tile is (x_1b, y_1b, x_2b, y_2b). If the picture does not overrun the canvas, no cropping is needed and the picture area is (0, 0, w, h); if it does, the overrunning part is cropped away, leaving the area (0, h - (y_2a - y_1a), x_2a - x_1a, h). Combining the two situations, the picture area is:
(x_1b, y_1b, x_2b, y_2b) = (0, h - (y_2a - y_1a), min(w, x_2a - x_1a), h)
The lower-left and lower-right tiles are handled in the same way.
(4.3) Update the bbox coordinates. A bbox coordinate x is (x_min, y_min, x_max, y_max); adding the placement offsets padw and padh gives the mosaic bbox coordinates. The coordinate calculation formula is as follows:
y[:,0]=x[:,0]+padw
y[:,1]=x[:,1]+padh
y[:,2]=x[:,2]+padw
y[:,3]=x[:,3]+padh
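The placement and offset arithmetic above can be sketched in Python. This is a minimal illustration of the upper-left tile case only; the function names are ours, not from the source.

```python
def place_top_left(xc, yc, w, h):
    """Canvas placement area and cropped picture area for the upper-left tile."""
    # canvas area: clamp the tile so it never extends past the canvas origin
    x1a, y1a, x2a, y2a = max(xc - w, 0), max(yc - h, 0), xc, yc
    # picture area: keep the bottom-right part of the w x h picture,
    # cropping off exactly what was clipped by the canvas edge
    x1b, y1b = w - (x2a - x1a), h - (y2a - y1a)
    return (x1a, y1a, x2a, y2a), (x1b, y1b, w, h)

def shift_bbox(box, padw, padh):
    """Step (4.3): translate an (x_min, y_min, x_max, y_max) bbox by the tile's offset."""
    x_min, y_min, x_max, y_max = box
    return (x_min + padw, y_min + padh, x_max + padw, y_max + padh)
```

For a 200x150 tile placed around (300, 200) the tile fits fully, so no cropping occurs; around (100, 100) the left and top are clipped and the picture area starts mid-image.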
(5) Calculating a self-adaptive anchor frame;
During network training, the network outputs prediction frames on the basis of the initial anchor frames, compares them with the manually labeled ground-truth rectangular frames, calculates the difference between the two, and then updates the network parameters by reverse iteration. The concrete implementation is as follows:
(5.1) reading w and h of all pictures in the muck car training set and w and h of the detection frame;
(5.2) modifying the read coordinates to absolute coordinates;
(5.3) clustering all detection boxes in the training set by using a Kmeans algorithm to obtain k anchors;
(5.4) Mutate the obtained anchors with a genetic algorithm, evaluating each mutation with the fitness computed by the anchor_fitness method: if the mutation improves fitness, keep it, otherwise skip it; finally sort the resulting optimal anchors by area and return them.
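Steps (5.1)-(5.4) can be sketched as follows. This is a simplified illustration: it clusters (w, h) pairs with plain Euclidean k-means and omits the genetic mutation and anchor_fitness evaluation; all names are ours.

```python
import random

def kmeans_anchors(wh, k, iters=50, seed=0):
    """Cluster (w, h) box sizes into k anchor shapes with plain k-means.
    (YOLOv5 itself clusters on an IoU-based distance and then mutates the
    result with a genetic algorithm scored by anchor_fitness; omitted here.)"""
    rng = random.Random(seed)
    centers = rng.sample(wh, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in wh:
            # assign each box to the nearest current center
            j = min(range(k),
                    key=lambda c: (w - centers[c][0]) ** 2 + (h - centers[c][1]) ** 2)
            clusters[j].append((w, h))
        # recompute centers; keep the old center if a cluster went empty
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return sorted(centers, key=lambda c: c[0] * c[1])  # anchors ordered by area
```

With boxes that fall into two obvious size groups, the two returned anchors settle near the group means.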
(6) Self-adaptive picture scaling;
Adaptively add the fewest possible black borders to the scaled picture. This prevents excessive padding from introducing a large amount of redundant information and slowing down inference. The concrete implementation is as follows:
(6.1) calculating a scaling according to the size of the original picture in the muck car data set and the size of the picture input into the network;
(6.2) calculating the size of the zoomed picture according to the size of the original picture and the zooming proportion;
and (6.3) calculating a black edge filling value.
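Steps (6.1)-(6.3) amount to the letterbox computation sketched below, assuming a square network input and a stride-32 padding constraint as in common YOLOv5 implementations; the names are ours.

```python
def letterbox_params(w0, h0, new_size=640, stride=32):
    """Compute the adaptive scale and minimal black-border padding
    (a sketch of YOLOv5-style letterboxing)."""
    r = min(new_size / w0, new_size / h0)          # (6.1) scaling ratio
    new_w, new_h = round(w0 * r), round(h0 * r)    # (6.2) scaled picture size
    dw, dh = new_size - new_w, new_size - new_h    # raw padding to reach new_size
    dw, dh = dw % stride, dh % stride              # (6.3) minimal padding, multiple of stride
    return r, (new_w, new_h), (dw / 2, dh / 2)     # padding split between the two sides
```

A 1280x720 frame scales by 0.5 to 640x360 and needs only a thin top/bottom border instead of a full square's worth of black.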
(7) Train with a YOLOv5x network, mark each detected vehicle target with a solid rectangular box, construct the binary cross-entropy loss (BCE loss), comprising three parts: bounding-box regression loss, confidence prediction loss, and class prediction loss, and perform back propagation.
The loss function is formulated as follows:
Loss = L_box + L_obj + L_cls, the sum of the bounding-box regression loss, the confidence prediction loss, and the class prediction loss.
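The binary cross-entropy building block of this loss can be sketched as follows. The equal-weight sum of the three parts is an assumption for illustration; YOLOv5 applies per-part gain factors, and the function names are ours.

```python
import math

def bce(p, y):
    """Binary cross-entropy for one predicted probability p against target y in {0, 1}."""
    eps = 1e-12  # avoid log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def total_loss(box_loss, obj_terms, cls_terms):
    """Total loss as the sum of the three parts named in the text; equal weighting
    is assumed here for illustration."""
    l_obj = sum(bce(p, y) for p, y in obj_terms)   # confidence prediction loss
    l_cls = sum(bce(p, y) for p, y in cls_terms)   # class prediction loss
    return box_loss + l_obj + l_cls
```

For example, a prediction of 0.5 against either target contributes ln 2 to the corresponding BCE term.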
3. Training of the muck truck hopper area detection model
(1) Using the trained muck truck detection model, segment from the pictures the rectangular boxes whose detection category is muck truck, and store them separately to form the data set for the muck truck hopper area detection model;
(2) Label the hopper part in the picture to obtain n reference frames ρ_i(x_i, y_i, w_i, h_i, t_i), where i = 1, 2, …, n; the five components x_i, y_i, w_i, h_i, t_i are respectively the abscissa and ordinate of the upper-left corner of the reference frame, the width and height of the reference frame, and the label; label t_i = 0 denotes the muck part, t_i = 1 the cover part, and t_i = 2 the empty hopper part;
(3) Translate, rotate, and scale the labeled pictures to enlarge the data set;
(4) Adopting Mosaic to perform data enhancement;
(5) Calculating a self-adaptive anchor frame;
(6) Self-adaptive picture scaling;
(7) Train with a YOLOv5s network, mark the detected muck part, cover part, and empty hopper part with solid rectangular boxes, construct the binary cross-entropy loss, comprising bounding-box regression loss, confidence prediction loss, and class prediction loss, and perform back propagation.
Steps (3)-(7) are the same as in the training of the muck truck detection model.
2. Detection phase
The video frame acquisition module collects pictures from the video stream to be detected at a certain time interval (the process is the same as above) and inputs them into the trained muck truck detection module. The muck truck detection model detects the muck trucks appearing in the picture and frames their outlines with rectangular boxes; each framed muck truck is segmented from the original image and passed to the muck truck hopper area detection module. The hopper area detection model detects the hopper area of the muck truck and divides it into three regions: the muck part, the empty hopper part, and the cover part, corresponding to three detection classes. The muck truck uncovered judgment module calculates the areas of the three regions from the hopper detection result to obtain the uncovered rate of the muck truck; if the uncovered rate is greater than a set threshold, the muck truck is judged to be uncovered, otherwise it is judged to be covered. The risk reporting and recording module stores the picture information of uncovered muck trucks and the hopper detection results.
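The two-stage flow described above can be sketched with hypothetical stand-in detectors. Here detect_trucks and detect_hopper are placeholders for the trained models, and the uncovered-rate formula assumes muck area over total hopper area, as the judgment module's conditions imply.

```python
def detect_pipeline(frame, detect_trucks, detect_hopper, threshold=0.5):
    """Two-stage detection: find muck trucks, crop each, detect hopper regions,
    then judge covered/uncovered by area ratio. detect_trucks and detect_hopper
    are hypothetical stand-ins returning (x, y, w, h) boxes and region areas."""
    reports = []
    for (x, y, w, h) in detect_trucks(frame):
        crop = ("crop", x, y, w, h)        # placeholder for frame[y:y+h, x:x+w]
        regions = detect_hopper(crop)      # e.g. {"dirt": area, "cover": area, "empty": area}
        total = sum(regions.values())
        r_uncover = regions.get("dirt", 0) / total if total else 0.0
        if r_uncover > threshold:
            reports.append(((x, y, w, h), r_uncover))  # uncovered truck to report
    return reports
```

Swapping in stub lambdas for the two detectors makes the control flow easy to exercise without any model weights.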
1. The detection process of the muck truck detection module is as follows:
(1) Send the pictures into the muck truck detection model for inference to obtain the prediction result; the n prediction frames are (x_i, y_i, w_i, h_i, z_i, p_i), i = 1, 2, …, n,
where z_i is the predicted class: z_i = 0 denotes a muck truck, z_i = 1 a large truck, z_i = 2 a van, z_i = 3 a small open wagon, z_i = 4 a car, z_i = 5 a bus, z_i = 6 a tank wagon or concrete truck, and z_i = 7 other vehicles, including excavators, flatbeds, and trucks carrying animals; p_i is the probability of the predicted class, 0 < p_i < 1;
(2) Calculate the intersection-over-union IoU of any two prediction frames i and j.
Intersection of the two prediction boxes:
Inter(i, j) = max(min(x_i + w_i, x_j + w_j) - max(x_i, x_j) + 1, 0) × max(min(y_i + h_i, y_j + h_j) - max(y_i, y_j) + 1, 0)
Intersection-over-union:
IoU(i, j) = Inter(i, j) / (w_i*h_i + w_j*h_j - Inter(i, j))
(3) Set the threshold τ = 0.25; if IoU(i, j) ≥ τ and z_i = z_j, compare p_i and p_j and delete the prediction box with the lower probability;
(4) Draw the prediction frames with z_i = 0, i.e. the muck truck prediction frames, in the original image, then segment them from the original image according to their (x, y, w, h) information and store them.
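Steps (2)-(4) amount to a class-wise non-maximum suppression, which can be sketched as follows. This is a minimal illustration using the intersection formula from the text; the function names are ours.

```python
def inter(b1, b2):
    """Intersection area of two (x, y, w, h, ...) boxes, following the formula in the text."""
    x1, y1, w1, h1 = b1[:4]
    x2, y2, w2, h2 = b2[:4]
    iw = max(min(x1 + w1, x2 + w2) - max(x1, x2) + 1, 0)
    ih = max(min(y1 + h1, y2 + h2) - max(y1, y2) + 1, 0)
    return iw * ih

def iou(b1, b2):
    """Intersection-over-union of two boxes."""
    i = inter(b1, b2)
    return i / (b1[2] * b1[3] + b2[2] * b2[3] - i)

def nms(preds, tau=0.25):
    """Keep the higher-probability box whenever two same-class boxes overlap with
    IoU >= tau; preds are (x, y, w, h, cls, p) tuples."""
    keep = sorted(preds, key=lambda b: -b[5])  # highest probability first
    out = []
    for b in keep:
        if all(b[4] != k[4] or iou(b, k) < tau for k in out):
            out.append(b)
    return out
```

Two nearly coincident boxes of the same class collapse to the higher-probability one, while distant boxes and boxes of other classes survive.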
2. The detection process of the muck truck hopper area detection module is as follows:
(1) Send the picture containing the muck truck into the muck truck hopper area detection model for inference to obtain the prediction result; the n prediction frames are ω_i(x_i, y_i, w_i, h_i, a_i, p_i), i = 1, 2, …, n, where a_i is the predicted class: a_i = 0 is the muck part, a_i = 1 the cover part, and a_i = 2 the empty hopper part; p_i is the probability of the predicted class, 0 < p_i < 1;
(2) Calculate the intersection-over-union IoU(ω_i, ω_j) of any two prediction frames.
The intersection of the two prediction boxes:
Inter(ω_i, ω_j) = max(min(x_i + w_i, x_j + w_j) - max(x_i, x_j) + 1, 0) × max(min(y_i + h_i, y_j + h_j) - max(y_i, y_j) + 1, 0)
Intersection-over-union:
IoU(ω_i, ω_j) = Inter(ω_i, ω_j) / (w_i*h_i + w_j*h_j - Inter(ω_i, ω_j))
(3) Set the threshold τ = 0.25; if IoU(ω_i, ω_j) ≥ τ and a_i = a_j, compare p_i and p_j and delete the prediction box with the lower probability;
(4) Draw the prediction frames with a_i = 0, 1, 2 in the original image; these comprise the muck part, the cover part, and the empty hopper part of the muck truck, and the complete hopper consists of these three parts. Calculate the areas of the three predicted rectangular frames.
Let the area of the muck part be S_dirt with width W_dirt and height H_dirt; the area of the cover part be S_cover with width W_cover and height H_cover; and the area of the empty hopper part be S_empty with width W_empty and height H_empty. The area calculation formulas are as follows:
S_dirt = W_dirt * H_dirt
S_cover = W_cover * H_cover
S_empty = W_empty * H_empty
3. The judgment process of the muck truck uncovered judgment module is as follows:
When a prediction frame with z = 0 exists, i.e. a muck truck is present, calculate the uncovered rate r_uncover of the muck truck hopper; if r_uncover > 0.5, the muck truck is judged to be uncovered. The calculation formula is as follows:
(1) If the muck part of the hopper is detected:
r_uncover = S_dirt / (S_dirt + S_cover + S_empty)
where the area of any undetected part is taken as 0;
(2) If no muck part is detected, r_uncover = 0.
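The judgment rule can be sketched as follows, assuming the uncovered rate is the muck area over the total detected hopper area, with undetected parts contributing zero; the function names are ours.

```python
def uncover_rate(s_dirt=0.0, s_cover=0.0, s_empty=0.0):
    """Uncovered rate of the hopper: muck area over total hopper area.
    Any part not detected contributes area 0; if no muck part is detected,
    the rate is 0."""
    if s_dirt == 0:
        return 0.0
    return s_dirt / (s_dirt + s_cover + s_empty)

def is_uncovered(s_dirt, s_cover, s_empty, threshold=0.5):
    """Judge the truck uncovered when the rate exceeds the threshold (0.5 in the text)."""
    return uncover_rate(s_dirt, s_cover, s_empty) > threshold
```

The areas fed in are simply the W * H products of the three predicted rectangles from the previous step.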
4. The risk reporting and recording module is implemented as follows:
If a detected muck truck is uncovered, store the corresponding muck truck picture and upload it to the MinIO server; meanwhile, the hopper detection information, including the area of the muck part, the area of the cover part, the area of the empty hopper part, and the uncovered rate of the muck, is uploaded to Kafka by a producer.
The log module is responsible for recording errors and warning information in the operation of the system, and facilitates later maintenance and modification.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A computer-vision-based muck truck uncovered detection system, characterized by comprising a dual-model training module, a video frame acquisition module, a muck truck detection module, a muck truck hopper area detection module, a muck truck uncovered judgment module, a risk reporting and recording module, and a log module;
the double-model training module is responsible for training a muck car detection model and a muck car hopper area detection model;
the video frame acquisition module is responsible for connecting corresponding camera groups through a polling algorithm, performing motion detection on video streams of different cameras, and if video pictures move, extracting frames at certain time intervals and sending the frames to an inference queue;
the muck car detection module comprises a muck car detection model, the muck car detection model is based on an improved YOLOv5x model and is used for detecting various vehicles in the picture, outlines of the detected muck cars are framed by rectangular frames, and the outlines are divided from the original image and sent to the muck car hopper area detection module;
the muck truck hopper area detection module comprises a muck truck hopper area detection model, which is based on an improved YOLOv5s model and is used for detecting the hopper part of the muck truck to obtain the muck area, the empty hopper area, and the cover area;
the muck truck uncovered judgment module is responsible for calculating the uncovered rate from the hopper detection result, so as to judge whether the muck truck is covered;
the risk reporting and recording module is responsible for uploading risk information and storing risk pictures;
the log module is responsible for recording errors and warning information in the operation of the system, and is convenient for later maintenance and modification.
2. A method for detecting the uncovering of a muck truck based on computer vision, adopting the computer-vision-based muck truck uncovered detection system as claimed in claim 1, characterized by comprising the following steps:
A model training stage: the video frame acquisition module collects pictures from the video stream at a certain time interval, performs initial labeling, and makes an initial data set; the dual-model training module trains the muck truck detection model with the initial data set, then uses the trained model to segment the rectangular boxes whose detection category is muck truck from the pictures and stores them separately to form the data set for the muck truck hopper area detection model; the dual-model training module then trains the muck truck hopper area detection model with this data set;
A detection stage: the video frame acquisition module collects pictures from the video stream to be detected at a certain time interval and inputs them into the trained muck truck detection module; the muck truck detection model detects the muck trucks appearing in the picture and frames their outlines with rectangular boxes; each framed muck truck is segmented from the original image and passed to the muck truck hopper area detection module; the hopper area detection model detects the hopper area of the muck truck and divides it into three regions: the muck part, the empty hopper part, and the cover part, corresponding to three detection classes; the muck truck uncovered judgment module calculates the areas of the three regions from the hopper detection result to obtain the uncovered rate of the muck truck; if the uncovered rate is greater than a set threshold, the muck truck is judged to be uncovered, otherwise it is judged to be covered; and the risk reporting and recording module stores the picture information of uncovered muck trucks and the hopper detection results.
3. The method for detecting the uncovering of the muck truck based on computer vision as claimed in claim 2, characterized in that, in the model training stage, the training of the muck truck detection model is as follows:
(1) Extract frames at a certain time interval from cameras on road sections with heavy traffic, store the video frames, and screen out the pictures containing muck trucks;
(2) Label the vehicles in the picture to obtain m reference frames σ_i(x_i, y_i, w_i, h_i, l_i), where i = 1, 2, …, m; the five components x_i, y_i, w_i, h_i, l_i are respectively the abscissa and ordinate of the upper-left corner of the reference frame, the width and height of the reference frame, and the label; label l_i = 0 denotes a muck truck, l_i = 1 a large truck, l_i = 2 a van, l_i = 3 a small open wagon, l_i = 4 a car, l_i = 5 a bus, l_i = 6 a tank wagon or concrete truck, and l_i = 7 other vehicles, including excavators, flatbeds, and trucks carrying animals;
(3) Translate, rotate, and scale the labeled pictures to enlarge the data set;
(4) Adopting Mosaic to perform data enhancement;
(5) Calculating a self-adaptive anchor frame;
(6) Self-adaptive picture scaling;
(7) Train with a YOLOv5x network, mark each detected vehicle target with a solid rectangular box, construct the binary cross-entropy loss, comprising bounding-box regression loss, confidence prediction loss, and class prediction loss, and perform back propagation.
4. The method for detecting the uncovering of the muck truck based on computer vision as claimed in claim 2, characterized in that, in the model training stage, the training of the muck truck hopper area detection model is as follows:
(1) Using the trained muck truck detection model, segment from the pictures the rectangular boxes whose detection category is muck truck, and store them separately to form the data set for the muck truck hopper area detection model;
(2) Label the hopper part in the picture to obtain n reference frames ρ_i(x_i, y_i, w_i, h_i, t_i), where i = 1, 2, …, n; the five components x_i, y_i, w_i, h_i, t_i are respectively the abscissa and ordinate of the upper-left corner of the reference frame, the width and height of the reference frame, and the label; label t_i = 0 denotes the muck part, t_i = 1 the cover part, and t_i = 2 the empty hopper part;
(3) Translate, rotate, and scale the labeled pictures to enlarge the data set;
(4) Adopting Mosaic to perform data enhancement;
(5) Calculating a self-adaptive anchor frame;
(6) Self-adaptive picture scaling;
(7) Train with a YOLOv5s network, mark the detected muck part, cover part, and empty hopper part with solid rectangular boxes, construct the binary cross-entropy loss, comprising bounding-box regression loss, confidence prediction loss, and class prediction loss, and perform back propagation.
5. The method for detecting the uncovering of the muck truck based on computer vision as claimed in claim 2, characterized in that the video frame acquisition module loads camera information from the configuration file and connects to the corresponding camera group through a set polling algorithm based on the RTSP protocol; it takes streams from the successfully connected cameras and performs motion detection by the three-frame difference method;
frames are taken at a certain time interval from the video stream that passes motion detection; a unique timestamp is attached to each video frame, and the video frame, the timestamp, and the camera picture queue are packaged into an element; when the number of elements fills one batch, that batch of elements is handed to the muck truck detection module for inference, and if a batch is not filled within a given time threshold, the system forcibly pushes the remaining elements to the muck truck detection module.
6. The method for detecting the uncovering of the muck truck based on computer vision as claimed in claim 2, characterized in that the detection process of the muck truck detection module is as follows:
(1) Send the pictures into the muck truck detection model for inference to obtain the prediction result; the n prediction frames are (x_i, y_i, w_i, h_i, z_i, p_i), i = 1, 2, …, n, where z_i is the predicted class: z_i = 0 denotes a muck truck, z_i = 1 a large truck, z_i = 2 a van, z_i = 3 a small open wagon, z_i = 4 a car, z_i = 5 a bus, z_i = 6 a tank wagon or concrete truck, and z_i = 7 other vehicles, including excavators, flatbeds, and trucks carrying animals; p_i is the probability of the predicted class, 0 < p_i < 1;
(2) Calculate the intersection-over-union IoU of any two prediction frames i and j.
Intersection of the two prediction boxes:
Inter(i, j) = max(min(x_i + w_i, x_j + w_j) - max(x_i, x_j) + 1, 0) × max(min(y_i + h_i, y_j + h_j) - max(y_i, y_j) + 1, 0)
Intersection-over-union:
IoU(i, j) = Inter(i, j) / (w_i*h_i + w_j*h_j - Inter(i, j))
(3) Set the threshold τ = 0.25; if IoU(i, j) ≥ τ and z_i = z_j, compare p_i and p_j and delete the prediction box with the lower probability;
(4) Draw the prediction frames with z_i = 0, i.e. the muck truck prediction frames, in the original image, then segment them from the original image according to their (x, y, w, h) information and store them.
7. The method for detecting the uncovering of the muck truck based on computer vision as claimed in claim 2, characterized in that the detection process of the muck truck hopper area detection module is as follows:
(1) Send the picture containing the muck truck into the muck truck hopper area detection model for inference to obtain the prediction result; the n prediction frames are ω_i(x_i, y_i, w_i, h_i, a_i, p_i), i = 1, 2, …, n, where a_i is the predicted class: a_i = 0 is the muck part, a_i = 1 the cover part, and a_i = 2 the empty hopper part; p_i is the probability of the predicted class, 0 < p_i < 1;
(2) Calculate the intersection-over-union IoU(ω_i, ω_j) of any two prediction frames.
Intersection of the two prediction boxes:
Inter(ω_i, ω_j) = max(min(x_i + w_i, x_j + w_j) - max(x_i, x_j) + 1, 0) × max(min(y_i + h_i, y_j + h_j) - max(y_i, y_j) + 1, 0)
Intersection-over-union:
IoU(ω_i, ω_j) = Inter(ω_i, ω_j) / (w_i*h_i + w_j*h_j - Inter(ω_i, ω_j))
(3) Set the threshold τ = 0.25; if IoU(ω_i, ω_j) ≥ τ and a_i = a_j, compare p_i and p_j and delete the prediction box with the lower probability;
(4) Draw the prediction frames with a_i = 0, 1, 2 in the original image; these comprise the muck part, the cover part, and the empty hopper part of the muck truck, and the complete hopper consists of these three parts. Calculate the areas of the three predicted rectangular frames.
Let the area of the muck part be S_dirt with width W_dirt and height H_dirt; the area of the cover part be S_cover with width W_cover and height H_cover; and the area of the empty hopper part be S_empty with width W_empty and height H_empty. The area calculation formulas are as follows:
S_dirt = W_dirt * H_dirt
S_cover = W_cover * H_cover
S_empty = W_empty * H_empty
8. The method for detecting the uncovering of the muck truck based on computer vision as claimed in claim 7, characterized in that the judgment process of the muck truck uncovered judgment module is as follows:
When a prediction frame with z = 0 exists, i.e. a muck truck is present, calculate the uncovered rate r_uncover of the muck truck hopper; if r_uncover > 0.5, the muck truck is judged to be uncovered. The calculation formula is as follows:
(1) If the muck part of the hopper is detected:
r_uncover = S_dirt / (S_dirt + S_cover + S_empty)
where the area of any undetected part is taken as 0;
(2) If no muck part is detected, r_uncover = 0.
9. The method for detecting the uncovering of the muck truck based on computer vision as claimed in claim 2, characterized in that the risk reporting and recording module is implemented as follows:
If a detected muck truck is uncovered, store the corresponding muck truck picture and upload it to the MinIO server; meanwhile, the hopper detection information, including the area of the muck part, the area of the cover part, the area of the empty hopper part, and the uncovered rate of the muck, is uploaded to Kafka by a producer.
CN202211382753.4A 2022-11-07 2022-11-07 Detection system and detection method for detecting whether muck truck is covered with no cover based on computer vision Pending CN115511879A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211382753.4A CN115511879A (en) 2022-11-07 2022-11-07 Detection system and detection method for detecting whether muck truck is covered with no cover based on computer vision

Publications (1)

Publication Number Publication Date
CN115511879A (en) 2022-12-23

Family

ID=84512232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211382753.4A Pending CN115511879A (en) 2022-11-07 2022-11-07 Detection system and detection method for detecting whether muck truck is covered with no cover based on computer vision

Country Status (1)

Country Link
CN (1) CN115511879A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861904A (en) * 2023-02-23 2023-03-28 青岛创新奇智科技集团股份有限公司 Method and system for generating slag car roof fall detection model


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination