CN112785557A - Belt material flow detection method and device and belt material flow detection system
- Publication number: CN112785557A
- Application number: CN202011642375.XA
- Authority: CN (China)
- Prior art keywords: material flow; flow detection; image; detection model; training
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/0004: Image analysis; inspection of images, e.g. flaw detection; industrial image inspection
- G06N3/045: Computing arrangements based on biological models; neural networks; combinations of networks
- G06N3/08: Computing arrangements based on biological models; neural networks; learning methods
- G06T2207/10016: Image acquisition modality; video; image sequence
- G06T2207/20081: Special algorithmic details; training; learning
- G06T2207/20084: Special algorithmic details; artificial neural networks [ANN]
Abstract
The application relates to a belt material flow detection method and device and a belt material flow detection system. The method comprises the following steps: acquiring a test sample image and a training sample image; labeling the position of the material flow in the training sample image to obtain training sample data; training a pre-constructed material flow detection model with the training sample data and determining the material flow detection model corresponding to each weight; testing the trained material flow detection models with the test sample images and determining a target material flow detection model according to the accuracy evaluation result of each model; and then performing belt material flow detection on the image to be detected with the target material flow detection model. By applying computer-vision deep learning to belt material flow detection, the method improves detection accuracy and efficiency in complex environments, effectively solves the problems of missed and false detections, ensures equipment safety and equipment utilization, reduces later-maintenance difficulty, and saves resources.
Description
Technical Field
The application relates to the technical field of material flow detection, in particular to a belt material flow detection method and device and a belt material flow detection system.
Background
In recent years, with the rapid development of the national economy, the demand for energy has grown day by day, making efficient coal transportation critically important; to secure the energy supply, a batch of dedicated coal terminals has been built in succession at Huanghua, Tianjin, Caofeidian, Jingtang, Qinhuangdao and other ports. Coal in a port is generally transported by belt conveyor: it is dumped onto the belt conveyor from a car dumper, carried to a storage yard by stackers and stacker-reclaimers, and then loaded onto ships by a ship loader. The belt conveyor is therefore the main coal transportation equipment in a port. Throughout the coal transportation chain, material flow information at specific positions on the belt conveyor provides early warning and helps prevent accidents such as material spillage on the belt.
Conventional belt material flow detection is generally implemented with contact devices such as tilt switches and pull-cord switches, but the inventors found the following problems in practice: because these switches are in direct contact with the material flow and the belt conveyor runs at high speed, long-term use loosens and wears the contact switches, causing false or missed detections; moreover, thin material flows cannot be detected at all.
Disclosure of Invention
Therefore, it is necessary to provide a belt material flow detection method and device and a belt material flow detection system that can effectively avoid the false and missed detections caused by failures of the detection hardware itself and improve detection accuracy and reliability.
The embodiment of the application provides a belt material flow detection method, which comprises the following steps:
acquiring a test sample image and a training sample image;
marking the training sample image to obtain training sample data, wherein the marked content comprises the position of the material flow;
training a pre-constructed material flow detection model based on training sample data to obtain each model weight, and determining the material flow detection model corresponding to each model weight;
inputting the test sample image into each material flow detection model to evaluate the accuracy of the material flow detection model corresponding to each model weight;
determining a target material flow detection model according to the accuracy rate evaluation result of each material flow detection model;
and inputting the image to be detected into a target material flow detection model to obtain material flow information of the image to be detected.
In one embodiment, the step of labeling the training sample image to obtain training sample data includes:
and marking the position of the material flow in the training sample image by using a marking frame to obtain the position information of the material flow, wherein the position information of the material flow comprises the position information of the marking frame.
In one embodiment, the training of the pre-constructed material flow detection model based on the training sample data to obtain each model weight, and the step of determining the material flow detection model corresponding to each model weight includes:
according to the basic network initial value, pre-training a pre-constructed material flow detection model based on training sample data to obtain pre-training weights of the models, and determining the initial material flow detection model corresponding to each pre-training weight;
according to the pre-training weight, performing formal training on each initial material flow detection model based on training sample data to obtain a formal training weight of each model;
and determining the material flow detection model corresponding to the formal training weight of each model.
In one embodiment, the step of inputting the test sample image to each flow detection model to evaluate the accuracy of the flow detection model corresponding to each weight comprises:
inputting the test sample image into each material flow detection model to obtain a detection result;
calculating the accuracy and the recall rate of each material flow detection model according to the detection result;
calculating the average accuracy of each material flow detection model according to the accuracy and the recall rate;
the step of determining the target material flow detection model according to the accuracy rate evaluation result of each material flow detection model comprises the following steps:
and determining the material flow detection model with the highest average accuracy as the target material flow detection model.
In one embodiment, the detection result includes position information of the prediction box; inputting the test sample image into each material flow detection model, and obtaining the detection result, wherein the step comprises the following steps:
and inputting the test sample image into each material flow detection model to obtain the position information of the prediction frame, wherein the position information of the prediction frame is used for representing the material flow position detected in the test.
In one embodiment, the step of calculating the accuracy and recall of each stream detection model according to the detection result comprises:
and according to the position information of the labeling frame and the position information of the prediction frame, calculating the accuracy and the recall ratio of each material flow detection model by using the following formulas:
the Precision of each material flow detection model is Precision, the Recall of each material flow detection module is called, TP is the number of prediction frames in the prediction frames, the intersection ratio of the prediction frames to the marking frames is larger than a set proportion threshold, FP is the number of prediction frames in the prediction frames, the intersection ratio of the prediction frames to the marking frames is smaller than the set proportion threshold, and FN is the number of marking frames in the marking frames, which are not predicted.
In one embodiment, the step of calculating the average accuracy for each stream detection model based on accuracy and recall comprises:
obtaining the accuracy and the recall rate of each material flow detection model under different set proportion thresholds;
obtaining a precision-recall curve for each material flow detection model according to the accuracy and the recall rate of that model under different set proportion thresholds;
and calculating the average accuracy of each material flow detection model according to its precision-recall curve.
In one embodiment, the training sample images comprise:
a first material flow image, which is a sample image with obvious material flow;
a second material flow image, which is a sample image with thin material flow;
a third material flow image, which is a sample image without material flow.
In one embodiment, before the step of acquiring the test sample image and the training sample image, the method further comprises:
collecting material flow video data;
performing frame extraction on the material flow video data according to a preset rule to obtain original sample image data;
the steps of obtaining the test sample image and the training sample image include:
the raw sample image data is divided into a test sample image and a training sample image.
In one embodiment, the material flow information of the image to be detected comprises material flow position information and confidence coefficient of the image to be detected; inputting an image to be detected into a target material flow detection model, and obtaining material flow information of the image to be detected, wherein the method comprises the following steps:
preprocessing an image to be detected;
extracting material flow characteristics in the preprocessed image to be detected;
fusing the characteristics of all material flows through a characteristic fusion network to obtain fusion characteristics;
and determining the material flow position information and the confidence coefficient of the image to be detected according to the fusion characteristics.
The embodiment of the present application further provides a material flow detection device, and the device includes:
the sample image acquisition module is used for acquiring a test sample image and a training sample image;
the sample labeling module is used for labeling the training sample image to obtain training sample data, wherein the labeled content comprises the position of the material flow;
the model training module is used for training a pre-constructed material flow detection model based on training sample data to obtain each model weight and determining the material flow detection model corresponding to each model weight;
the test module is used for inputting the test sample image into each material flow detection model so as to evaluate the accuracy of the material flow detection model corresponding to each model weight;
the target material flow detection model selection module is used for determining a target material flow detection model according to the accuracy rate evaluation result of each material flow detection model;
and the material flow detection realization module is used for inputting the image to be detected into the target material flow detection model to obtain the material flow information of the image to be detected.
The embodiment of the application also provides a belt material flow detection system, which comprises computer equipment and an image acquisition device; the image acquisition device is used for acquiring a test sample image, a training sample image and an image to be detected; the computer equipment is in communication connection with the image acquisition device and comprises a memory and a processor, wherein the memory stores a computer program, and the processor executes the computer program to realize the steps of the belt material flow detection method.
One or more technical solutions provided by the embodiments of the present application have at least the following beneficial effects. The collected sample images are divided into test sample images and training sample images. The training sample images are first labeled to obtain training sample data; a pre-constructed material flow detection model is then trained with deep learning, and the material flow detection model under each model weight is determined. To further improve the accuracy of the detection model, the accuracy of each material flow detection model is evaluated with the test sample images, and the detection model with the highest accuracy is selected as the target material flow detection model, so the material flow information detected in the image to be detected with this model has high accuracy and reliability. Applying computer-vision deep learning to belt material flow detection replaces traditional hardware detection; by collecting a large amount of material flow detection image data in complex scenes, the material flow detection model extracts, fuses and constructs material flow depth features by self-learning, without hand-designed features. The constructed material flow detection model adapts well to various environments, improves detection accuracy in complex environments, effectively solves the problems of missed and false detections of material flow, improves operating efficiency, guarantees equipment safety, increases equipment utilization, reduces later-maintenance difficulty, and saves manpower and material resources.
Drawings
FIG. 1 is a diagram of an environment in which a belt material flow detection method is applied and a schematic diagram of a belt material flow detection system according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow diagram of a belt flow detection method in one embodiment of the present application;
FIG. 3 is a schematic flow diagram of a belt flow detection method in yet another embodiment of the present application;
fig. 4 is a schematic flowchart illustrating a procedure of training a pre-constructed material flow detection model based on training sample data to obtain each model weight and determining a material flow detection model corresponding to each model weight in another embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating a portion of the steps of a belt flow detection method according to an embodiment of the present application;
FIG. 6 is a schematic flow chart illustrating a portion of the steps of a belt flow detection method according to yet another embodiment of the present application;
FIG. 7 is a diagram of a network model architecture of YOLO-v4 suitable for use in embodiments of the present application;
FIG. 8 is a schematic representation of Backbone basic module CBL of YOLO-v4, suitable for use in the examples of the present application;
FIG. 9 is a schematic representation of Backbone basic module CBM of YOLO-v4, suitable for use in the examples of the present application;
FIG. 10 is a diagram of Backbone basic module Res unit of YOLO-v4 suitable for use in embodiments of the present application;
FIG. 11 is a schematic diagram of Backbone basic module CSPX of YOLO-v4 suitable for use in the examples of the present application;
FIG. 12 is a schematic diagram of a Backbone basic module SPP of YOLO-v4 suitable for use in the embodiments of the present application;
FIG. 13 is a schematic diagram of the Neck base components FPN and PAN of a YOLO-v4 suitable for use in embodiments of the present application;
FIG. 14 is a graph illustrating an overall training process of a flow detection model suitable for use in an embodiment of the present application;
FIG. 15 is a schematic diagram illustrating an actual detection effect of a flow detection method applied in the embodiment of the present application;
FIG. 16 is a block diagram of a belt material flow detection device according to an embodiment of the present disclosure;
fig. 17 is an internal structural diagram of a computer device in one embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, specific embodiments of the present application are described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the application and do not limit it. It should further be noted that, for convenience of description, only the portions relevant to the present application, rather than all of it, are shown in the drawings. Before discussing the exemplary embodiments in more detail, note that some of them are described as processes or methods depicted as flowcharts. Although a flowchart describes operations (or steps) as a sequential process, many of the operations can be performed in parallel or concurrently, and the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, and the like.
Regarding the background art above: in the conventional approach of detecting material flow with hardware, on the one hand, because contact tilt switches, pull-cord switches and the like are in direct contact with the material flow while the belt conveyor runs at high speed, long-term use loosens and wears the contact switch devices, causing false or missed detections, especially for thin material flows. On the other hand, for conventional approaches that use devices such as ultrasonic switches for material flow detection, the situation is the same as for contact detection devices: the mounting position is close to the belt conveyor, and the equipment must be shut down for later maintenance, which reduces coal conveying efficiency and also wastes manpower and material resources.
In conjunction with an applicable scenario of the embodiment of the present application, as shown in fig. 1, the type and installation requirements of the image acquisition device 10 are described. Taking a port as an example, because the port operating environment is special, selecting a fully featured camera suitable for material flow detection is important; a coal material flow detection camera needs to meet the following requirements: 1) stray-light interference prevention: because illumination interferes with image quality and directly influences the final detection result, an automatic aperture, a slow shutter, and white-balance self-adjustment with backlight compensation are needed; 2) electronic image stabilization: port coal transportation involves obvious vibration throughout, which would otherwise deform and blur the picture shot by the camera; 3) fog penetration: ports often encounter heavy fog and haze, which interfere with the pictures shot by the camera; 4) waterproofing and lightning protection: to avoid damage to the camera in thunderstorms; 5) antistatic and electromagnetic-interference protection: electromagnetic interference from the many high-power electronic devices nearby can disturb the camera's normal operation and imaging; 6) H.264 video coding and compression: the belt conveyor 30 runs fast, and material flow detection needs a real-time video stream for processing; 7) operating temperature and humidity: port temperatures mostly range from -30 °C to 40 °C and humidity from 20% to 90%. Considering all these requirements, for example, a Hikvision smart IPC 2-megapixel H.265 infrared bullet network camera, model DS-2CD5A27EFWD, can be selected. The camera is installed directly above the belt and pointed vertically downward, which ensures that the material flow shape it captures is regular and facilitates detection; the material flow shape in the captured picture is then rectangular, giving the best effect. In addition, because the image acquisition device 10 is arranged directly above the belt and relatively far from the belt conveyor 30, it is convenient to maintain later, saving manpower and material resources. The image acquisition device 10 can be communicatively connected to a terminal 20, such as an operating desk in the harbor control room, and the following steps of the belt material flow detection method are executed by the operating system in the harbor control room.
In view of the above-mentioned problems, embodiments of the present application provide a belt material flow detection method applicable to the above application scenarios, as shown in fig. 2 to 6, the method including:
s30: acquiring a test sample image and a training sample image;
s40: marking the training sample image to obtain training sample data, wherein the marked content comprises the position of the material flow;
the training sample data set for material flow detection needs to contain different material flow conditions under various environments, so that a large amount of video data needs to be acquired. In order to better ensure the accuracy of the trained model, the collected material flow video data can be preliminarily screened, namely the video data under the environments of water mist, smoke, sunlight interference, illumination lamp interference, shadow interference and the like are selected. The video data with sundries, water and the like on the belt and other types of interference such as workers nearby need to be selected. The training sample images after the initial screening are used for carrying out the subsequent model training, so that the detection precision and reliability of the belt material flow detection method under a plurality of complex environments can be guaranteed.
S60: training a pre-constructed material flow detection model based on training sample data to obtain each model weight, and determining the material flow detection model corresponding to each model weight;
s70: inputting the test sample image into each material flow detection model to evaluate the accuracy of the material flow detection model corresponding to each model weight;
s80: determining a target material flow detection model according to the accuracy rate evaluation result of each material flow detection model;
s90: and inputting the image to be detected into a target material flow detection model to obtain material flow information of the image to be detected.
According to the belt material flow detection method provided by the embodiment of the application, the collected sample images are divided into test sample images and training sample images. The training sample images are first labeled to obtain training sample data; the pre-constructed material flow detection model is trained using deep learning, and the material flow detection model under each model weight is determined. To further improve the accuracy of the detection model, the accuracy of each material flow detection model is evaluated with the test sample images, and the detection model with the highest accuracy is selected as the target material flow detection model, which guarantees the high accuracy and reliability of the material flow information detected in the image to be detected with this model.
Applying computer-vision deep learning to belt material flow detection replaces traditional hardware detection. By collecting a large amount of material flow detection image data in complex scenes, the material flow detection model extracts, fuses and constructs material flow depth features by self-learning, without hand-designed features. The constructed target material flow detection model adapts strongly to various environments, improves detection accuracy in complex environments, effectively solves the problems of missed and false detections of material flow, improves operating efficiency, guarantees equipment safety, increases equipment utilization, reduces later-maintenance difficulty, and saves manpower and material resources.
In one embodiment, the step S40 of labeling the training sample image to obtain training sample data includes:
s41: and marking the position of the material flow in the training sample image by using a marking frame to obtain the position information of the material flow, wherein the position information of the material flow comprises the position information of the marking frame. Optionally, the material flow in each image falls into the corresponding labeling frame, and a gap is formed between the border of each labeling frame and the material flow boundary corresponding to the border.
In one embodiment, the training of the pre-constructed material flow detection model based on the training sample data to obtain each model weight, and the step of determining the material flow detection model corresponding to each model weight includes:
s61: according to the basic network initial value, pre-training a pre-constructed material flow detection model based on training sample data to obtain pre-training weights of the models, and determining the initial material flow detection model corresponding to each pre-training weight;
s62: according to the pre-training weight, performing formal training on each initial material flow detection model based on training sample data to obtain a formal training weight of each model;
s63: and determining the material flow detection model corresponding to the formal training weight of each model.
Specifically, according to the initial value of the basic network, pre-training a pre-constructed material flow detection model by using training sample data to obtain pre-training weight; and according to the pre-training weight, applying training sample data to carry out formal training on the initial material flow detection model to obtain formal training weights and determining the material flow detection model corresponding to each formal training weight. Optionally, after obtaining the formal training weight, the weight of the result is saved according to the storage manner described in the above embodiment.
The training process is divided into pre-training and formal training. Pre-training applies the weights of an already-trained image classification model to a similar task: taking the base network initial values as initial parameters yields the pre-training weights. By using the parameter priors of a previously trained base network in place of random weight initialization, all earlier layers can in principle be frozen and only the last layer retrained, which greatly speeds up training. In the general case, however, the distribution of the newly obtained training sample data deviates from that of the original training set, so when the prior network cannot fully fit the new data, most of the network is frozen and only the last few layers are retrained and re-parameterized (so-called fine-tuning).
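A short sketch can illustrate the freezing idea. This is only an illustration under the assumption of a PyTorch-style model with ordered child modules; the patent itself trains with the Darknet framework, and the module split and function name are hypothetical.

```python
import torch.nn as nn

def freeze_for_fine_tuning(model: nn.Module, trainable_last: int = 2):
    """Freeze all but the last `trainable_last` child modules so that only
    the final layers are retrained on the new material flow data."""
    children = list(model.children())
    for child in children[:-trainable_last]:
        for param in child.parameters():
            param.requires_grad = False  # frozen: keeps the pre-trained prior
    # The remaining children keep requires_grad=True and are updated during
    # formal training (the "fine tune" step described above).
```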
In one embodiment, the step S70 of inputting the test sample image to each flow detection model to evaluate the accuracy of the flow detection model corresponding to each weight includes:
s71: inputting the test sample image into each material flow detection model to obtain a detection result;
s72: calculating the accuracy and the recall rate of each material flow detection model according to the detection result;
s73: calculating the average accuracy of each material flow detection model according to the accuracy and the recall rate;
the step S80 of determining the target stream detection model according to the accuracy evaluation result of each stream detection model includes:
s81: and determining the material flow detection model with the highest average accuracy as the target material flow detection model.
In one embodiment, the detection result includes position information of the prediction box; inputting the test sample image to each of the flow detection models, and the step S71 of obtaining the detection result includes:
s711: and inputting the test sample image into each material flow detection model to obtain the position information of the prediction frame, wherein the position information of the prediction frame is used for representing the material flow position detected in the test.
Specifically, the trained material flow detection models are tested with the test sample images. The RTSP (Real Time Streaming Protocol) video stream is read through OpenCV, and the test sample images obtained by frame extraction and classification are fed to each material flow detection model for testing to obtain detection results. For example, the information of the detected material flow region can be further processed to obtain the accuracy and recall: taking the region of the prediction box as the target, left_x denotes the leftmost abscissa of the region, top_y the uppermost ordinate, and width and height the width and height of the prediction box. In one specific application scenario, the detection speed is about 55 FPS with a confidence of about 100%. The weights of all trained material flow detection models are then evaluated in turn; the weight with the highest mAP (mean Average Precision) is selected as the final material flow detection model weight, and the model corresponding to that weight is the target material flow detection model.
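A minimal sketch of such a test loop is given below. It assumes a hypothetical detect() helper that wraps the trained model and returns (class name, confidence, box) tuples; neither the helper nor the RTSP address comes from the patent text.

```python
import cv2

from detector import detect  # hypothetical wrapper around the trained model

stream = cv2.VideoCapture("rtsp://user:password@camera-address/stream")
while stream.isOpened():
    ok, frame = stream.read()
    if not ok:
        break
    # detect() is assumed to return (class_name, confidence,
    # (left_x, top_y, width, height)) tuples as described above.
    for class_name, confidence, (left_x, top_y, width, height) in detect(frame):
        cv2.rectangle(frame, (left_x, top_y),
                      (left_x + width, top_y + height), (0, 255, 0), 2)
stream.release()
```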
In one embodiment, the step S72 of calculating the accuracy and the recall of each stream detection model according to the detection result includes:
s721: and according to the position information of the labeling frame and the position information of the prediction frame, calculating the accuracy and the recall ratio of each material flow detection model by using the following formulas:
the Precision of each material flow detection model is Precision, the Recall of each material flow detection module is called, TP is the number of prediction frames in the prediction frames, the intersection ratio of the prediction frames to the marking frames is larger than a set proportion threshold, FP is the number of prediction frames in the prediction frames, the intersection ratio of the prediction frames to the marking frames is smaller than the set proportion threshold, and FN is the number of marking frames in the marking frames, which are not predicted.
TP (True Positive, a correct detection) means that the IoU (Intersection over Union) of a prediction box with a labeling box is greater than the set proportion threshold, i.e., the number of prediction boxes that are classified correctly and whose bounding-box coordinates are correct. FP (False Positive, an erroneous detection) means that the IoU of a prediction box with the labeling boxes is smaller than the set proportion threshold, i.e., the number of prediction boxes that are misclassified or whose bounding-box coordinates do not reach the standard; in other words, the prediction boxes remaining after the correctly predicted ones are excluded. FN (False Negative) is the number of labeling boxes that are not matched by any prediction, i.e., the labeled bounding boxes remaining after the correctly predicted ones are excluded.
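A sketch of this counting is shown below. Boxes are assumed to be (x1, y1, x2, y2) corner pairs, and a greedy one-to-one matching rule is assumed because the patent does not specify how prediction boxes are paired with labeling boxes.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def precision_recall(pred_boxes, label_boxes, threshold=0.5):
    """Count TP/FP/FN at the set proportion (IoU) threshold and return
    Precision = TP/(TP+FP) and Recall = TP/(TP+FN)."""
    matched, tp = set(), 0
    for pred in pred_boxes:
        candidates = [i for i in range(len(label_boxes)) if i not in matched]
        best = max(candidates, key=lambda i: iou(pred, label_boxes[i]),
                   default=None)
        if best is not None and iou(pred, label_boxes[best]) > threshold:
            matched.add(best)  # correct detection: TP
            tp += 1
    fp = len(pred_boxes) - tp   # prediction boxes below the threshold
    fn = len(label_boxes) - tp  # labeling boxes never predicted
    precision = tp / (tp + fp) if pred_boxes else 0.0
    recall = tp / (tp + fn) if label_boxes else 0.0
    return precision, recall
```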
In one embodiment, the step S73 of calculating the average accuracy of each stream detection model based on the accuracy and the recall ratio includes:
s731: obtaining the accuracy and the recall rate of each material flow detection model under different set proportion thresholds;
s732: obtaining a precision-recall curve for each material flow detection model according to its accuracy and recall under different set proportion thresholds;
s733: calculating the average accuracy of each material flow detection model according to its precision-recall curve.
The mAP can be calculated from Precision and Recall. Precision and Recall are first obtained under different set proportion thresholds; a curve is then drawn from these Precision-Recall pairs and smoothed, i.e., for each point on the curve the precision value is replaced by the maximum precision to its right. The area enclosed by the final curve and the coordinate axes is the mAP. In a specific example, the set proportion threshold may be 0.5, and the weight with a calculated mAP of 0.997496 is used as the optimal weight of the material flow detection model; the corresponding model is the target material flow detection model.
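A sketch of the smoothing and integration follows. It assumes the precision-recall pairs have already been collected; the sentinel endpoints at recall 0 and 1 are a common convention rather than something stated in the patent.

```python
import numpy as np

def average_precision(precisions, recalls):
    """Area under the smoothed precision-recall curve: each precision value
    is replaced by the maximum precision to its right, then the resulting
    step curve is integrated over recall."""
    order = np.argsort(recalls)
    r = np.concatenate(([0.0], np.asarray(recalls, float)[order], [1.0]))
    p = np.concatenate(([1.0], np.asarray(precisions, float)[order], [0.0]))
    for i in range(len(p) - 2, -1, -1):  # right-max smoothing
        p[i] = max(p[i], p[i + 1])
    return float(np.sum((r[1:] - r[:-1]) * p[1:]))
```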
In one embodiment, the training sample images comprise:
a first material flow image, which is a sample image with obvious material flow;
a second material flow image, which is a sample image with thin material flow;
a third material flow image, which is a sample image without material flow.
In order to further improve the detection accuracy of the trained model under different material flows, the preliminarily screened video data can be screened and classified a second time, dividing it into three types according to the amount of material flow: obvious material flow, thin material flow, and no material flow. These three types of video are then processed by frame separation or key-frame extraction and correspondingly preprocessed into the first, second, and third material flow images.
In one embodiment, in order for the model to be sufficiently trained, the ratio of the numbers of first, second, and third material flow images should be approximately 1:1:2, so that the number of samples with material flow and the number of samples without material flow are roughly equal. The model can then be fully trained on both the with-flow and without-flow cases, the target detection model trained on these training sample images is applicable to material flow detection in various scenes, and the accuracy of the detection result is further improved. Optionally, each of the three image types contains no fewer than 3000 images, which ensures the training effect.
In one embodiment, in order to better implement storage and fast retrieval of data with different attributes, a file directory for the data set is constructed in advance to store the different files before the training sample images are labeled to generate the training sample data for material flow detection. That is, in the belt material flow detection method provided in the embodiment of the present application, the step S40 of labeling the training sample image to obtain training sample data further includes:
and labeling the training sample image to obtain training sample data, and storing the training sample image and the training sample data in different storage areas.
Specifically, the training sample images can be placed under the JPEGImages file directory, and the image labeling software LabelImg is then used to label them. During use, LabelImg requires the class name of the material flow (Coal) as well as the region in which the material flow is located. The labeled region must contain the entire material flow, with a certain gap left between the box border and the material flow boundary. After each training sample picture is labeled, a text file in xml format (Extensible Markup Language) is generated; in a specific example the labeled content includes the position of the material flow and may also include the name of the sample material flow, and the files are placed under the Annotations file directory. Reasonably planning the data storage layout further improves the speed of data storage and extraction and ensures the accuracy and reliability of data reading.
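The generated xml files follow the Pascal VOC layout that LabelImg writes, so they can be read back with the standard library, as in the sketch below; the tag names are those of the VOC format, not ones quoted in the patent.

```python
import xml.etree.ElementTree as ET

def read_annotation(xml_path):
    """Parse one LabelImg xml file and return (name, x1, y1, x2, y2) tuples
    for every labeled material flow region."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")  # e.g. "Coal"
        bb = obj.find("bndbox")
        boxes.append((name,
                      int(bb.findtext("xmin")), int(bb.findtext("ymin")),
                      int(bb.findtext("xmax")), int(bb.findtext("ymax"))))
    return boxes
```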
In one embodiment, before the step S30 of acquiring the test sample image and the training sample image, the method further includes:
s10: stream video data is collected.
S20: and carrying out frame extraction on the material flow video data according to a preset rule to obtain original sample image data.
The preset rule is a frame extraction rule agreed in advance, for example, frame-by-frame extraction, or two-by-two frame extraction, and the preset rule can be set according to requirements in an actual application scene.
The step S30 of acquiring the test sample image and the training sample image includes:
s31: the raw sample image data is divided into a test sample image and a training sample image.
In an actual application process, video is often acquired directly, e.g., with an image acquisition device; it needs to be processed into images, and frame extraction such as frame-by-frame or every-other-frame extraction is performed on the result to obtain the original sample image data.
Specifically, the collection of material flow video data on the belt can directly read the Network Video Recorder (NVR) of video equipment such as Hikvision devices; alternatively, the RTSP (Real Time Streaming Protocol) camera video stream can be read through OpenCV (a BSD-licensed, open-source, cross-platform computer vision and machine learning software library) and each frame saved as an image.
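A sketch of the OpenCV frame-saving path is given below; the RTSP address and output file naming are illustrative only.

```python
import cv2

def extract_frames(rtsp_url, out_dir, step=1):
    """Read the camera video stream and save every `step`-th frame as an
    image (step=1 is frame-by-frame extraction, step=2 every other frame)."""
    cap = cv2.VideoCapture(rtsp_url)
    index = saved = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_dir}/frame_{saved:06d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
```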
Because training sample data and test sample images serve different purposes, in the above embodiment the original sample images are divided so that the training sample image set is larger than the test sample image set; for example, the ratio of the training set to the test set may be 9:1. The divided training sample image set is used for model training, and the divided test sample image set is used to evaluate the accuracy of the material flow detection models. Further, the file names of the samples in the training and test sets may be stored in the train.txt and test.txt files, respectively, i.e., data is again stored in separate storage areas as mentioned in the above embodiments.
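A sketch of the 9:1 split and the train.txt/test.txt bookkeeping follows; the .jpg filter and the fixed random seed are assumptions. With a 9:1 ratio, 10,000 extracted frames would yield 9,000 training and 1,000 test images.

```python
import os
import random

def split_dataset(image_dir, train_ratio=0.9, seed=0):
    """Divide the original sample images into a training set and a test set
    and record the sample file names in train.txt and test.txt."""
    names = sorted(f for f in os.listdir(image_dir) if f.endswith(".jpg"))
    random.Random(seed).shuffle(names)
    cut = int(len(names) * train_ratio)
    with open("train.txt", "w") as f:
        f.write("\n".join(names[:cut]))
    with open("test.txt", "w") as f:
        f.write("\n".join(names[cut:]))
```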
In one embodiment, the material flow information of the image to be detected comprises material flow position information and confidence coefficient of the image to be detected; the step S90 of inputting the image to be detected to the target material flow detection model to obtain the material flow information of the image to be detected includes:
s91: preprocessing an image to be detected;
s92: extracting material flow characteristics in the preprocessed image to be detected;
s93: fusing the characteristics of all material flows through a characteristic fusion network to obtain fusion characteristics;
s94: and determining the material flow position information and the confidence coefficient of the image to be detected according to the fusion characteristics.
The pre-constructed material flow detection model may be an image recognition model of the "You Only Look Once" (YOLO) series, or may be R-CNN (Regions with CNN Features), Fast R-CNN, Faster R-CNN, Mask R-CNN and the like, which is not limited in the embodiments of the present application.
The material flow detection implementation process is illustrated by taking the pre-constructed material flow detection model to be the YOLO-v4 material flow detection model; it should be emphasized that the material flow detection model provided in the embodiments of the present application is not limited to the YOLO series. The YOLO-v4 material flow detection model comprises an input layer, a base network, a feature fusion network, and a prediction layer.
The basic network can use a CSPDarknet53 network, the feature fusion network can use a network structure of a FPN + PAN network, and the prediction layer outputs position information and confidence of the material flow in the image to be detected, wherein the position information is used for determining the size of the material flow.
The input layer preprocesses the image to be detected before it is passed to the network so that it meets the input requirements of the YOLO image recognition model. For example, before an image of size 1920 × 1080 is passed to the network, it is scaled to 608 × 342, i.e., the wide side is scaled to 608 and the narrow side is scaled in equal proportion. The scaled image is then padded to 608 × 608 to meet the input format requirement of the YOLO-v4 model.
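A sketch of this letterbox preprocessing follows; the gray pad value and centered padding are assumptions, since the patent only specifies the scaled and padded sizes.

```python
import cv2
import numpy as np

def letterbox(image, target=608, pad_value=128):
    """Scale the wide side to `target` with the narrow side in equal
    proportion (1920x1080 -> 608x342), then pad to target x target."""
    h, w = image.shape[:2]
    scale = target / max(h, w)
    resized = cv2.resize(image, (int(round(w * scale)), int(round(h * scale))))
    canvas = np.full((target, target, 3), pad_value, dtype=image.dtype)
    top = (target - resized.shape[0]) // 2
    left = (target - resized.shape[1]) // 2
    canvas[top:top + resized.shape[0], left:left + resized.shape[1]] = resized
    return canvas
```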
The base network (Backbone) is the part of the network architecture that extracts image features. Its high-level feature maps (Feature maps) have a large receptive field and strong semantic representation capability (e.g., appearance details of an object), but low resolution and weak geometric information, lacking geometric detail, so they suit image classification. Its low-level feature maps have a small receptive field and strong geometric-detail representation capability (e.g., the edge shape of an object) and high resolution, but weak semantic representation, so they suit target detection. The base network adopted by the YOLO-v4 image recognition model is CSPDarknet53, which is built from several basic components, mainly CBL, CBM, Res unit, CSPX, and SPP. CBL consists of Conv, BN, and the Leaky ReLU activation function, and CBM consists of Conv, BN, and the Mish activation function; SPP uses 4 parallel k × k max-pooling layers, where k takes the values 1, 5, 9, 13.
In detail, the constitution and function of each basic component are as follows:
1) CBL: the smallest building block of the YOLO-v4 network architecture, consisting of Conv, BN and the Leaky ReLU activation function;
2) CBM: the smallest building block of the YOLO-v4 network architecture, consisting of Conv, BN and the Mish activation function;
3) Res unit: the classical residual structure of the ResNet family of networks, which effectively strengthens information flow between layers while preventing gradient vanishing and gradient explosion during network training;
4) ResX: consists of one CBL and X residual components; the CBL in front of each Res module performs downsampling, halving the size of the preceding feature map;
5) CSPX: adopts the CSPNet backbone structure, effectively reducing the heavy computation of the inference process;
6) SPP: uses 4 parallel k × k max-pooling layers (1 × 1, 5 × 5, 9 × 9, 13 × 13), separating high-level and low-level features to some extent (see the sketch after this list).
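The SPP block can be sketched as follows. PyTorch is used only for illustration (the patent's model is built in Darknet); stride-1 pooling with k//2 padding keeps all four branches at the input resolution so they can be concatenated.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Four parallel k x k max-pooling branches (k = 1, 5, 9, 13) whose
    outputs are concatenated along the channel dimension."""
    def __init__(self, kernel_sizes=(1, 5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):
        return torch.cat([pool(x) for pool in self.pools], dim=1)
```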
In a specific example, FIG. 7 shows the network model architecture of YOLO-v4; FIG. 8 shows the Backbone basic module CBL of YOLO-v4; FIG. 9 the Backbone basic module CBM; FIG. 10 the Backbone basic module Res unit; FIG. 11 the Backbone basic module CSPX; FIG. 12 the Backbone basic module SPP; FIG. 13 the Neck basic components FPN and PAN; and FIG. 14 a curve of the overall training process of one material flow detection model.
The feature fusion network (Neck) is the transition between the base network (Backbone) and the prediction layer (Prediction). Its main role is feature fusion: it fuses the feature maps of the Backbone, combining low-level geometric information with high-level semantic information, which preserves both the semantic representation capability and the high resolution of the feature maps and improves the detection effect. The feature fusion networks adopted by YOLO-v4 are FPN (Feature Pyramid Network) and PAN (Path Aggregation Network).
The prediction layer (Prediction) outputs the prediction result, i.e., the material flow information. The prediction result comprises five-dimensional parameters: the position information (x, y, w, h) and the confidence of the detected material flow. The confidence represents both how certain it is that an object is actually present in the output detection region (which may be the image acquisition region of the image acquisition device) and how certain it is that the region contains the object to be detected; it can be understood as an accuracy measure and is calculated as:

Confidence = Pr(Object) × IOU(truth, pred) × Pr(Class_i | Object)

where Pr(Object) indicates whether the detection region contains an object to be detected (1 if yes, 0 if no), IOU(truth, pred) is the intersection-over-union of the prediction result with the manually labeled sample, and Pr(Class_i | Object) is the probability that the detected object belongs to a given class (e.g., coal). The confidence is the product of the three.
In a specific example, in the pre-training of the material flow detection model, the total number of pre-training iterations may be 1000 and the initial learning rate learning_rate may be set to 0.001; when the number of iterations reaches 800 and 900, the learning rate is reduced to 0.0001 and 0.00001 respectively, after which the pre-training weights of the material flow detection model are obtained.
The total number of formal-training iterations of the material flow detection model can be 20000, and a dual-GPU training mode can be adopted, so the learning rate can be set to learning_rate / GPUs = 0.001 / 2 = 0.0005. When the number of iterations reaches 16000 and 18000, the learning rate is reduced to 0.00025 and 0.000125 respectively, at which point the material flow detection model weight files are obtained (determining the material flow detection model corresponding to each formal training weight).
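The step schedule can be written as a small function; the scale factors 0.25 and 0.5 are inferred from the quoted learning-rate values rather than stated in the patent.

```python
def formal_training_lr(iteration, base_lr=0.001,
                       steps=(16000, 18000), scales=(0.25, 0.5)):
    """Step learning-rate schedule: 0.001 until iteration 16000, 0.00025
    until 18000, and 0.000125 afterwards."""
    lr = base_lr
    for step, scale in zip(steps, scales):
        if iteration >= step:
            lr *= scale
    return lr
```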
To better explain the implementation of the belt material flow detection method provided by the embodiments of the present application, take the case where the material flow detection model is the YOLO-v4 image recognition model: in the above embodiment, after the training sample images are labeled to generate training sample data and converted into xml files, the xml-format training sample data set can be converted into the txt format required by YOLO-v4 training through a label-conversion script (e.g., voc_label.py).
In one embodiment, each txt file may contain: number of streams and relative positions of streams.
Wherein the relative position of the material flow is a value obtained by normalizing the absolute position, and the step of obtaining the relative position of the material flow comprises the following steps:
the coordinates of the bottom left corner and the top right corner of the labeled region (the region enclosed by the label box) of the stream are taken to be (x1, y1) (x2, y2), respectively, and the width w and height h of the label box (i.e., the width and height of the image are w, h, respectively).
The normalized relative x-coordinate of the center point is calculated as (x1 + x2) / 2 / w.
The normalized relative y-coordinate of the center point is calculated as (y1 + y2) / 2 / h.
The normalized relative width of the material flow is calculated as (x2 - x1) / w.
The normalized relative height of the material flow is calculated as (y2 - y1) / h.
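A minimal sketch of this normalization follows; the function name is illustrative, and note that YOLO-format txt lines additionally begin with a class index:

```python
def to_yolo_relative(x1, y1, x2, y2, img_w, img_h):
    """Convert an absolute label box (bottom-left and top-right corners)
    into normalized (x_center, y_center, width, height)."""
    x_center = (x1 + x2) / 2 / img_w
    y_center = (y1 + y2) / 2 / img_h
    width = (x2 - x1) / img_w
    height = (y2 - y1) / img_h
    return x_center, y_center, width, height

# Example: a 200x100-pixel label box in a 1920x1080 image
print(to_yolo_relative(860, 490, 1060, 590, 1920, 1080))
# (0.5, 0.5, 0.104..., 0.0925...)
```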
In one embodiment, the belt flow information may include the location of the flow, the size of the flow, the name of the flow, and the like. In a specific example, fig. 15 shows a schematic diagram of the flow detection effect.
In the belt material flow detection method described above, methods from the field of computer-vision deep learning are applied to belt material flow detection in place of traditional hardware detection. By collecting a large amount of material flow image data from complex scenes, the material flow detection model learns to extract, fuse and construct material flow depth features on its own, with no need for manually designed features. The resulting material flow detection model adapts well to a variety of environments and improves detection accuracy in complex environments, effectively solving the problems of missed and false detections in material flow detection. It improves operating efficiency, guarantees equipment safety, increases equipment utilization, reduces later maintenance difficulty, and saves manpower and material resources.
It should be understood that, although the steps in the flowcharts of figs. 2-6 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, there is no strict restriction on the order of these steps, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2-6 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
An embodiment of the present application further provides a material flow detection device, as shown in fig. 16, the device includes:
a sample image obtaining module 100, configured to obtain a test sample image and a training sample image;
the sample labeling module 200 is configured to label a training sample image to obtain training sample data, where the labeled content includes a position of a material flow;
the model training module 300 is configured to train a pre-constructed material flow detection model based on training sample data to obtain each model weight, and determine a material flow detection model corresponding to each model weight;
the test module 400 is used for inputting the test sample image into each material flow detection model to evaluate the accuracy of the material flow detection model corresponding to each model weight;
a target material flow detection model selection module 500, configured to determine a target material flow detection model according to the accuracy evaluation result of each material flow detection model;
and the material flow detection implementation module 600 is configured to input the image to be detected to the target material flow detection model, so as to obtain material flow information of the image to be detected.
Specifically, in the belt material flow detection device provided in the embodiments of the present application, the sample image acquisition module 100 acquires a test sample image and a training sample image, and the sample labeling module 200 then labels the training sample image to obtain training sample data; the model training module 300 trains the pre-constructed material flow detection model on the training sample data to obtain each model weight, and determines the material flow detection model corresponding to each model weight; the test module 400 inputs the test sample image acquired by the sample image acquisition module 100 into each material flow detection model to evaluate the accuracy of the material flow detection model corresponding to each model weight; finally, the target material flow detection model selection module 500 determines a target material flow detection model according to the accuracy evaluation results, and the material flow detection implementation module 600 inputs the image to be detected into the target material flow detection model to obtain the material flow information of the image to be detected. Applying deep learning technology to material flow detection can greatly improve detection efficiency and precision. Because the images are acquired by an image acquisition device that can be placed relatively far from the belt conveyor, missed or false detections caused by damaged hardware are avoided and detection reliability is improved. In addition, maintenance of the carrier equipment that executes the steps of the method does not require stopping the belt, so normal conveying of materials such as coal is guaranteed.
For specific limitations of the belt material flow detection device, reference may be made to the limitations of the belt material flow detection method above, which are not repeated here. Each module in the belt material flow detection device may be implemented wholly or partially in software, hardware, or a combination of the two. The modules may be embedded in hardware form in, or independent of, the processor of the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module. It should further be emphasized that the modules above, and other unit modules of the belt material flow detection device not listed here, can correspondingly perform the steps in the method embodiments above and achieve the corresponding beneficial effects, which are not repeated here.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 17. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for performing wired or wireless communication with an external terminal (for example, a terminal such as a computer of a manager or a control console in a port control room), and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a belt flow detection method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 17 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
S30: acquiring a test sample image and a training sample image;
S40: marking the training sample image to obtain training sample data, wherein the marked content comprises the position of the material flow;
S60: training a pre-constructed material flow detection model based on training sample data to obtain each model weight, and determining the material flow detection model corresponding to each model weight;
S70: inputting the test sample image into each material flow detection model to evaluate the accuracy of the material flow detection model corresponding to each model weight;
S80: determining a target material flow detection model according to the accuracy rate evaluation result of each material flow detection model;
S90: inputting the image to be detected into a target material flow detection model to obtain material flow information of the image to be detected.
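As a minimal self-contained sketch of the evaluation and selection in S70-S80, suppose the per-model TP/FP/FN counts have already been obtained by matching prediction boxes to label boxes on the test set; the counts, weight-file names, and the use of the F1 score as a single-number stand-in for the average accuracy described in this application are all illustrative:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Illustrative test-set results, one entry per trained weight file (S60/S70)
results = {
    "weights_16000": (88, 10, 12),   # (TP, FP, FN)
    "weights_18000": (93, 6, 7),
    "weights_20000": (91, 9, 9),
}

def f1(name):
    p, r = precision_recall(*results[name])
    return 2 * p * r / (p + r) if (p + r) else 0.0

# S80: the weight file scoring highest becomes the target model
target_model = max(results, key=f1)
print(target_model)  # weights_18000
```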
The embodiments of the present application further provide a belt material flow detection system, as shown in fig. 1, including a computer device 20 and an image acquisition apparatus 10. The image acquisition apparatus 10 is used to acquire the test sample image, the training sample image and the image to be detected; the computer device 20 is communicatively connected to the image acquisition apparatus 10 and includes a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the belt material flow detection method when executing the computer program. The selection criteria of the image acquisition apparatus 10 can refer to those in the application scenarios listed in the above embodiments. The belt material flow detection system follows the same scheme as the method embodiments: introducing deep learning technology into material flow detection improves detection efficiency and precision, and when a component of the system fails, the belt does not need to be stopped for maintenance, so the efficiency of conveying materials by belt is unaffected. In addition, because the image acquisition apparatus 10 is used to acquire the sample images, compared with the contact switches or ultrasonic switches of the prior art, the image acquisition apparatus 10 can be installed apart from the belt conveyor 30, which prevents the belt conveyor 30 from damaging the equipment of the detection system while conveying material, further reducing the risk of equipment failure and lowering costs.
In one embodiment, the computer device 20 of the belt flow detection system is also communicatively coupled to a remote terminal. The remote terminal may be a computer of a port control room or the like.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
S30: acquiring a test sample image and a training sample image;
S40: marking the training sample image to obtain training sample data, wherein the marked content comprises the position of the material flow;
S60: training a pre-constructed material flow detection model based on training sample data to obtain each model weight, and determining the material flow detection model corresponding to each model weight;
S70: inputting the test sample image into each material flow detection model to evaluate the accuracy of the material flow detection model corresponding to each model weight;
S80: determining a target material flow detection model according to the accuracy rate evaluation result of each material flow detection model;
S90: inputting the image to be detected into a target material flow detection model to obtain material flow information of the image to be detected.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (12)
1. A belt material flow detection method, the method comprising:
acquiring a test sample image and a training sample image;
marking the training sample image to obtain training sample data, wherein the marked content comprises the position of the material flow;
training a pre-constructed material flow detection model based on the training sample data to obtain each model weight, and determining the material flow detection model corresponding to each model weight;
inputting the test sample image into each material flow detection model to evaluate the accuracy rate of the material flow detection model corresponding to each model weight;
determining a target material flow detection model according to the accuracy rate evaluation result of each material flow detection model;
and inputting the image to be detected into the target material flow detection model to obtain the belt material flow information of the image to be detected.
2. The method of claim 1, wherein the step of labeling the training sample image to obtain training sample data comprises:
and marking the position of the material flow in the training sample image by using a marking frame to obtain the position information of the material flow, wherein the position information of the material flow comprises the position information of the marking frame.
3. The method of claim 1, wherein the step of training a pre-constructed material flow detection model based on the training sample data to obtain each model weight, and determining the material flow detection model corresponding to each model weight comprises:
pre-training a pre-constructed material flow detection model based on the training sample data according to the initial value of a basic network to obtain pre-training weights of the models, and determining the initial material flow detection model corresponding to each pre-training weight;
according to the pre-training weight, performing formal training on each initial material flow detection model based on the training sample data to obtain a formal training weight of each model;
and determining the material flow detection model corresponding to the formal training weight of each model.
4. The method of claim 2, wherein the step of inputting the test sample image into each material flow detection model to evaluate the accuracy of the material flow detection model corresponding to each model weight comprises:
inputting the test sample image into each material flow detection model to obtain a detection result;
calculating the accuracy and the recall rate of each material flow detection model according to the detection result;
calculating an average accuracy rate of each material flow detection model according to the accuracy rate and the recall rate;
the step of determining the target material flow detection model according to the accuracy rate evaluation result of each material flow detection model comprises the following steps:
and determining the material flow detection model with the highest average accuracy rate as the target material flow detection model.
5. The method of claim 4, wherein the detection result comprises position information of a prediction box; inputting the test sample image into each material flow detection model, and obtaining a detection result comprises the following steps:
and inputting the test sample image into each material flow detection model to obtain the position information of a prediction frame, wherein the position information of the prediction frame is used for representing the material flow position detected in the test.
6. The method of claim 5, wherein the step of calculating the accuracy and recall of each of the flow detection models based on the detection results comprises:
according to the position information of the labeling frame and the position information of the prediction frame, calculating the accuracy and the recall rate of each material flow detection model by using the following formulas:

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

wherein Precision is the accuracy of each material flow detection model, Recall is the recall rate of each material flow detection model, TP is the number of prediction frames whose intersection ratio with a labeling frame is greater than a set proportion threshold, FP is the number of prediction frames whose intersection ratio with a labeling frame is less than the set proportion threshold, and FN is the number of labeling frames that are not predicted.
7. The method of claim 6, wherein the step of calculating an average accuracy rate of each material flow detection model according to the accuracy rate and the recall rate comprises:
obtaining the accuracy rate and the recall rate of each material flow detection model under different set proportion thresholds;
plotting an accuracy-recall (precision-recall) curve for each material flow detection model according to the accuracy rates and recall rates under the different set proportion thresholds;
and calculating the average accuracy rate of each material flow detection model according to the plotted precision-recall curve.
8. The method of claim 1, wherein the training sample image comprises:
a first material flow image, the first material flow image being a sample image in which the material flow is evident;
a second material flow image, the second material flow image being a sample image with a thin material flow;
a third material flow image, the third material flow image being a sample image without material flow.
9. The method of any one of claims 1 to 8, further comprising, prior to the step of obtaining the test sample image and the training sample image:
collecting material flow video data;
performing frame extraction on the material flow video data according to a preset rule to obtain original sample image data;
the step of obtaining the test sample image and the training sample image comprises:
and dividing the original sample image data into the test sample image and the training sample image.
10. The method according to claim 1, wherein the belt material flow information of the image to be detected comprises material flow position information and confidence of the image to be detected; inputting an image to be detected into the target material flow detection model, and obtaining the belt material flow information of the image to be detected, wherein the method comprises the following steps:
preprocessing the image to be detected;
extracting material flow characteristics in the preprocessed image to be detected;
fusing the characteristics of the material flows through a characteristic fusion network to obtain fusion characteristics;
and determining the material flow position information and the confidence coefficient of the image to be detected according to the fusion characteristics.
11. A belt flow detection apparatus, the apparatus comprising:
the sample image acquisition module is used for acquiring a test sample image and a training sample image;
the sample marking module is used for marking the training sample image to obtain training sample data, wherein the marked content comprises the position of the material flow;
the model training module is used for training a pre-constructed material flow detection model based on the training sample data to obtain each model weight and determining the material flow detection model corresponding to each model weight;
the test module is used for inputting the test sample image into each material flow detection model so as to evaluate the accuracy of the material flow detection model corresponding to each model weight;
the target material flow detection model selection module is used for determining a target material flow detection model according to the accuracy rate evaluation result of each material flow detection model;
and the material flow detection realization module is used for inputting the image to be detected into the target material flow detection model to obtain the belt material flow information of the image to be detected.
12. A belt material flow detection system is characterized by comprising computer equipment and an image acquisition device; the image acquisition device is arranged above the belt and is used for acquiring the test sample image, the training sample image and the image to be detected; the computer device is communicatively connected to the image acquisition apparatus, the computer device comprising a memory storing a computer program and a processor implementing the steps of the method according to any one of claims 1 to 10 when the processor executes the computer program.