CN112163481A - Water environment pollution analysis method based on video recognition


Info

Publication number
CN112163481A
Authority
CN
China
Prior art keywords
model
water environment
picture
training
analysis method
Prior art date
Legal status
Pending
Application number
CN202010973000.5A
Other languages
Chinese (zh)
Inventor
程雨涵
梁漫春
刘美丽
曹毅
李梅
钱益武
徐立梅
李楚
王清泉
吴正华
杨思航
Current Assignee
Anhui Zeone Safety Technology Co ltd
Beijing Chen'an Measurement And Control Technology Co ltd
Hefei Institute for Public Safety Research Tsinghua University
Original Assignee
Anhui Zeone Safety Technology Co ltd
Beijing Chen'an Measurement And Control Technology Co ltd
Hefei Institute for Public Safety Research Tsinghua University
Priority date: 2020-09-16
Filing date: 2020-09-16
Publication date: 2021-01-01
Application filed by Anhui Zeone Safety Technology Co ltd, Beijing Chen'an Measurement And Control Technology Co ltd, and Hefei Institute for Public Safety Research Tsinghua University
Priority to CN202010973000.5A
Publication of CN112163481A

Classifications

    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06F 18/2414: Classification techniques based on distances to training or reference patterns; smoothing the distance, e.g. radial basis function networks [RBFN]
    • G06N 3/044: Neural network architectures; recurrent networks, e.g. Hopfield networks
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/08: Neural networks; learning methods
    • G06V 10/56: Extraction of image or video features relating to colour
    • G08B 21/24: Status alarms; reminder alarms, e.g. anti-loss alarms
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The invention provides a water environment pollution analysis method based on video recognition. Working from high-definition video data returned from the water environment site, the method applies a comprehensive video risk recognition model, a color recognition algorithm built on the OpenCV image recognition module, and a three-dimensional spherical distance algorithm in RGB color space to output, quickly and accurately, whether abnormal pollution discharge is occurring at the monitored site and the risk level of that discharge. This overcomes the lag and inefficiency of judging the comprehensive risk of the water environment from water quality and water quantity alone, and remedies the arbitrariness and missed alarms of judging pollution discharge from manually watched monitoring video.

Description

Water environment pollution analysis method based on video recognition
Technical Field
The invention relates to the technical field of water pollution analysis and recognition, and in particular to a water environment pollution analysis method based on video recognition.
Background
At present, water environment pollution alarming relies on water quality and water quantity monitoring, with alarm thresholds set to flag water pollution events and remind supervisors to take emergency measures. Water body pictures of key areas are mainly obtained through video monitoring, and whether an abnormal discharge event has occurred is judged by manually observing the water color; such observation is generally possible only once every one to two hours, so sudden water pollution events are detected with a lag, the subsequent emergency response is not timely, the judgment relies too heavily on personal experience, and events may go unreported. The invention patent application with publication number CN109857046A discloses an intelligent monitoring and management system and method for river water pollution monitoring and early warning, which obtains video data through a second server to monitor the water pollution situation in real time, but it does not disclose how to actually judge the water pollution situation from video recognition and therefore cannot meet the field's requirements for water pollution recognition technology.
Disclosure of Invention
The invention aims to provide a method for automatically identifying the water environment pollution situation from video data.
The invention solves this technical problem through the following technical scheme. A water environment pollution analysis method based on video recognition comprises the following steps:
Step A: acquiring historical data of water environment site monitoring pictures, manually labeling each historical picture as showing abnormal discharge or not, and dividing the labeled data into a training set and a test set in proportion;
Step B: building a deep learning model, inputting the training set into the deep learning model for training, and outputting the trained model when the recognition accuracy of the model meets a preset threshold;
Step C: inputting the test set data into the trained model; if the accuracy does not meet the test threshold, returning to step B, otherwise outputting the model as the recognition model;
Step D: inputting the pictures to be recognized into the recognition model and obtaining the pictures showing abnormal discharge;
Step E: obtaining the RGB values of the pictures showing abnormal discharge, calculating their distance from the background color of the unpolluted water environment with a color space three-dimensional spherical distance algorithm, computing a risk score, and outputting the corresponding risk level.
According to the method, a recognition model that judges from a picture whether abnormal discharge is present is obtained by training a model on historical data, which removes the dependence on workers' personal experience; the model's recognition results grow more accurate as the data volume increases and manual corrections accumulate; the model recognizes and judges in real time, solving the lag of manual recognition so that abnormal situations are found promptly; and the risk score and risk level of an abnormal picture are judged from its RGB color, which intuitively indicates to workers the severity of the pollution and can guide subsequent decisions.
Preferably, in step B a convolutional neural network is used to build the deep learning model, comprising the following steps:
Step I: building the model, comprising an input layer, convolution pooling layer 1, convolution pooling layer 2, a fully connected layer, and an output layer;
Step II: defining the initial weights, bias parameters, filter stride, convolution layer function, pooling layer function, activation function, loss function, optimization function, and model accuracy; the initial weights comprise the weight wc1 of convolution pooling layer 1, the weight wc2 of convolution pooling layer 2, the weight wd1 of the fully connected layer, and the weight wo of the output layer, and the bias parameters comprise the bias bc1 of convolution pooling layer 1, the bias bc2 of convolution pooling layer 2, the bias bd1 of the fully connected layer, and the bias bo of the output layer;
Step III: inputting the training set into the convolutional neural network for training, each round of training outputting the loss and the model accuracy;
Step IV: if the model loss tends to converge and the model accuracy exceeds 90%, ending training and outputting the trained model; otherwise, updating the learning rate parameter, loss function parameter, filter size, and dropout parameter, and returning to step III to continue training.
Preferably, the activation function of the convolutional neural network is the ReLU activation function, and the loss function is a cross-entropy function.
Preferably, the model accuracy in step III is calculated as:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$$

where TP is the number of samples whose true result is A and whose predicted result is A; FN is the number whose true result is A but whose predicted result is not A; TN is the number whose true result is not A and whose predicted result is not A; and FP is the number whose true result is not A but whose predicted result is A.
Preferably, the test threshold for the test set accuracy in step C is 90%.
Preferably, in step D the picture to be recognized is input into the recognition model, and if the probability of abnormal discharge that the recognition model assigns to the current picture exceeds a critical threshold, the current picture to be recognized is considered to show abnormal discharge; the critical threshold lies in the range [60%, 80%].
Preferably, in step E a region is selected within the monitoring picture and its average RGB value is taken as the RGB value of the picture; the current water environment pollution risk score D is then computed from the spherical (Euclidean) distance d between the two colors:

$$d = \sqrt{\bigl(C_1(R)-C_0(R)\bigr)^2 + \bigl(C_1(G)-C_0(G)\bigr)^2 + \bigl(C_1(B)-C_0(B)\bigr)^2}$$

$$D = \frac{100\,d}{255\sqrt{3}}$$

where the RGB value of the water environment color with no pollution event is C0(R), C0(G), C0(B), and the RGB color of the current water environment picture is C1(R), C1(G), C1(B). (Both formulas appear only as images in the original; the distance d follows from the named spherical distance algorithm, while the normalization of D to a 0-100 score is a reconstruction.)
Preferably, the relationship between the risk score D and the risk level is defined as follows:

[Table mapping ranges of the risk score D to risk levels; rendered as an image in the original and not recoverable here.]
the water environment pollution analysis method based on video recognition provided by the invention has the advantages that: the identification model for identifying whether abnormal emission exists according to the picture is obtained through the historical data training model, dependence on personal experience of workers is eliminated, the identification result of the model is more and more accurate along with increase of data volume and manual correction, the model can be identified and judged in real time, the problem of hysteresis of manual identification is solved, and abnormal conditions can be found in time; and the risk score and the risk level of the abnormal picture are judged based on RGB colors, so that the severity of pollution of workers is intuitively reminded, and guidance can be provided for subsequent decision making.
Drawings
Fig. 1 is a flowchart of a water environment pollution analysis method based on video recognition according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawing and the following specific embodiments.
As shown in Fig. 1, this embodiment provides a water environment pollution analysis method based on video recognition, comprising the following steps:
Step A: acquiring historical data of water environment site monitoring pictures, manually labeling each historical picture as showing abnormal discharge or not, and dividing the labeled data into a training set and a test set in proportion;
Step B: building a deep learning model, inputting the training set into the deep learning model for training, and outputting the trained model when the recognition accuracy of the model meets a preset threshold;
Step C: inputting the test set data into the trained model; if the accuracy does not meet the test threshold, returning to step B, otherwise outputting the model as the recognition model;
Step D: inputting the pictures to be recognized into the recognition model and obtaining the pictures showing abnormal discharge;
Step E: obtaining the RGB values of the pictures showing abnormal discharge, calculating their distance from the background color of the unpolluted water environment with a color space three-dimensional spherical distance algorithm, computing a risk score, and outputting the corresponding risk level.
According to the method, a recognition model that judges from a picture whether abnormal discharge is present is obtained by training a model on historical data, which removes the dependence on workers' personal experience; the model's recognition results grow more accurate as the data volume increases and manual corrections accumulate; the model recognizes and judges in real time, solving the lag of manual recognition so that abnormal situations are found promptly; and the risk score and risk level of an abnormal picture are judged from its RGB color, which intuitively indicates to workers the severity of the pollution and can guide subsequent decisions.
The deep learning model can be built with a CNN convolutional neural network, an LSTM neural network, a combination of CNN and LSTM networks, or similar models; several models can also be built and the one giving the best result selected for real-time recognition. In this embodiment, the construction of the deep learning model is described with a CNN as the example.
Step A: acquiring historical data of water environment site monitoring pictures, manually labeling each historical picture as showing abnormal discharge or not, and dividing the labeled data into a training set and a test set in a 7:3 ratio; the water environment may be a reservoir, surface water, groundwater, a drain outlet, a rain and sewage pipe network, or nodes of such a network.
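As an illustration of step A's data preparation, the following is a minimal sketch assuming the labeled frames are listed as (path, label) rows in a hypothetical labeled_frames.csv; the 7:3 ratio comes from this embodiment, everything else is illustrative:

```python
import csv
from sklearn.model_selection import train_test_split

# Hypothetical label file: one "path,label" row per frame, label 1 = abnormal.
with open("labeled_frames.csv", newline="") as f:
    rows = [(path, int(label)) for path, label in csv.reader(f)]

paths, labels = zip(*rows)
# 7:3 train/test split, stratified so both sets keep the abnormal/normal ratio.
train_paths, test_paths, train_y, test_y = train_test_split(
    paths, labels, test_size=0.3, stratify=labels, random_state=0
)
```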
Step B: building a deep learning model, inputting the training set into it for training, and outputting the trained model when the recognition accuracy of the model meets a preset threshold.
In this embodiment, a CNN convolutional neural network is used to build the learning model, comprising the following steps:
Step I: building the model, comprising an input layer, convolution pooling layer 1, convolution pooling layer 2, a fully connected layer, and an output layer;
Step II: defining the initial weights, bias parameters, filter stride, convolution layer function, pooling layer function, activation function, loss function, optimization function, and model accuracy; the initial weights comprise the weight wc1 of convolution pooling layer 1, the weight wc2 of convolution pooling layer 2, the weight wd1 of the fully connected layer, and the weight wo of the output layer, and the bias parameters comprise the bias bc1 of convolution pooling layer 1, the bias bc2 of convolution pooling layer 2, the bias bd1 of the fully connected layer, and the bias bo of the output layer;
Step III: inputting the training set into the convolutional neural network for training, each round of training outputting the loss and the model accuracy;
Step IV: if the model loss tends to converge and the model accuracy exceeds 90%, ending training and outputting the trained model; otherwise, updating the learning rate parameter, loss function parameter, filter size, and dropout parameter, and returning to step III to continue training.
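A minimal sketch of the network of steps I and II, using Keras as one possible implementation; the 128x128x3 input size, filter counts, kernel sizes, and dense width are assumptions (the patent does not fix them), and the loss is the softmax cross-entropy over logits named later in this embodiment:

```python
import tensorflow as tf

def build_model(input_shape=(128, 128, 3)):
    """Input layer, two convolution+pooling blocks, one fully connected
    layer, and a two-class output, as in step I."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        # convolution pooling layer 1 (weights wc1 / bias bc1 in step II)
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(2),
        # convolution pooling layer 2 (wc2 / bc2)
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        # fully connected layer (wd1 / bd1), with the dropout of step IV
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        # output layer (wo / bo): logits fed to the softmax cross-entropy loss
        tf.keras.layers.Dense(2),
    ])

model = build_model()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```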
The model accuracy is calculated as:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$$

where TP is the number of samples whose true result is A and whose predicted result is A; FN is the number whose true result is A but whose predicted result is not A; TN is the number whose true result is not A and whose predicted result is not A; and FP is the number whose true result is not A but whose predicted result is A.
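As a worked check of the formula, a trivial helper with made-up counts:

```python
def accuracy(tp: int, fp: int, tn: int, fn: int) -> float:
    """Accuracy as defined above: correct predictions over all samples."""
    return (tp + tn) / (tp + fp + tn + fn)

# Illustrative counts: 45 TP, 5 FP, 43 TN, 7 FN out of 100 test pictures.
print(accuracy(45, 5, 43, 7))  # 0.88, below the 90% bar of step IV
```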
The learning rate parameter, filter size, and dropout parameter each have a preset list of candidate values. During updating, each value in a list is combined in turn with the other parameters and run through the model iteratively; the accuracy and loss of the model under each combination are output, and the parameters corresponding to the best accuracy and loss are taken as the optimal parameters, yielding the optimal parameter combination.
Taking the learning rate as an example: set an initial learning rate for the model, then define a list of candidate values such as [0.001, 0.002, ..., 0.1, 0.2] and run the model iteratively over it, outputting the accuracy and loss of each run; select the learning rate with high accuracy and recall as the optimal learning rate; then, with the optimal learning rate fixed, iteratively update the filter size and the dropout parameter in the same way to determine their optimal values, thereby obtaining the optimal parameter combination.
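The sequential search just described might look as follows; train_and_evaluate is a hypothetical helper standing in for one full training run, the learning-rate list is the one from the text, and the kernel-size and dropout lists are assumed:

```python
def train_and_evaluate(lr: float, kernel: int, dropout: float) -> float:
    """Hypothetical helper: train the CNN once with these parameters and
    return its accuracy on a held-out set."""
    raise NotImplementedError

best = {"lr": 0.001, "kernel": 3, "dropout": 0.5}  # initial parameters

def sweep(param: str, candidates: list) -> None:
    """Try each candidate for one parameter with the others fixed at the
    current best, then keep the value giving the highest accuracy."""
    scores = {}
    for value in candidates:
        trial = dict(best, **{param: value})
        scores[value] = train_and_evaluate(**trial)
    best[param] = max(scores, key=scores.get)

sweep("lr", [0.001, 0.002, 0.1, 0.2])  # learning-rate list from the text
sweep("kernel", [3, 5, 7])             # assumed filter sizes
sweep("dropout", [0.3, 0.5, 0.7])      # assumed dropout rates
```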
The skilled person can select the type of loss function according to the model's requirements; it may be the softmax_cross_entropy_with_logits multi-class cross-entropy function, a log-likelihood loss function, a logarithmic loss function, and so on. In this embodiment the loss function is the softmax_cross_entropy_with_logits multi-class cross-entropy function and the activation function is the ReLU activation function.
Step C: inputting the test set data into the trained model and comparing the model's recognition results with the manual labels; if the accuracy does not meet the test threshold, returning to step B for retraining; if it does, outputting the model as the recognition model. The test threshold is 90%.
Step D: inputting the pictures to be recognized into the recognition model and obtaining the pictures showing abnormal discharge.
In actual use, the monitoring video pictures can be transmitted to an upper computer in real time; the upper computer runs the recognition model to monitor the pictures continuously, and pictures judged abnormal are stored and reported. For each picture to be recognized, the recognition model outputs a probability of abnormal discharge; if this probability exceeds a critical threshold, the current picture is considered to show abnormal discharge. The critical threshold lies in the range [60%, 80%], and the skilled person may also set it appropriately as needs require.
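A sketch of this online check, assuming the Keras model sketched under step B, an RGB frame as a numpy array, class 1 as the abnormal-discharge class, and 0.7 as one choice within the [60%, 80%] range:

```python
import numpy as np
import tensorflow as tf

CRITICAL_THRESHOLD = 0.7  # assumed value inside the [60%, 80%] range

def is_abnormal(model: tf.keras.Model, frame_rgb: np.ndarray) -> bool:
    """Return True when the model's abnormal-discharge probability for
    this frame exceeds the critical threshold."""
    x = tf.image.resize(tf.cast(frame_rgb, tf.float32) / 255.0, (128, 128))
    probs = tf.nn.softmax(model(x[tf.newaxis], training=False), axis=-1)
    return float(probs[0, 1]) > CRITICAL_THRESHOLD  # class 1 = abnormal (assumed)
```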
Step E: acquiring RGB values of the images with abnormal emission, calculating the distance between the images and the background color of the water environment without the occurrence of the pollution event based on a color space three-dimensional sphere distance algorithm, calculating a risk score and outputting a corresponding risk grade;
because the number of pixels of the whole picture is very large, and the imaging quality of the edge part generally cannot meet the use requirement, the picture needs to be processed first, in this embodiment, an area is selected in the central area of the picture, the average RGB value of the pixels in the area is obtained through an opencv module and is used as the RGB value of the whole picture to participate in the calculation, and the calculation method for obtaining the current water environment pollution risk D is as follows:
Figure BDA0002684788420000051
Figure BDA0002684788420000052
wherein the RGB value of the water environment color without the pollution event is C0(R)、C0(B)、C0(G)(ii) a The RGB color of the current water environment picture is C1(R)、C1(B)、C1(G)
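One way to sketch this step with OpenCV, averaging a central region of the frame and scoring by the Euclidean distance from a clean-water reference color; the half-width central crop, the reference color values, and the 0-100 normalization are assumptions:

```python
import cv2
import numpy as np

def risk_score(frame_bgr: np.ndarray, clean_rgb: tuple) -> float:
    """Risk score D: Euclidean RGB distance from the clean-water color,
    scaled (by assumption) to 0-100 by the maximum distance 255*sqrt(3)."""
    h, w = frame_bgr.shape[:2]
    # central region, avoiding the poorly imaged edges of the frame
    region = frame_bgr[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
    b, g, r = cv2.mean(region)[:3]            # per-channel pixel means
    c1 = np.array([r, g, b], dtype=float)     # OpenCV loads BGR; reorder to RGB
    d = np.linalg.norm(c1 - np.asarray(clean_rgb, dtype=float))
    return 100.0 * d / (255.0 * np.sqrt(3.0))

# Illustrative use with a made-up clean-water reference color:
score = risk_score(cv2.imread("frame.png"), clean_rgb=(30, 90, 60))
```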
The relationship between the risk score D and the risk level is as follows:

[Table mapping ranges of the risk score D to risk levels; rendered as an image in the original and not recoverable here.]
Because this embodiment monitors and analyzes the water environment pollution situation from the surveillance picture, changes in light, season, and weather all influence the picture to some extent, but in most cases this influence does not markedly change the imaged scene. Where illumination is poor, an external light source can be used to supplement the lighting of the monitored scene, or a separate recognition model can be trained for poor-illumination conditions; for other factors that strongly affect the imaged picture, dedicated models can likewise be trained separately for recognition.

Claims (8)

1. A water environment pollution analysis method based on video recognition, characterized by comprising the following steps:
Step A: acquiring historical data of water environment site monitoring pictures, manually labeling each historical picture as showing abnormal discharge or not, and dividing the labeled data into a training set and a test set in proportion;
Step B: building a deep learning model, inputting the training set into the deep learning model for training, and outputting the trained model when the recognition accuracy of the model meets a preset threshold;
Step C: inputting the test set data into the trained model; if the accuracy does not meet the test threshold, returning to step B, otherwise outputting the model as the recognition model;
Step D: inputting the pictures to be recognized into the recognition model and obtaining the pictures showing abnormal discharge;
Step E: obtaining the RGB values of the pictures showing abnormal discharge, calculating their distance from the background color of the unpolluted water environment with a color space three-dimensional spherical distance algorithm, computing a risk score, and outputting the corresponding risk level.
2. The water environment pollution analysis method based on video recognition according to claim 1, characterized in that in step B a convolutional neural network is used to build the deep learning model, comprising the following steps:
Step I: building the model, comprising an input layer, convolution pooling layer 1, convolution pooling layer 2, a fully connected layer, and an output layer;
Step II: defining the initial weights, bias parameters, filter stride, convolution layer function, pooling layer function, activation function, loss function, optimization function, and model accuracy; the initial weights comprise the weight wc1 of convolution pooling layer 1, the weight wc2 of convolution pooling layer 2, the weight wd1 of the fully connected layer, and the weight wo of the output layer, and the bias parameters comprise the bias bc1 of convolution pooling layer 1, the bias bc2 of convolution pooling layer 2, the bias bd1 of the fully connected layer, and the bias bo of the output layer;
Step III: inputting the training set into the convolutional neural network for training, each round of training outputting the loss and the model accuracy;
Step IV: if the model loss tends to converge and the model accuracy exceeds 90%, ending training and outputting the trained model; otherwise, updating the learning rate parameter, loss function parameter, filter size, and dropout parameter, and returning to step III to continue training.
3. The water environment pollution analysis method based on video recognition according to claim 2, characterized in that the activation function of the convolutional neural network is the ReLU activation function and the loss function is a cross-entropy function.
4. The water environment pollution analysis method based on video recognition according to claim 2, characterized in that the model accuracy in step III is calculated as:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$$

where TP is the number of samples whose true result is A and whose predicted result is A; FN is the number whose true result is A but whose predicted result is not A; TN is the number whose true result is not A and whose predicted result is not A; and FP is the number whose true result is not A but whose predicted result is A.
5. The water environment pollution analysis method based on video recognition according to claim 2, characterized in that the test threshold for the test set accuracy in step C is 90%.
6. The water environment pollution analysis method based on video recognition according to claim 2, characterized in that in step D the picture to be recognized is input into the recognition model, and if the probability of abnormal discharge that the recognition model assigns to the current picture exceeds a critical threshold, the current picture to be recognized is considered to show abnormal discharge, the critical threshold lying in the range [60%, 80%].
7. The water environment pollution analysis method based on video recognition according to claim 1, characterized in that in step E a region is selected within the monitoring picture and its average RGB value is taken as the RGB value of the picture; the current water environment pollution risk score D is then computed from the spherical (Euclidean) distance d between the two colors:

$$d = \sqrt{\bigl(C_1(R)-C_0(R)\bigr)^2 + \bigl(C_1(G)-C_0(G)\bigr)^2 + \bigl(C_1(B)-C_0(B)\bigr)^2}$$

$$D = \frac{100\,d}{255\sqrt{3}}$$

where the RGB value of the water environment color with no pollution event is C0(R), C0(G), C0(B), and the RGB color of the current water environment picture is C1(R), C1(G), C1(B). (Both formulas appear only as images in the original; the distance d follows from the named spherical distance algorithm, while the normalization of D to a 0-100 score is a reconstruction.)
8. The water environment pollution analysis method based on video recognition according to claim 7, characterized in that the relationship between the risk score D and the risk level is defined as follows:

[Table mapping ranges of the risk score D to risk levels; rendered as an image in the original and not recoverable here.]
CN202010973000.5A 2020-09-16 2020-09-16 Water environment pollution analysis method based on video recognition Pending CN112163481A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010973000.5A CN112163481A (en) 2020-09-16 2020-09-16 Water environment pollution analysis method based on video recognition


Publications (1)

Publication Number Publication Date
CN112163481A 2021-01-01

Family

ID=73858983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010973000.5A Pending CN112163481A (en) 2020-09-16 2020-09-16 Water environment pollution analysis method based on video recognition

Country Status (1)

Country Link
CN (1) CN112163481A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115545678A (en) * 2022-11-29 2022-12-30 浙江贵仁信息科技股份有限公司 Water quality monitoring method based on water environment portrait and pollutant traceability


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105675623A (en) * 2016-01-29 2016-06-15 重庆扬讯软件技术有限公司 Real-time analysis method for sewage color and flow detection on basis of sewage port video
CN108427928A (en) * 2018-03-16 2018-08-21 华鼎世纪(北京)国际科技有限公司 The detection method and device of anomalous event in monitor video
CN109934805A (en) * 2019-03-04 2019-06-25 江南大学 A kind of water pollution detection method based on low-light (level) image and neural network
CN110490432A (en) * 2019-07-30 2019-11-22 武汉理工光科股份有限公司 Two-way reference characteristic fire-fighting methods of risk assessment and system based on Hausdorff distance

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
包新月: "Research on Color Space Features and Their Application in Water Quality Detection", China Master's Theses Full-text Database, Information Science and Technology series *
程鼎铉 et al.: "A Point Cloud Filtering Method Based on Color Difference", Information & Computer *


Similar Documents

Publication Publication Date Title
CN111353413B (en) Low-missing-report-rate defect identification method for power transmission equipment
CN112101796A (en) Water environment pollution risk comprehensive perception and recognition system
CN110414400B (en) Automatic detection method and system for wearing of safety helmet on construction site
JP2005025351A (en) Information processor, status judgment device and diagnostic unit, information processing method, status judgment method, and diagnostic method
CN115331172A (en) Workshop dangerous behavior recognition alarm method and system based on monitoring video
CN110057820A (en) Method, system and the storage medium of on-line checking hydrogen chloride synthetic furnace chlorine hydrogen proportion
CN112163481A (en) Water environment pollution analysis method based on video recognition
CN115471487A (en) Insulator defect detection model construction and insulator defect detection method and device
CN112085869A (en) Civil aircraft flight safety analysis method based on flight parameter data
CN110059675A (en) A kind of robot identifies road traffic law enforcement behavior and provides the method for standardization auxiliary
CN110909674A (en) Traffic sign identification method, device, equipment and storage medium
CN116664846B (en) Method and system for realizing 3D printing bridge deck construction quality monitoring based on semantic segmentation
KR102602439B1 (en) Method for detecting rip current using CCTV image based on artificial intelligence and apparatus thereof
CN116310601B (en) Ship behavior classification method based on AIS track diagram and camera diagram group
CN112784494A (en) Training method of false positive recognition model, target recognition method and device
CN116894113A (en) Data security classification method and data security management system based on deep learning
CN116645337A (en) Multi-production-line ceramic defect detection method and system based on federal learning
CN110855467B (en) Network comprehensive situation prediction method based on computer vision technology
CN115620259A (en) Lane line detection method based on traffic off-site law enforcement scene
CN115393900A (en) Intelligent construction site safety supervision method and system based on Internet of things
CN115035256A (en) Mine waste reservoir accident potential and risk evolution method and system
US11982409B2 (en) Safety monitoring methods and Internet of Things systems of pipe network reliability degree based on intelligent gas
US20230175652A1 (en) Safety monitoring methods and internet of things systems of pipe network reliability degree based on intelligent gas
CN111861394A (en) Intelligent cell management method and system based on Internet of things
CN110782431A (en) High-voltage wire icing area detection method based on deep learning

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2021-01-01