CN112733750A - Dynamic water flow image-based sewage treatment detection neural network training method - Google Patents

Dynamic water flow image-based sewage treatment detection neural network training method

Info

Publication number
CN112733750A
CN112733750A (application CN202110049949.0A)
Authority
CN
China
Prior art keywords
neural network
classification
sewage treatment
loss function
feature maps
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110049949.0A
Other languages
Chinese (zh)
Inventor
张宽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Yuanneng Yuancai Network Technology Co ltd
Original Assignee
Chengdu Yuanneng Yuancai Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Yuanneng Yuancai Network Technology Co ltd filed Critical Chengdu Yuanneng Yuancai Network Technology Co ltd
Priority to CN202110049949.0A
Publication of CN112733750A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to intelligent quality detection in the field of intelligent environmental protection, and specifically discloses a method for training a neural network for sewage treatment detection based on dynamic water flow images, which performs dynamic-water-flow sewage treatment detection using a deep learning computer vision approach. Specifically, during training of the neural network for sewage treatment detection based on dynamic water flow images, the convolutional neural networks are trained by minimizing a co-correlation loss function value, so that the influence of dynamic features in the images is eliminated as far as possible during feature extraction. The parameters of the first convolutional neural network and the second convolutional neural network are then updated by back-propagating a weighted sum of the co-correlation loss function value and the classification loss function value, which improves the accuracy of the model.

Description

Dynamic water flow image-based sewage treatment detection neural network training method
Technical Field
The present invention relates to deep learning and application of a neural network in the field of sewage treatment, and more particularly, to a method for training a neural network for sewage treatment detection based on a dynamic flow image, a method for intelligent sewage treatment detection based on a deep neural network, a system for training a neural network for sewage treatment detection based on a dynamic flow image, a system for intelligent sewage treatment detection based on a deep neural network, and an electronic device.
Background
In a typical sewage treatment process, sewage is conveyed in drainage pipelines and, after several treatment stages, the treated water is discharged through a drainage pipeline. Because the water flows continuously throughout the process, extracting and sampling it either before or after treatment is impractical, which makes rapid collection and inspection inconvenient. In order to compare the effluent before and after treatment, a more convenient approach than physical sampling and testing is therefore desirable.
Therefore, a technical solution better suited to sewage treatment detection is desired.
At present, deep learning and neural networks have been widely applied in the fields of computer vision, natural language processing, text signal processing, and the like. In addition, deep learning and neural networks also exhibit a level close to or even exceeding that of humans in the fields of image classification, object detection, semantic segmentation, text translation, and the like.
Deep learning and development of a neural network provide a new solution for sewage treatment detection.
Disclosure of Invention
The present application is proposed to solve the above technical problems. The embodiments of the application provide a method for training a neural network for sewage treatment detection based on dynamic water flow images, an intelligent sewage treatment detection method based on a deep neural network, a corresponding training system, a corresponding detection system, and an electronic device, all of which perform dynamic-water-flow sewage treatment detection using a deep learning computer vision approach. Specifically, during training of the neural network for sewage treatment detection based on dynamic water flow images, the convolutional neural networks are trained by minimizing a co-correlation loss function value, so that the influence of dynamic features in the images is eliminated as far as possible during feature extraction, and the parameters of the first convolutional neural network and the second convolutional neural network are updated by back-propagating a weighted sum of the co-correlation loss function value and the classification loss function value, which improves the accuracy of the model.
According to one aspect of the application, a method for training a neural network for sewage treatment detection based on dynamic water flow images is provided, which comprises the following steps:
acquiring multi-frame images of sewage before treatment and multi-frame images of purified water after treatment, wherein the sewage images and the purified-water images have the same number of frames and the same time-sequence distribution;
passing the multi-frame image of the sewage through a first convolutional neural network to obtain a plurality of first feature maps;
passing the multi-frame image of the purified water through a second convolutional neural network to obtain a plurality of second feature maps;
calculating a co-correlation loss function value between the plurality of first feature maps and the plurality of second feature maps, the co-correlation loss function value being indicative of a correlation between the plurality of first feature maps and the plurality of second feature maps;
calculating differences between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map; passing the classification feature map through a classifier to obtain a classification loss function value; and
updating parameters of the first convolutional neural network and the second convolutional neural network based on a weighted sum of the co-correlation loss function values and the classification loss function values.
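The training objective described in the steps above can be sketched end to end. This is a minimal NumPy illustration, not the patent's implementation: the feature maps are random stand-ins for the two CNN outputs, the co-correlation loss is one plausible reading (mean absolute normalized cross-correlation; the patent gives its formula only as an image), and the two-way classifier head and the weights alpha/beta are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two CNN outputs: k feature maps of size h x w per
# branch. Real feature maps would come from the first and second convolutional
# neural networks; the shapes, values, and loss weights here are illustrative.
k, h, w = 4, 8, 8
first_maps = rng.standard_normal((k, h, w))    # from the sewage images
second_maps = rng.standard_normal((k, h, w))   # from the purified-water images

def co_correlation_loss(a, b):
    # Mean absolute normalized cross-correlation over paired feature maps --
    # one plausible reading of the patent's co-correlation loss, not its
    # exact (image-only) formula.
    a_flat = a.reshape(a.shape[0], -1)
    b_flat = b.reshape(b.shape[0], -1)
    a_c = a_flat - a_flat.mean(axis=1, keepdims=True)
    b_c = b_flat - b_flat.mean(axis=1, keepdims=True)
    corr = (a_c * b_c).sum(axis=1) / (
        np.linalg.norm(a_c, axis=1) * np.linalg.norm(b_c, axis=1) + 1e-8)
    return float(np.abs(corr).mean())

def classification_loss(first, second, label=0):
    # Difference maps -> crude two-way head -> cross-entropy (toy classifier).
    diff = first - second                         # the classification feature map
    logits = np.array([diff.mean(), diff.std()])  # invented pooling head
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return float(-np.log(p[label] + 1e-8))

alpha, beta = 0.5, 0.5  # illustrative weights for the weighted sum
total = alpha * co_correlation_loss(first_maps, second_maps) \
      + beta * classification_loss(first_maps, second_maps)
assert total > 0.0      # this scalar would be back-propagated through both CNNs
```

Minimizing the first term pushes the two branches away from the dynamic-flow features they share, while the second term keeps the difference maps discriminative; the weighted sum trades the two off.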
According to another aspect of the present application, there is provided an intelligent sewage treatment detection method based on a deep neural network, comprising:
acquiring a multi-frame image of a dynamic water flow to be detected;
inputting the images into the first convolutional neural network and the classifier trained according to the above training method of the neural network for sewage treatment detection based on dynamic water flow images, wherein the output of the classifier is a first probability that the sewage treatment is qualified and a second probability that it is unqualified; and
determining whether the sewage treatment is qualified based on the first probability and the second probability.
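The decision step above reduces to comparing the two class probabilities. A minimal sketch, assuming a softmax head over two logits (function names and logit values are illustrative, not from the patent):

```python
import math

# The trained classifier emits two probabilities: one for "treatment
# qualified", one for "treatment unqualified". The verdict just compares them.
def softmax2(logit_qualified, logit_unqualified):
    m = max(logit_qualified, logit_unqualified)      # stabilize the exponentials
    e1 = math.exp(logit_qualified - m)
    e2 = math.exp(logit_unqualified - m)
    return e1 / (e1 + e2), e2 / (e1 + e2)

def is_treatment_qualified(logit_qualified, logit_unqualified):
    p_ok, p_bad = softmax2(logit_qualified, logit_unqualified)
    return p_ok > p_bad      # first probability vs. second probability

assert is_treatment_qualified(2.1, 0.3)       # qualified logit dominates
assert not is_treatment_qualified(-1.0, 1.0)  # unqualified logit dominates
```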
According to still another aspect of the present application, there is provided a training system of a neural network for sewage treatment detection based on dynamic flow images, comprising:
an image acquisition unit for acquiring multi-frame images of sewage before treatment and multi-frame images of purified water after treatment, wherein the sewage images and the purified-water images have the same number of frames and the same time-sequence distribution;
a first feature map generation unit for passing the multi-frame images of sewage obtained by the image acquisition unit through a first convolutional neural network to obtain a plurality of first feature maps;
a second feature map generation unit for passing the multi-frame images of purified water obtained by the image acquisition unit through a second convolutional neural network to obtain a plurality of second feature maps;
a co-correlation loss function value calculation unit configured to calculate a co-correlation loss function value between the plurality of first feature maps obtained by the first feature map generation unit and the plurality of second feature maps obtained by the second feature map generation unit, the co-correlation loss function value being indicative of a correlation between the plurality of first feature maps and the plurality of second feature maps;
a classification feature map generation unit configured to calculate a difference between the plurality of first feature maps obtained by the first feature map generation unit and the plurality of second feature maps obtained by the second feature map generation unit to obtain a classification feature map;
the classification loss function value generating unit is used for enabling the classification feature map obtained by the classification feature map generating unit to pass through a classifier so as to obtain a classification loss function value; and
a parameter updating unit configured to update parameters of the first convolutional neural network and the second convolutional neural network based on a weighted sum of the co-correlation loss function value obtained by the co-correlation loss function value calculation unit and the classification loss function value obtained by the classification loss function value generation unit.
According to still another aspect of the present application, there is provided an intelligent sewage treatment detection system based on a deep neural network, comprising:
a to-be-detected image acquisition unit for acquiring multi-frame images of the dynamic water flow to be detected;
a classification unit for inputting the images obtained by the to-be-detected image acquisition unit into the first convolutional neural network and the classifier trained according to the above training method of the neural network for sewage treatment detection based on dynamic water flow images, the output of the classifier being a first probability that the sewage treatment is qualified and a second probability that it is unqualified; and
a detection result generation unit for determining whether the sewage treatment is qualified based on the first probability and the second probability obtained by the classification unit.
According to still another aspect of the present application, there is provided an electronic apparatus including: a processor; and a memory in which computer program instructions are stored, which, when executed by the processor, cause the processor to perform the method of training a neural network for dynamic flow image-based sewage treatment detection as described above, or the method of intelligent sewage treatment detection based on a deep neural network.
Compared with the prior art, the training method for a dynamic-water-flow-image-based sewage treatment detection neural network, the deep-neural-network-based intelligent sewage treatment detection method, the corresponding training and detection systems, and the electronic device provided by the application perform dynamic-water-flow sewage treatment detection based on deep learning computer vision. Specifically, during training, the convolutional neural networks are trained by minimizing a co-correlation loss function value, so that the influence of dynamic features in the images is eliminated as far as possible during feature extraction, and the parameters of the first and second convolutional neural networks are updated by back-propagating a weighted sum of the co-correlation loss function value and the classification loss function value, which improves the accuracy of the model.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 illustrates an application scenario diagram of a neural network training method for sewage treatment detection based on dynamic water flow images according to an embodiment of the present application;
FIG. 2 illustrates a flow chart of a method of training a neural network for wastewater treatment detection based on dynamic flow images in accordance with an embodiment of the present application;
FIG. 3 illustrates a system architecture diagram of a method for training a neural network for wastewater treatment detection based on dynamic flow images, in accordance with an embodiment of the present application;
FIG. 4 is a flow chart illustrating calculation of co-correlation loss function values between the plurality of first feature maps and the plurality of second feature maps in a method for training a neural network for sewage treatment detection based on dynamic flow images according to an embodiment of the present application;
FIG. 5 is a flow chart illustrating a method for training a neural network for sewage treatment detection based on dynamic flow images, according to an embodiment of the present application, calculating differences between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map;
FIG. 6 illustrates another flowchart of calculating the difference between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map in a method for training a neural network for sewage treatment detection based on dynamic flow images according to an embodiment of the present application;
FIG. 7 is a flow chart illustrating the process of passing the classification feature map through a classifier to obtain classification loss function values in a method for training a neural network for sewage treatment detection based on dynamic flow images according to an embodiment of the present application;
FIG. 8 illustrates a flow chart of a method for intelligent wastewater treatment detection based on a deep neural network according to an embodiment of the present application;
FIG. 9 illustrates a block diagram of a training system for a neural network for sewage treatment detection based on dynamic flow images, in accordance with an embodiment of the present application;
FIG. 10 illustrates a block diagram of a co-correlation loss function value calculation unit in a training system of a neural network for sewage treatment detection based on dynamic flow images according to an embodiment of the present application;
FIG. 11 illustrates a block diagram of a classification feature map generation unit in a training system of a neural network for sewage treatment detection based on dynamic flow images according to an embodiment of the present application;
FIG. 12 illustrates another block diagram of a classification feature map generation unit in a training system of a neural network for sewage treatment detection based on dynamic flow images according to an embodiment of the present application;
FIG. 13 illustrates a block diagram of a classification loss function value generation unit in a training system for a neural network for dynamic flow image based sewage treatment detection, according to an embodiment of the present application;
FIG. 14 illustrates a block diagram of an intelligent deep neural network-based wastewater treatment detection system according to an embodiment of the present application;
FIG. 15 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
Detailed Description
Hereinafter, example embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein.
Overview of a scene
As previously mentioned, in order to compare the effluent before and after treatment, a more convenient approach than taking a sample and testing it is desirable. To this end, the inventors of the present application propose to obtain the inspection result by comparing images of the sewage before and after treatment using deep-learning-based computer vision.
However, as described above, in current sewage treatment systems both the sewage before treatment and the clean water after treatment flow in drainage pipelines, and there is no way to rapidly collect a water sample from the flow. Therefore, to match existing operating conditions as closely as possible, the solution of the present application performs the comparison based on images of the water flowing in the pipeline.
Compared with a still-water image, a flowing-water image contains many dynamic features introduced by the flow of the water, and these dynamic features are clearly useless for comparing and identifying water samples when evaluating the sewage treatment effect; the solution of the present application therefore aims to eliminate them as much as possible. Moreover, since the sewage before treatment and the purified water after treatment exhibit the same, or at least similar, dynamic characteristics when flowing in the drainage pipeline, the inventors observed that the influence of these characteristics can be largely eliminated by defining a loss function value that expresses the correlation between the two sets of feature maps and training the convolutional neural networks to minimize it, so that the networks focus on the features that are not correlated between the two.
Based on this, in order to better characterize the dynamic features that the flow of water adds to the image, the solution of the present application introduces a loss function referred to as the co-correlation loss function value. Specifically, multi-frame images of the sewage before treatment and corresponding multi-frame images of the purified water after treatment are first obtained, and a plurality of first feature maps and a plurality of second feature maps are extracted by a first and a second convolutional neural network, respectively; the co-correlation loss is then computed from the two sets of feature maps.
(The exact expression of the co-correlation loss appears in the original patent only as a formula image, labeled BDA0002898846560000061.)
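Because the source renders the loss only as an image, its exact form is not recoverable here. One plausible form for a loss that measures the correlation between paired feature maps, stated purely as an assumption for illustration and not as the patent's expression, is the mean absolute normalized cross-correlation:

```latex
\mathcal{L}_{\mathrm{corr}}
= \frac{1}{k}\sum_{i=1}^{k}
\left|
\frac{\left\langle F_{1,i}-\overline{F_{1,i}},\; F_{2,i}-\overline{F_{2,i}}\right\rangle}
     {\bigl\lVert F_{1,i}-\overline{F_{1,i}}\bigr\rVert \,
      \bigl\lVert F_{2,i}-\overline{F_{2,i}}\bigr\rVert}
\right|
```

where $F_{1,i}$ and $F_{2,i}$ are the $i$-th first and second feature maps flattened to vectors and $\overline{F}$ denotes the mean over a map's elements; minimizing such a term pushes the two branches to respond differently to the dynamic-flow features they share.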
In addition, to enable feature comparison over a dynamic range, a plurality of differential feature maps between the plurality of first feature maps and the plurality of second feature maps are computed and assembled into a classification feature map, which is passed through a classifier to obtain a classification loss function value. In this way, the first and second convolutional neural networks can be trained on a weighted sum of the co-correlation loss function value and the classification loss function value, enabling detection of sewage treatment quality from dynamic water flow images.
Based on this, the present application proposes a method for training a neural network for sewage treatment detection based on dynamic water flow images, which includes: acquiring multi-frame images of sewage before treatment and multi-frame images of purified water after treatment, wherein the sewage images and the purified-water images have the same number of frames and the same time-sequence distribution; passing the multi-frame images of the sewage through a first convolutional neural network to obtain a plurality of first feature maps; passing the multi-frame images of the purified water through a second convolutional neural network to obtain a plurality of second feature maps; calculating a co-correlation loss function value between the plurality of first feature maps and the plurality of second feature maps, the co-correlation loss function value representing a correlation between them; calculating differences between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map; passing the classification feature map through a classifier to obtain a classification loss function value; and updating parameters of the first convolutional neural network and the second convolutional neural network based on a weighted sum of the co-correlation loss function value and the classification loss function value.
Based on this, the present application further provides an intelligent sewage treatment detection method based on a deep neural network, which includes: acquiring multi-frame images of the dynamic water flow to be detected; inputting the images into the first convolutional neural network and the classifier trained according to the above training method, wherein the output of the classifier is a first probability that the sewage treatment is qualified and a second probability that it is unqualified; and determining whether the sewage treatment is qualified based on the first probability and the second probability.
Fig. 1 illustrates an application scenario diagram of a neural network training method for sewage treatment detection based on dynamic water flow images and an intelligent sewage treatment detection method based on a deep neural network according to an embodiment of the application.
As shown in fig. 1, in the training phase of the application scenario, a camera (e.g., as indicated by C in fig. 1) acquires a plurality of frames of images of sewage before treatment and a plurality of frames of images of purified water after treatment; then, the multi-frame images are input into a server (for example, S as illustrated in fig. 1) deployed with a training algorithm of the dynamic flow image-based sewage treatment detection neural network, wherein the server can train the dynamic flow image-based sewage treatment detection neural network with the multi-frame images based on the training algorithm of the dynamic flow image-based sewage treatment detection neural network.
After the neural network is trained through the training algorithm of the neural network for sewage treatment detection based on the dynamic water flow image as described above, sewage treatment detection can be performed on the dynamic water flow image based on the deep neural network.
Further, as shown in fig. 1, in the detection stage of the application scenario, a camera (e.g., as indicated by C in fig. 1) is used to obtain a multi-frame image of the dynamic water flow to be detected; then, the multi-frame image is input into a server (for example, S as illustrated in fig. 1) deployed with a deep neural network-based sewage treatment detection algorithm, wherein the server can process the image based on the deep neural network-based sewage treatment detection algorithm to generate a detection result of whether the sewage treatment is qualified.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary method
FIG. 2 illustrates a flow chart of a method of training a neural network for wastewater treatment detection based on dynamic flow images. As shown in fig. 2, the method according to an embodiment of the present application includes: S110, acquiring multi-frame images of sewage before treatment and multi-frame images of purified water after treatment, wherein the sewage images and the purified-water images have the same number of frames and the same time-sequence distribution; S120, passing the multi-frame images of the sewage through a first convolutional neural network to obtain a plurality of first feature maps; S130, passing the multi-frame images of the purified water through a second convolutional neural network to obtain a plurality of second feature maps; S140, calculating a co-correlation loss function value between the plurality of first feature maps and the plurality of second feature maps, the co-correlation loss function value representing a correlation between them; S150, calculating differences between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map; S160, passing the classification feature map through a classifier to obtain a classification loss function value; and S170, updating parameters of the first convolutional neural network and the second convolutional neural network based on a weighted sum of the co-correlation loss function value and the classification loss function value.
Fig. 3 illustrates an architecture diagram of a training method of a neural network for sewage treatment detection based on dynamic flow images according to an embodiment of the present application. As shown in fig. 3, in the network architecture of the training method, first, the multi-frame images of sewage before treatment acquired by the camera (e.g., IN1 as illustrated in fig. 3) are passed through a first convolutional neural network (e.g., CNN1 as illustrated in fig. 3) to obtain a plurality of first feature maps (e.g., F11 to F1k as illustrated in fig. 3); next, the multi-frame images of the treated purified water acquired by the camera (e.g., IN2 as illustrated in fig. 3) are passed through a second convolutional neural network (e.g., CNN2 as illustrated in fig. 3) to obtain a plurality of second feature maps (e.g., F21 to F2k as illustrated in fig. 3); then, the co-correlation loss function value between the plurality of first feature maps and the plurality of second feature maps is calculated; then, the differences between the plurality of first feature maps and the plurality of second feature maps are calculated to obtain a classification feature map (e.g., Fc as illustrated in fig. 3); then, the classification feature map is passed through a classifier (as illustrated in fig. 3) to obtain a classification loss function value; finally, the parameters of the first convolutional neural network and the second convolutional neural network are updated based on a weighted sum of the co-correlation loss function value and the classification loss function value.
In step S110, a multi-frame image of sewage before treatment and a multi-frame image of purified water after treatment are acquired, wherein the multi-frame images of sewage and the multi-frame images of purified water have the same number of image frames and time sequence distribution. Specifically, in the embodiment of the present application, a multi-frame image of sewage before treatment and a multi-frame image of purified water after treatment can be collected at a preset time interval by a camera disposed under water, so that the multi-frame images of sewage and the multi-frame images of purified water have the same time sequence distribution.
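The acquisition constraint in step S110 amounts to driving both cameras from one sampling schedule. A minimal sketch, assuming frames are grabbed at a fixed preset interval (the function name and interval values are illustrative, not from the source):

```python
def capture_schedule(start_time_s, interval_s, num_frames):
    """Timestamps (in seconds) at which a camera grabs frames at a preset interval."""
    return [start_time_s + i * interval_s for i in range(num_frames)]

# Using the same interval and frame count for both cameras gives the sewage
# and purified-water sequences the same frame number and time-series distribution.
sewage_times = capture_schedule(0.0, 0.5, 4)
purified_times = capture_schedule(0.0, 0.5, 4)
```

Any concrete interval works; what matters for the later per-frame comparison is that the two schedules are identical.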
In step S120, the multi-frame images of the sewage are passed through a first convolutional neural network to obtain a plurality of first feature maps. That is, a first convolutional neural network is used to extract the high-dimensional features of the sewage images. In particular, in the embodiment of the present application, the first convolutional neural network may employ a deep residual network, for example, ResNet 50.
In step S130, the multi-frame images of the purified water are passed through a second convolutional neural network to obtain a plurality of second feature maps. That is, a second convolutional neural network is used to extract the high-dimensional features of the purified-water images. In particular, in embodiments of the present application, the second convolutional neural network may employ a deep residual network, e.g., ResNet 50.
In step S140, a co-correlation loss function value between the plurality of first feature maps and the plurality of second feature maps is calculated, the co-correlation loss function value being used to represent a correlation between the plurality of first feature maps and the plurality of second feature maps. As mentioned above, compared with a still-water image, an image of flowing water carries many dynamic features introduced by the flow of the water. When examining the sewage treatment effect, these dynamic features are plainly useless for comparing and identifying the water samples, so in the solution of the present application it is desirable to eliminate them as much as possible.
Specifically, in this embodiment of the present application, the process of calculating the co-correlation loss function value between the plurality of first feature maps and the plurality of second feature maps includes: first, global mean pooling is performed on the plurality of first feature maps respectively to obtain a plurality of first feature values; that is, the numerical values in each of the plurality of first feature maps are averaged, and the corresponding plurality of first feature values are output. Then, global mean pooling is performed on the plurality of second feature maps respectively to obtain a plurality of second feature values in the same manner. Finally, the co-correlation loss function value between the plurality of first feature values and the plurality of second feature values is calculated with the following formula:
$$\mathcal{L}_{cc}=\frac{\sum_{i}\left(x_{i}-\bar{x}\right)\left(y_{i}-\bar{y}\right)}{\sqrt{\sum_{i}\left(x_{i}-\bar{x}\right)^{2}}\sqrt{\sum_{j}\left(y_{j}-\bar{y}\right)^{2}}}$$
wherein $x_i$ represents the $i$-th first feature value, $\bar{x}$ represents the average of the plurality of first feature values, $y_j$ represents the $j$-th second feature value, and $\bar{y}$ represents the average of the plurality of second feature values. It should be understood that the objective of calculating the co-correlation loss function value between the plurality of first feature maps and the plurality of second feature maps is to eliminate the influence of the dynamic features as much as possible: by setting a loss function value that expresses the correlation between the two sets of feature maps and training the convolutional neural networks to minimize it, the networks are made to focus on features that are not correlated between the two.
Fig. 4 is a flowchart illustrating the calculation of the co-correlation loss function value between the plurality of first feature maps and the plurality of second feature maps in the training method of a neural network for sewage treatment detection based on dynamic water flow images according to an embodiment of the present application. As shown in fig. 4, calculating the co-correlation loss function value between the plurality of first feature maps and the plurality of second feature maps includes: S210, performing global mean pooling on the plurality of first feature maps respectively to obtain a plurality of first feature values; S220, performing global mean pooling on the plurality of second feature maps respectively to obtain a plurality of second feature values; and S230, calculating the co-correlation loss function value between the plurality of first feature values and the plurality of second feature values with the following formula:
$$\mathcal{L}_{cc}=\frac{\sum_{i}\left(x_{i}-\bar{x}\right)\left(y_{i}-\bar{y}\right)}{\sqrt{\sum_{i}\left(x_{i}-\bar{x}\right)^{2}}\sqrt{\sum_{j}\left(y_{j}-\bar{y}\right)^{2}}}$$
wherein $x_i$ represents the $i$-th first feature value, $\bar{x}$ represents the average of the plurality of first feature values, $y_j$ represents the $j$-th second feature value, and $\bar{y}$ represents the average of the plurality of second feature values.
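Steps S210–S230 can be sketched in plain Python. This assumes the co-correlation loss is a Pearson-style correlation between the two sets of pooled feature values (an assumption; the patent's exact formula is given only as an image reference), and the function names are illustrative:

```python
import math

def global_mean_pool(feature_map):
    """Average all values of one 2-D feature map into a single scalar (S210/S220)."""
    values = [v for row in feature_map for v in row]
    return sum(values) / len(values)

def co_correlation_loss(first_maps, second_maps):
    """Pearson-style correlation between the two sets of pooled values (S230).

    Minimizing this value pushes the two networks toward features that are
    NOT correlated between sewage frames and purified-water frames."""
    xs = [global_mean_pool(m) for m in first_maps]
    ys = [global_mean_pool(m) for m in second_maps]
    x_bar = sum(xs) / len(xs)
    y_bar = sum(ys) / len(ys)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - x_bar) ** 2 for x in xs)) * \
          math.sqrt(sum((y - y_bar) ** 2 for y in ys))
    return num / den if den != 0 else 0.0
```

For example, two sequences whose pooled values rise in lockstep give a loss of 1.0 (maximal correlation), which training would then drive down.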
In step S150, differences between the plurality of first feature maps and the plurality of second feature maps are calculated to obtain a classification feature map.
Specifically, in the embodiment of the present application, the process of calculating differences between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map includes: first, calculating, in the time-series dimension, the per-pixel feature value differences between the plurality of first feature maps and the plurality of second feature maps to obtain a plurality of differential feature maps. That is, the difference between each first feature map and the second feature map that corresponds to it one-to-one in the time series is calculated, so as to realize feature comparison over the dynamic range. Then, the differential feature maps are arranged in the time-series dimension to obtain the classification feature map.
Fig. 5 illustrates a flowchart of calculating differences between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map in a method for training a neural network for sewage treatment detection based on a dynamic water flow image according to an embodiment of the present application. As shown in fig. 5, calculating the difference between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map includes: s310, respectively calculating the characteristic value difference of the plurality of first characteristic maps and the plurality of second characteristic maps according to pixel positions in a time sequence dimension to obtain a plurality of difference characteristic maps; s320, arranging the differential feature maps in a time sequence dimension to obtain the classification feature map.
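A minimal sketch of steps S310–S320, assuming each feature map is a 2-D list and the maps are already stacked in time order (names are illustrative):

```python
def time_aligned_differences(first_maps, second_maps):
    """S310: per-pixel feature-value difference between the time-aligned
    first and second feature maps, yielding one differential map per time step."""
    return [
        [[a - b for a, b in zip(row1, row2)] for row1, row2 in zip(f1, f2)]
        for f1, f2 in zip(first_maps, second_maps)
    ]

def classification_feature_map(first_maps, second_maps):
    """S320: arrange the differential maps along the time-series dimension
    (here the list index of the result is the time axis)."""
    return time_aligned_differences(first_maps, second_maps)
```

The resulting 3-D structure (time × height × width) is what the classifier consumes in step S160.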
It is worth mentioning that, in other examples of the present application, the differences between the plurality of first feature maps and the plurality of second feature maps may be calculated in other manners to obtain the classification feature map. For example, in another example of the present application, this process includes: first, performing maximum value pooling on the plurality of first feature maps along the sample dimension to obtain a single sewage feature map, that is, retaining at each position the maximum feature value across the plurality of first feature maps, so that the amount of data is reduced while the most salient responses are kept. Then, maximum value pooling is likewise performed on the plurality of second feature maps along the sample dimension to obtain a water purification feature map. It should be appreciated that pooling the maxima along the sample dimension helps preserve the time-series information, i.e., the dynamically changing features, within the plurality of first feature maps and within the plurality of second feature maps. Finally, the per-pixel difference between the water purification feature map and the sewage feature map is calculated to obtain the classification feature map.
Fig. 6 illustrates a flowchart of calculating differences between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map in the training method of a neural network for sewage treatment detection based on dynamic water flow images according to an embodiment of the present application. As shown in fig. 6, calculating the differences between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map includes: S410, performing maximum value pooling on the plurality of first feature maps in the sample dimension to obtain a sewage feature map; S420, performing maximum value pooling on the plurality of second feature maps in the sample dimension to obtain a water purification feature map; and S430, calculating the per-pixel difference between the water purification feature map and the sewage feature map to obtain the classification feature map.
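Under the assumption that "maximum value pooling in the sample dimension" means an element-wise maximum across the stack of maps (the translated description is ambiguous on this point), steps S410–S430 can be sketched as follows (names illustrative):

```python
def max_pool_over_samples(maps):
    """S410/S420: element-wise maximum across the sample (time) dimension,
    collapsing a stack of feature maps into a single feature map."""
    pooled = [row[:] for row in maps[0]]  # copy the first map as the running max
    for m in maps[1:]:
        for i, row in enumerate(m):
            for j, value in enumerate(row):
                if value > pooled[i][j]:
                    pooled[i][j] = value
    return pooled

def classification_map_max_variant(first_maps, second_maps):
    """S430: per-pixel difference between the pooled water purification map
    and the pooled sewage map."""
    sewage = max_pool_over_samples(first_maps)
    purified = max_pool_over_samples(second_maps)
    return [[p - s for p, s in zip(p_row, s_row)]
            for p_row, s_row in zip(purified, sewage)]
```

Compared with the per-frame difference of fig. 5, this variant yields a single 2-D classification feature map rather than a time-indexed stack.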
In step S160, the classification feature map is passed through a classifier to obtain a classification loss function value. Specifically, in the embodiment of the present application, the classifier includes an encoder, and the encoder may be composed of a convolutional layer, a pooling layer, or a fully-connected layer.
Specifically, in the embodiment of the present application, the process of passing the classification feature map through a classifier to obtain a classification loss function value includes: first, the classification feature map is passed through one or more fully connected layers to obtain a classification feature vector; that is, the classification feature map is encoded using one or more fully connected layers as an encoder, so as to make full use of the information at each position of the classification feature map when generating the classification feature vector. Then, the classification feature vector is passed through a Softmax classification function to obtain a classification result. The classification result and the true value are then input into a loss function (e.g., a cross-entropy loss function) to obtain the classification loss function value.
Fig. 7 illustrates a flow chart of passing the classification feature map through a classifier to obtain a classification loss function value in a training method of a neural network for sewage treatment detection based on a dynamic water flow image according to an embodiment of the present application. As shown in fig. 7, passing the classification feature map through a classifier to obtain a classification loss function value includes: s510, enabling the classification feature map to pass through one or more full-connection layers to obtain a classification feature vector; s520, enabling the classification feature vectors to pass through a classification function to obtain a classification result; and S530, inputting the classification result and the real value into a loss function to obtain the classification loss function value.
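Steps S510–S530 can be sketched with a single fully connected layer as the encoder, a Softmax function, and a cross-entropy loss. The weight values and names below are illustrative placeholders, not parameters from the source:

```python
import math

def fully_connected(vec, weights, biases):
    """S510: encode the flattened classification feature map with one FC layer."""
    return [sum(w * v for w, v in zip(row, vec)) + b
            for row, b in zip(weights, biases)]

def softmax(logits):
    """S520: Softmax classification function."""
    peak = max(logits)                      # subtract the max for numerical stability
    exps = [math.exp(z - peak) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(probs, true_class):
    """S530: cross-entropy between predicted distribution and the true label."""
    return -math.log(probs[true_class])

def classification_loss(feature_map, weights, biases, true_class):
    flat = [v for row in feature_map for v in row]   # flatten the 2-D map
    probs = softmax(fully_connected(flat, weights, biases))
    return cross_entropy(probs, true_class)
```

In practice the encoder would be a trained layer stack; the point here is only the data flow from feature map to scalar loss.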
In step S170, parameters of the first convolutional neural network and the second convolutional neural network are updated based on a weighted sum of the co-correlation loss function value and the classification loss function value.
Specifically, in this embodiment of the present application, in the training process, the first convolutional neural network and the second convolutional neural network may be updated synchronously based on the co-correlation loss function value and the classification loss function value, that is, the two networks are trained jointly.
Alternatively, the first convolutional neural network and the second convolutional neural network may be trained separately. For example, in one iteration, the parameters of the second convolutional neural network are first fixed and the parameters of the first convolutional neural network are updated through gradient back propagation; then the parameters of the first convolutional neural network are fixed and the parameters of the second convolutional neural network are updated through gradient back propagation.
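The alternating scheme can be illustrated on a toy two-parameter problem, using numeric gradients in place of back propagation. The loss terms below are invented stand-ins for the patent's two losses, chosen only so that the weighted-sum objective is differentiable:

```python
def total_loss(a, b, w_corr=0.5, w_cls=0.5):
    """Toy stand-in for the S170 objective: a 'co-correlation' term that is
    small when the two parameters are uncorrelated, plus a 'classification'
    term pulling each parameter toward its own target."""
    corr_term = (a * b) ** 2
    cls_term = (a - 1.0) ** 2 + (b + 1.0) ** 2
    return w_corr * corr_term + w_cls * cls_term

def numeric_grad(f, x, eps=1e-6):
    """Central-difference gradient, standing in for autograd back propagation."""
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

def alternating_iteration(a, b, lr=0.1):
    """One iteration of the separate-training scheme: fix b and update a by
    gradient descent, then fix the new a and update b."""
    a = a - lr * numeric_grad(lambda x: total_loss(x, b), a)
    b = b - lr * numeric_grad(lambda y: total_loss(a, y), b)
    return a, b
```

Each half-step descends the same weighted-sum objective while the other network's parameters are held constant, which is exactly the structure of the alternating update described above.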
According to another aspect of the present application, an intelligent sewage treatment detection method based on a deep neural network is further provided.
FIG. 8 illustrates a flow chart of an intelligent sewage treatment detection method based on a deep neural network according to an embodiment of the present application. As shown in fig. 8, the intelligent sewage treatment detection method based on a deep neural network according to the embodiment of the present application includes: S610, acquiring multi-frame images of the dynamic water flow to be detected; S620, inputting the images into the first convolutional neural network and the classifier trained according to the above training method of a neural network for sewage treatment detection based on dynamic water flow images, wherein the output of the classifier is a first probability that the sewage treatment is qualified and a second probability that the sewage treatment is unqualified; and S630, determining whether the sewage treatment is qualified based on the first probability and the second probability.
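Step S630 reduces to comparing the classifier's two output probabilities; a minimal sketch (the function name and tie-breaking rule are illustrative):

```python
def detect(probabilities):
    """S630: map the classifier output to a final verdict.

    probabilities[0]: first probability  (sewage treatment qualified)
    probabilities[1]: second probability (sewage treatment unqualified)"""
    p_qualified, p_unqualified = probabilities
    return "qualified" if p_qualified >= p_unqualified else "unqualified"
```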
In summary, the training method of a neural network for sewage treatment detection based on dynamic water flow images and the intelligent sewage treatment detection method based on a deep neural network have been explained, in which sewage treatment detection for dynamic water flow is performed with a deep-learning computer vision method. Specifically, in the training process, the convolutional neural networks are trained by minimizing the co-correlation loss function value, so that they eliminate the influence of the dynamic features in the images as much as possible, and the parameters of the first convolutional neural network and the second convolutional neural network are updated by back-propagating a weighted sum of the co-correlation loss function value and the classification loss function value, thereby enhancing the accuracy of the model.
Exemplary System
FIG. 9 illustrates a block diagram of a training system for a neural network for sewage treatment detection based on dynamic flow images, according to an embodiment of the present application.
As shown in fig. 9, the training system 900 for a neural network for sewage treatment detection based on dynamic water flow images according to an embodiment of the present application includes: an image acquisition unit 910 for acquiring multi-frame images of sewage before treatment and multi-frame images of purified water after treatment, wherein the multi-frame images of the sewage and the multi-frame images of the purified water have the same number of image frames and the same time sequence distribution; a first feature map generation unit 920 for passing the multi-frame images of the sewage obtained by the image acquisition unit 910 through a first convolutional neural network to obtain a plurality of first feature maps; a second feature map generation unit 930 for passing the multi-frame images of the purified water obtained by the image acquisition unit 910 through a second convolutional neural network to obtain a plurality of second feature maps; a co-correlation loss function value calculation unit 940 for calculating a co-correlation loss function value between the plurality of first feature maps obtained by the first feature map generation unit 920 and the plurality of second feature maps obtained by the second feature map generation unit 930, wherein the co-correlation loss function value is used to represent a correlation between the plurality of first feature maps and the plurality of second feature maps; a classification feature map generation unit 950 for calculating differences between the plurality of first feature maps obtained by the first feature map generation unit 920 and the plurality of second feature maps obtained by the second feature map generation unit 930 to obtain a classification feature map; a classification loss function value generation unit 960 for passing the classification feature map obtained by the classification feature map generation unit 950 through a classifier to obtain a classification loss function value; and a parameter updating unit 970 for updating parameters of the first convolutional neural network and the second convolutional neural network based on a weighted sum of the co-correlation loss function value obtained by the co-correlation loss function value calculation unit 940 and the classification loss function value obtained by the classification loss function value generation unit 960.
In an example, in the above training system 900 for a neural network for sewage treatment detection based on dynamic water flow images, as shown in fig. 10, the co-correlation loss function value calculation unit 940 includes: a first feature value generation subunit 941 for performing global mean pooling on the plurality of first feature maps respectively to obtain a plurality of first feature values; a second feature value generation subunit 942 for performing global mean pooling on the plurality of second feature maps respectively to obtain a plurality of second feature values; and a calculation subunit 943 for calculating the co-correlation loss function value between the plurality of first feature values obtained by the first feature value generation subunit 941 and the plurality of second feature values obtained by the second feature value generation subunit 942 according to the following formula:
$$\mathcal{L}_{cc}=\frac{\sum_{i}\left(x_{i}-\bar{x}\right)\left(y_{i}-\bar{y}\right)}{\sqrt{\sum_{i}\left(x_{i}-\bar{x}\right)^{2}}\sqrt{\sum_{j}\left(y_{j}-\bar{y}\right)^{2}}}$$
wherein $x_i$ represents the $i$-th first feature value, $\bar{x}$ represents the average of the plurality of first feature values, $y_j$ represents the $j$-th second feature value, and $\bar{y}$ represents the average of the plurality of second feature values.
In an example, in the training system 900 of the neural network for sewage treatment detection based on dynamic water flow images, as shown in fig. 11, the classification feature map generating unit 950 includes: a difference feature map generation subunit 951, configured to calculate feature value differences between the plurality of first feature maps and the plurality of second feature maps by pixel position in a time-series dimension, respectively, to obtain a plurality of difference feature maps; and an arrangement subunit 952, configured to arrange the plurality of differential feature maps obtained by the differential feature map generation subunit 951 in a time-series dimension, so as to obtain the classification feature map.
In another example, in the training system 900 for a neural network for sewage treatment detection based on dynamic water flow images, as shown in fig. 12, the classification feature map generating unit 950 includes: a sewage feature map generation subunit 953, configured to perform maximum pooling on sample dimensions on the plurality of first feature maps to obtain a sewage feature map; a water purification feature map generation subunit 954, configured to perform maximum pooling processing on the plurality of second feature maps in sample dimension to obtain a water purification feature map; and a difference calculating subunit 955 configured to calculate a difference per pixel between the water purification feature map obtained by the water purification feature map generating subunit 954 and the sewage feature map obtained by the sewage feature map generating subunit 953 to obtain the classification feature map.
In an example, in the above training system 900 for a neural network for sewage treatment detection based on dynamic water flow images, as shown in fig. 13, the classification loss function value generation unit 960 includes: a classification feature vector generation subunit 961 for passing the classification feature map through one or more fully connected layers to obtain a classification feature vector; a classification result generation subunit 962 for passing the classification feature vector obtained by the classification feature vector generation subunit 961 through a classification function to obtain a classification result; and a loss function value calculation subunit 963 for inputting the classification result obtained by the classification result generation subunit 962 and the true value into a loss function to obtain the classification loss function value.
In one example, in the training system 900 for a dynamic water flow image-based sewage treatment detection neural network described above, the first convolutional neural network and the second convolutional neural network are deep residual networks.
Here, it will be understood by those skilled in the art that the specific functions and operations of the respective units and modules in the training system 900 described above have been described in detail in the description of the neural network training method for sewage treatment detection based on dynamic flow images with reference to fig. 1 to 7, and thus, a repetitive description thereof will be omitted.
As described above, the training system 900 according to the embodiment of the present application may be implemented in various terminal devices, such as a server for sewage treatment detection, and the like. In one example, the training system 900 according to embodiments of the present application may be integrated into a terminal device as a software module and/or a hardware module. For example, the training system 900 may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the training system 900 may also be one of many hardware modules of the terminal device.
Alternatively, in another example, the training system 900 and the terminal device may be separate devices, and the training system 900 may be connected to the terminal device through a wired and/or wireless network and transmit the interaction information according to an agreed data format.
According to another aspect of the application, an intelligent sewage treatment detection system based on the deep neural network is also provided.
FIG. 14 illustrates a block diagram of an intelligent deep neural network-based sewage treatment detection system according to an embodiment of the present application. As shown in fig. 14, the intelligent sewage treatment detection system 1400 based on a deep neural network according to the embodiment of the present application includes: an image acquisition unit 1410 for acquiring multi-frame images of the dynamic water flow to be detected; a classification unit 1420 for inputting the images obtained by the image acquisition unit 1410 into the first convolutional neural network and the classifier trained according to the above training method of a neural network for sewage treatment detection based on dynamic water flow images, wherein the output of the classifier is a first probability that the sewage treatment is qualified and a second probability that the sewage treatment is unqualified; and a detection result generation unit 1430 for obtaining a detection result of whether the sewage treatment is qualified based on the first probability and the second probability obtained by the classification unit 1420.
Here, it will be understood by those skilled in the art that the detailed functions and operations of the respective units and modules in the above-described intelligent sewage treatment detecting system 1400 have been described in detail in the above description of the intelligent sewage treatment detecting method based on the deep neural network with reference to fig. 8, and thus, a repetitive description thereof will be omitted.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 15.
FIG. 15 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 15, the electronic device 10 includes one or more processors 11 and a memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer readable storage medium and executed by the processor 11 to implement the above-described neural network training method for dynamic flow image-based sewage treatment detection, or the functions of the intelligent sewage treatment detection method based on deep neural network, and/or other desired functions according to the various embodiments of the present application. Various content such as co-correlation loss function values, classification loss function values, and the like may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input system 13 and an output system 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input system 13 may comprise, for example, a keyboard, a mouse, etc.
The output system 14 may output various information including classification results and the like to the outside. The output system 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present application are shown in fig. 15, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 10 may include any other suitable components depending on the particular application.

Claims (10)

1. A training method of a neural network for sewage treatment detection based on dynamic water flow images is characterized by comprising the following steps:
acquiring multi-frame images of sewage before treatment and multi-frame images of purified water after treatment, wherein the multi-frame images of the sewage and the multi-frame images of the purified water have the same number of image frames and the same time sequence distribution;
passing the multi-frame image of the sewage through a first convolutional neural network to obtain a plurality of first feature maps;
passing the multi-frame images of the purified water through a second convolutional neural network to obtain a plurality of second feature maps;
calculating a co-correlation loss function value between the plurality of first feature maps and the plurality of second feature maps, the co-correlation loss function value being indicative of a correlation between the plurality of first feature maps and the plurality of second feature maps;
calculating differences between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map;
passing the classification feature map through a classifier to obtain a classification loss function value; and
updating parameters of the first convolutional neural network and the second convolutional neural network based on a weighted sum of the co-correlation loss function values and the classification loss function values.
2. The method for training a neural network for sewage treatment detection based on dynamic water flow images according to claim 1, wherein calculating a co-correlation loss function value between the plurality of first feature maps and the plurality of second feature maps, the co-correlation loss function value being used for representing a correlation between the plurality of first feature maps and the plurality of second feature maps, comprises:
performing global mean pooling on the plurality of first feature maps respectively to obtain a plurality of first feature values;
performing global mean pooling on the plurality of second feature maps respectively to obtain a plurality of second feature values; and
calculating the co-correlation loss function value between the plurality of first feature values and the plurality of second feature values with the following formula:
$$\mathcal{L}_{cc}=\frac{\sum_{i}\left(x_{i}-\bar{x}\right)\left(y_{i}-\bar{y}\right)}{\sqrt{\sum_{i}\left(x_{i}-\bar{x}\right)^{2}}\sqrt{\sum_{j}\left(y_{j}-\bar{y}\right)^{2}}}$$
wherein $x_i$ represents the $i$-th first feature value, $\bar{x}$ represents the average of the plurality of first feature values, $y_j$ represents the $j$-th second feature value, and $\bar{y}$ represents the average of the plurality of second feature values.
3. The dynamic water flow image-based sewage treatment detection neural network training method according to claim 1, wherein calculating differences between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map comprises:
respectively calculating the difference of the characteristic values of the plurality of first characteristic maps and the plurality of second characteristic maps according to the pixel positions in a time sequence dimension to obtain a plurality of difference characteristic maps; and
arranging the plurality of differential feature maps in a time sequence dimension to obtain the classification feature map.
4. The dynamic water flow image-based sewage treatment detection neural network training method according to claim 1, wherein calculating differences between the plurality of first feature maps and the plurality of second feature maps to obtain a classification feature map comprises:
performing maximum value pooling treatment on the plurality of first feature maps in a sample dimension to obtain a sewage feature map;
performing maximum value pooling processing on the plurality of second feature maps in a sample dimension to obtain a purified water feature map; and
calculating the difference value between the water purification feature map and the sewage feature map according to pixel position to obtain the classification feature map.
5. The dynamic water flow image-based sewage treatment detection neural network training method according to claim 1, wherein the step of passing the classification feature map through a classifier to obtain a classification loss function value comprises the steps of:
passing the classification feature map through one or more fully connected layers to obtain a classification feature vector;
passing the classification feature vector through a classification function to obtain a classification result; and
inputting the classification result and the ground-truth value into a loss function to obtain the classification loss function value.
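Claim 5's pipeline (fully connected layers, a classification function, then a loss against the ground truth) might look like the following NumPy sketch with a single fully connected layer, softmax as the classification function, and cross-entropy as the loss; these concrete choices are assumptions, since the claim does not fix them:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classification_loss(feature_vec, weights, bias, true_label):
    logits = weights @ feature_vec + bias      # one fully connected layer
    probs = softmax(logits)                    # classification function -> classification result
    return -np.log(probs[true_label] + 1e-12)  # cross-entropy against the true class index
```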
6. The dynamic water flow image-based sewage treatment detection neural network training method according to claim 1, wherein the first convolutional neural network and the second convolutional neural network are deep residual networks.
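Claim 6 names deep residual networks, whose defining feature is the identity shortcut around a convolutional branch. A toy sketch of that connection (`conv_fn` stands in for the convolutional branch and is an assumption):

```python
import numpy as np

def residual_block(x, conv_fn):
    # Residual connection: output = F(x) + x, the core idea of a deep residual network.
    return conv_fn(x) + x
```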
7. An intelligent sewage treatment detection method based on a deep neural network is characterized by comprising the following steps:
acquiring multi-frame images of a dynamic water flow to be detected;
inputting the images into a first convolutional neural network and a classifier trained according to the dynamic water flow image-based sewage treatment detection neural network training method of any one of claims 1 to 6, the output of the classifier being a first probability that the sewage treatment detection is qualified and a second probability that the sewage treatment detection is unqualified; and
determining whether the sewage treatment is qualified based on the first probability and the second probability.
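The final decision step of claim 7 reduces to comparing the two probabilities; a trivial sketch (function and label names are assumed for illustration):

```python
def sewage_treatment_result(p_qualified, p_unqualified):
    # Pick the class with the higher probability from the classifier's two outputs.
    return "qualified" if p_qualified >= p_unqualified else "unqualified"
```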
8. A training system of a neural network for sewage treatment detection based on dynamic water flow images is characterized by comprising:
an image acquisition unit for acquiring multi-frame images of sewage before treatment and multi-frame images of purified water after treatment, wherein the multi-frame images of the sewage and the multi-frame images of the purified water have the same number of image frames and the same time-series distribution;
a first feature map generation unit for passing the multi-frame images of the sewage obtained by the image acquisition unit through a first convolutional neural network to obtain a plurality of first feature maps;
a second feature map generation unit for passing the multi-frame images of the purified water obtained by the image acquisition unit through a second convolutional neural network to obtain a plurality of second feature maps;
a co-correlation loss function value calculation unit for calculating a co-correlation loss function value between the plurality of first feature maps obtained by the first feature map generation unit and the plurality of second feature maps obtained by the second feature map generation unit, the co-correlation loss function value being indicative of the correlation between the plurality of first feature maps and the plurality of second feature maps;
a classification feature map generation unit for calculating differences between the plurality of first feature maps obtained by the first feature map generation unit and the plurality of second feature maps obtained by the second feature map generation unit to obtain a classification feature map;
a classification loss function value generation unit for passing the classification feature map obtained by the classification feature map generation unit through a classifier to obtain a classification loss function value; and
a parameter updating unit for updating parameters of the first convolutional neural network and the second convolutional neural network based on a weighted sum of the co-correlation loss function value obtained by the co-correlation loss function value calculation unit and the classification loss function value obtained by the classification loss function value generation unit.
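The parameter updating unit combines the two loss values before updating the networks. A minimal sketch of the weighted sum and a plain gradient step; the weight `alpha`, the learning rate, and the use of SGD are all assumptions, as the claims do not fix an optimizer:

```python
def total_loss(corr_loss, cls_loss, alpha=0.5):
    # Weighted sum of the co-correlation loss and the classification loss.
    return alpha * corr_loss + (1.0 - alpha) * cls_loss

def sgd_step(params, grads, lr=0.01):
    # One gradient step applied to the parameters of both networks.
    return [p - lr * g for p, g in zip(params, grads)]
```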
9. An intelligent sewage treatment detection system based on a deep neural network is characterized by comprising:
an image acquisition unit for acquiring multi-frame images of the dynamic water flow to be detected;
a classification unit for inputting the images obtained by the image acquisition unit into a first convolutional neural network and a classifier trained according to the dynamic water flow image-based sewage treatment detection neural network training method of any one of claims 1 to 6, the output of the classifier being a first probability that the sewage treatment detection is qualified and a second probability that the sewage treatment detection is unqualified; and
a detection result generation unit for determining, based on the first probability and the second probability obtained by the classification unit, whether the sewage treatment is qualified.
10. An electronic device, comprising:
a processor; and
a memory having stored therein computer program instructions which, when executed by the processor, cause the processor to perform the dynamic water flow image-based sewage treatment detection neural network training method according to any one of claims 1 to 6, or the deep neural network-based intelligent sewage treatment detection method according to claim 7.
CN202110049949.0A 2021-01-14 2021-01-14 Dynamic water flow image-based sewage treatment detection neural network training method Withdrawn CN112733750A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110049949.0A CN112733750A (en) 2021-01-14 2021-01-14 Dynamic water flow image-based sewage treatment detection neural network training method

Publications (1)

Publication Number Publication Date
CN112733750A true CN112733750A (en) 2021-04-30

Family

ID=75593136

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110049949.0A Withdrawn CN112733750A (en) 2021-01-14 2021-01-14 Dynamic water flow image-based sewage treatment detection neural network training method

Country Status (1)

Country Link
CN (1) CN112733750A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113139518A (en) * 2021-05-14 2021-07-20 杭州旭颜科技有限公司 Section bar cutting state monitoring method based on industrial internet
CN113139518B (en) * 2021-05-14 2022-07-29 江苏中天互联科技有限公司 Section bar cutting state monitoring method based on industrial internet
CN115521020A (en) * 2021-06-25 2022-12-27 中国石油化工股份有限公司 Chemical sewage treatment method and system
CN115521020B (en) * 2021-06-25 2024-06-04 中国石油化工股份有限公司 Chemical wastewater treatment method and system
CN116051506A (en) * 2023-01-28 2023-05-02 东莞市言科新能源有限公司 Intelligent production system and method for polymer lithium ion battery


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210430)