CN115147617A - Intelligent sewage treatment monitoring method based on computer vision - Google Patents

Info

Publication number
CN115147617A
Authority
CN
China
Prior art keywords
region
measured
area
image
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211081028.3A
Other languages
Chinese (zh)
Other versions
CN115147617B (en)
Inventor
王继富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaocheng Jizhong Environmental Protection Technology Co ltd
Original Assignee
Liaocheng Jizhong Environmental Protection Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaocheng Jizhong Environmental Protection Technology Co ltd filed Critical Liaocheng Jizhong Environmental Protection Technology Co ltd
Priority to CN202211081028.3A priority Critical patent/CN115147617B/en
Publication of CN115147617A publication Critical patent/CN115147617A/en
Application granted granted Critical
Publication of CN115147617B publication Critical patent/CN115147617B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G06V10/763 Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02W CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO WASTEWATER TREATMENT OR WASTE MANAGEMENT
    • Y02W10/00 Technologies for wastewater treatment
    • Y02W10/10 Biological treatment of water, waste water, or sewage

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to an intelligent sewage treatment monitoring method based on computer vision. The method comprises the following steps: acquiring an alum floc image to be detected; obtaining each first region in the alum floc image to be detected according to the corresponding edge image; recording each first region whose adjacent region set is not empty as a first region to be measured; obtaining the corresponding transmittance, connection dispersion and matching region according to the pixel values of the pixel points in each first region to be measured; obtaining the filamentous edge uniformity and filamentous edge tightness of each first region to be measured according to the corner points corresponding to each first region to be measured, and from these the corresponding merging degree; obtaining a target alum floc image according to the merging degrees and matching regions; and obtaining the flocculant demand state according to the alum floc image to be detected, the target alum floc image and a trained target network. The invention detects the flocculant dosing condition more quickly and accurately at lower cost.

Description

Intelligent monitoring method for sewage treatment based on computer vision
Technical Field
The invention relates to the technical field of image processing, in particular to an intelligent sewage treatment monitoring method based on computer vision.
Background
Current sewage treatment methods mainly include physical methods such as sedimentation and filtration, biological methods such as activated sludge and biofilm processes, and chemical methods such as coagulation and ion exchange, among which the flocculation sedimentation method is the most widely applied. When the flocculation sedimentation method is used, insufficient flocculant yields a poor flocculation and settling effect, while excess flocculant degrades the water quality and wastes reagent, so an appropriate dose must be added. The required dose is governed by many factors, such as the water temperature, pH, alkalinity, and the nature and concentration of the impurities in the water, which makes it difficult to control properly.
Three methods are currently used to judge the flocculant dose: empirical judgment, the electrical pulse method, and turbidimetry. Empirical judgment estimates the dose only by observing with the naked eye the amount of alum floc formed by the flocculant and the water; it is strongly affected by operator experience and its accuracy is unstable. The electrical pulse method is strongly affected by flow velocity, temperature, water composition and other factors, and has low resolution. Turbidimetry offers high detection accuracy but a limited detection range, slow response, and expensive equipment. How to detect the flocculant dosing condition more quickly and accurately at lower cost therefore remains an open problem.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide an intelligent monitoring method for sewage treatment based on computer vision, which adopts the following technical scheme:
the invention provides a computer vision-based intelligent monitoring method for sewage treatment, which comprises the following steps:
acquiring an alum floc image to be detected and an alum floc gray image, wherein the alum floc image to be detected is an RGB image and the alum floc gray image is obtained by preprocessing the gray image corresponding to the alum floc image to be detected;
extracting the edges in the alum floc gray image to obtain a corresponding edge image; performing region division on the alum floc image to be detected according to the edge image to obtain each first region in the alum floc image to be detected; acquiring the adjacent region set corresponding to each first region, wherein the adjacent region set comprises the first regions adjacent to the corresponding first region; recording each first region whose adjacent region set is not empty as a first region to be measured;
obtaining the transmittance corresponding to each first region to be measured according to the values of the R, G and B channels of each pixel point in that region; obtaining the connection dispersion and the matching region corresponding to each first region to be measured according to the transmittances;
performing corner detection on the edge of each first region to be measured to obtain the corner points corresponding to each first region to be measured; obtaining the filamentous edge uniformity and filamentous edge tightness corresponding to each first region to be measured according to its corner points; obtaining the merging degree corresponding to each first region to be measured according to its filamentous edge uniformity, filamentous edge tightness and connection dispersion;
obtaining a target alum floc image according to the merging degree and matching region corresponding to each first region to be measured; and obtaining the flocculant demand state at the corresponding position according to the alum floc image to be detected, the target alum floc image and a trained target network.
Preferably, performing region division on the alum floc image to be detected according to the edge image to obtain each first region in the alum floc image to be detected comprises:
acquiring each closed edge in the edge image, and recording the area inside each closed edge as a first region;
and mapping each first region in the edge image onto the alum floc image to be detected to obtain each first region in the alum floc image to be detected.
Preferably, acquiring the adjacent region set corresponding to each first region, wherein the adjacent region set comprises the first regions adjacent to the corresponding first region, and recording each first region whose adjacent region set is not empty as a first region to be measured, comprises:
for any first region: recording every other first region that contains a pixel point in the eight-neighborhood of any pixel point of this first region as a first adjacent region of this first region, and recording the set of all first adjacent regions of this first region as its adjacent region set;
recording each first region whose adjacent region set is empty as a target alum floc region, wherein a target alum floc region is a region corresponding to a complete alum floc; and recording each first region whose adjacent region set is not empty as a first region to be measured.
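The eight-neighborhood adjacency test above can be sketched as follows. This is an illustrative implementation only; the label-map representation (0 for edge/background pixels, positive integers for first regions) is an assumption, not part of the patent:

```python
import numpy as np

def adjacent_region_sets(labels):
    """For each labelled region, collect the other region labels that appear
    in the 8-neighbourhood of any of its pixels (0 = background/edge)."""
    h, w = labels.shape
    sets = {int(r): set() for r in np.unique(labels) if r != 0}
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(h):
        for x in range(w):
            r = labels[y, x]
            if r == 0:
                continue
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    n = labels[ny, nx]
                    if n != 0 and n != r:
                        sets[int(r)].add(int(n))
    return sets

labels = np.array([
    [1, 1, 0, 2],
    [1, 1, 2, 2],
    [0, 0, 0, 0],
    [3, 3, 0, 0],
])
sets = adjacent_region_sets(labels)
# Region 1 touches region 2; region 3 has an empty set, i.e. a complete floc.
```

A region whose set comes back empty is a target alum floc region; the rest are first regions to be measured.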
Preferably, obtaining the transmittance corresponding to each first region to be measured according to the values of the R, G and B channels of each pixel point in that region comprises:
for any first region to be measured:
obtaining the first-order color moment of the R channel corresponding to the first region to be measured according to the R-channel values of its pixel points; obtaining the first-order color moment of the G channel according to the G-channel values of its pixel points; obtaining the first-order color moment of the B channel according to the B-channel values of its pixel points;
obtaining the transmittance corresponding to the first region to be measured according to the first-order color moments of the R, G and B channels and the gray values of the corresponding pixel points;
the transmittance corresponding to the first region to be measured is computed from the following quantities (the formula itself is reproduced only as an image in the original publication): the first-order color moment of the R channel corresponding to the first region to be measured; the first-order color moment of the G channel corresponding to the first region to be measured; the first-order color moment of the B channel corresponding to the first region to be measured; the mean of the gray values of all pixel points in the first region to be measured; the maximum gray value in the first region to be measured; the minimum gray value in the first region to be measured; and a first adjustment parameter.
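As a minimal sketch: the first-order color moment of a channel is simply its mean over the region, and a transmittance score can be built from the quantities the claim names. The combining formula below is an illustrative stand-in, not the patent's exact expression (which is published only as an image); `eps` plays the role of the first adjustment parameter:

```python
import numpy as np

def first_order_moments(region_rgb):
    """First-order color moment of each channel = the channel mean over the region.
    region_rgb: (N, 3) array of the region's R, G, B pixel values."""
    return region_rgb.mean(axis=0)

def transmittance(region_rgb, region_gray, eps=1.0):
    """Illustrative transmittance score from the claim's ingredients: the three
    first-order color moments, the mean gray value, and the gray-value spread.
    Brighter, lower-contrast regions score higher."""
    mu_r, mu_g, mu_b = first_order_moments(region_rgb)
    spread = region_gray.max() - region_gray.min()
    return (mu_r + mu_g + mu_b) / 3.0 * region_gray.mean() / (spread + eps)

rgb = np.array([[10.0, 20.0, 30.0], [30.0, 40.0, 50.0]])
gray = np.array([20.0, 40.0])
t = transmittance(rgb, gray)
```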
Preferably, obtaining the connection dispersion and the matching region corresponding to each first region to be measured according to the transmittance corresponding to each first region to be measured comprises:
for any first region to be measured:
calculating the absolute difference between the transmittance of the first region to be measured and the transmittance of each first adjacent region in its adjacent region set;
taking the minimum of these absolute differences as the connection dispersion corresponding to the first region to be measured; and taking the first adjacent region whose transmittance has the minimum absolute difference from that of the first region to be measured as the matching region corresponding to the first region to be measured.
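In code, the connection dispersion and matching region reduce to a minimum over absolute transmittance differences. A sketch, where the dictionary representation of per-region transmittances is an assumption:

```python
def connection_dispersion(region, transmittance, neighbours):
    """Connection dispersion = smallest |T(region) - T(neighbour)| over the
    region's adjacent region set; the matching region is the neighbour that
    achieves this minimum."""
    diffs = {n: abs(transmittance[region] - transmittance[n]) for n in neighbours}
    match = min(diffs, key=diffs.get)
    return diffs[match], match

T = {1: 0.80, 2: 0.78, 3: 0.40}          # transmittance per first region
disp, match = connection_dispersion(1, T, {2, 3})
# Region 2 is closest in transmittance, so it is region 1's matching region.
```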
Preferably, the method for obtaining the filamentous edge uniformity corresponding to the first region to be measured comprises:
for any first region to be measured:
clustering the corner points corresponding to the first region to be measured using the DBSCAN clustering algorithm to obtain the clusters corresponding to the first region to be measured;
for any cluster corresponding to the first region to be measured: performing straight-line fitting on the corner points in the cluster to obtain the straight line corresponding to the cluster and the inclination angle of that line; equally dividing the range of inclination angles into a preset number a of sub-ranges;
assigning the straight line corresponding to each cluster to its sub-range, and counting the number of straight lines corresponding to the first region to be measured contained in each sub-range;
obtaining the filamentous edge uniformity corresponding to the first region to be measured according to the number of straight lines contained in each sub-range;
the filamentous edge uniformity corresponding to the first region to be measured is computed from the following quantities (the formula itself is reproduced only as an image in the original publication): the number of straight lines corresponding to the first region to be measured contained in the i-th sub-range; the mean of the numbers of straight lines contained in all the sub-ranges; the total number a of sub-ranges; a second adjustment parameter; and the product of the numbers of straight lines contained in the sub-ranges.
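The angle-binning step can be sketched as below. The combiner is an illustrative proxy built from the quantities the claim names (per-bin counts, their mean, the bin count `a`, an adjustment parameter `beta`, and the product of the counts); the patent's exact expression is published only as an image. For the corner clustering and line fits themselves, `sklearn.cluster.DBSCAN` and `numpy.polyfit` are natural choices:

```python
import numpy as np

def filament_uniformity(angles_deg, a=6, beta=1.0):
    """Bin the fitted-line inclination angles [0, 180) into `a` equal sub-ranges
    and score how evenly the lines spread over them.  An even spread (as on a
    filamentous floc edge, whose short segments point in many directions) gives
    zero deviation and a high score; an empty sub-range zeroes the product."""
    counts, _ = np.histogram(np.asarray(angles_deg) % 180.0, bins=a, range=(0.0, 180.0))
    deviation = np.abs(counts - counts.mean()).sum() / a   # 0 when perfectly even
    product = counts.prod()                                # 0 if any sub-range is empty
    return product / (deviation + beta)
```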
Preferably, the method for obtaining the filamentous edge tightness corresponding to the first region to be measured comprises:
for any first region to be measured:
acquiring the sub-ranges in which the number of straight lines is less than the mean number of straight lines per sub-range, and recording each straight line contained in those sub-ranges as a target straight line; acquiring all corner points in the clusters corresponding to the target straight lines, and recording them as target corner points;
sorting all target corner points by their positions on the edge of the first region to be measured to obtain a target corner point sequence; recording the first target corner point in the target corner point sequence as the first target corner point, and the last target corner point in the sequence as the second target corner point;
recording the edge line between the first target corner point and the second target corner point on the edge of the first region to be measured as the target edge line, wherein the target edge line contains all target corner points;
performing circle fitting on all pixel points on the target edge line to obtain the corresponding goodness of fit, recorded as the goodness of fit corresponding to the first region to be measured;
and calculating the product of the number of clusters corresponding to the first region to be measured and the number of corner points, and taking the ratio of this product to the goodness of fit as the filamentous edge tightness corresponding to the first region to be measured.
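The circle-fitting step can be sketched with the algebraic (Kasa) least-squares fit. The `1 / (1 + RMS residual)` goodness below is one plausible choice, an assumption on our part, since the patent does not spell its measure out:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.  points: (N, 2) array.
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) linearly."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    (cx2, cy2, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = cx2 / 2.0, cy2 / 2.0
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

def goodness_of_fit(points):
    """1 / (1 + RMS radial residual): 1.0 for a perfect circle, smaller for
    ragged edges (an assumed goodness measure)."""
    cx, cy, r = fit_circle(points)
    d = np.hypot(points[:, 0] - cx, points[:, 1] - cy)
    return 1.0 / (1.0 + np.sqrt(((d - r) ** 2).mean()))

theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
circle_pts = np.column_stack([3 + 2 * np.cos(theta), -1 + 2 * np.sin(theta)])
```

A smooth arc fits well (goodness near 1), keeping the ratio clusters x corners / goodness small; a ragged filamentous edge fits poorly, driving the tightness up.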
Preferably, obtaining the merging degree corresponding to each first region to be measured according to its filamentous edge uniformity, filamentous edge tightness and connection dispersion comprises:
for any first region to be measured:
calculating the product of the filamentous edge uniformity and the filamentous edge tightness corresponding to the first region to be measured as the edge integrity corresponding to the first region to be measured; and taking the ratio of the edge integrity to the connection dispersion corresponding to the first region to be measured as its merging degree.
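The merging degree is then a two-line computation; `eps` is an added safeguard of ours, not in the patent, against a zero connection dispersion when two regions have identical transmittance:

```python
def merging_degree(uniformity, tightness, dispersion, eps=1e-6):
    """Edge integrity = uniformity * tightness; merging degree = integrity
    divided by the connection dispersion.  High uniformity/tightness and low
    dispersion -> the two regions likely belong to the same alum floc."""
    return (uniformity * tightness) / (dispersion + eps)
```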
Preferably, obtaining the target alum floc image according to the merging degree and matching region corresponding to each first region to be measured comprises:
traversing each first region to be measured in the alum floc image to be detected and fusing every first region to be measured whose merging degree is greater than or equal to a judgment threshold with its matching region; after the traversal is complete, each resulting second region is composed of one or more first regions to be measured; acquiring the second adjacent region set corresponding to each second region, wherein the second adjacent region set comprises the second regions adjacent to the corresponding second region; recording each second region whose second adjacent region set is empty as a target alum floc region, and each second region whose set is not empty as a second region to be measured; obtaining the matching region and merging degree corresponding to each second region to be measured; if the merging degrees of all second regions to be measured are smaller than the judgment threshold, recording the second regions to be measured as target alum floc regions; if there are second regions to be measured whose merging degree is greater than or equal to the judgment threshold, fusing them with their matching regions to obtain third regions, each composed of one or more second regions to be measured; and so on, until no regions to be measured other than target alum floc regions remain in the alum floc image to be detected, thereby obtaining each target alum floc region in the alum floc image to be detected; and delineating each target alum floc region in the alum floc image to be detected, the resulting image being recorded as the target alum floc image.
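One traversal of the merge loop above can be sketched with a union-find over the merge candidates; the full procedure simply repeats such passes on the fused regions until no merging degree reaches the threshold. The data-structure choices here are illustrative, not prescribed by the patent:

```python
class DSU:
    """Minimal union-find over region ids."""
    def __init__(self, ids):
        self.parent = {i: i for i in ids}

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]   # path halving
            i = self.parent[i]
        return i

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def merge_pass(region_ids, candidates, threshold):
    """Fuse every region whose merging degree with its matching region reaches
    the threshold.  candidates: {region: (merging_degree, matching_region)}.
    Returns the resulting second regions as sets of first-region ids."""
    dsu = DSU(region_ids)
    for region, (degree, match) in candidates.items():
        if degree >= threshold:
            dsu.union(region, match)
    groups = {}
    for r in region_ids:
        groups.setdefault(dsu.find(r), set()).add(r)
    return list(groups.values())

groups = merge_pass([1, 2, 3, 4], {1: (0.9, 2), 2: (0.9, 1), 3: (0.2, 4)}, 0.5)
# Regions 1 and 2 fuse into one second region; 3 and 4 stay separate.
```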
Preferably, the flocculant demand state comprises: more flocculant needs to be added; a small amount of flocculant needs to be added; the flocculant dose does not need to be changed; and the flocculant has been added in excess.
The invention has the following beneficial effects:
in the invention, different alumen ustums corresponding to different depths are considered to possibly overlap in an image, so that the recognized alumen ustums are possibly larger when the alumen ustums in the image are detected, and in order to divide the different alumen ustums in the obtained to-be-detected alumen ustum image, first regions in the to-be-detected alumen ustum image are divided according to corresponding edge images, and the first regions with non-empty adjacent region sets are marked as first to-be-detected regions; the method comprises the steps of judging which first regions to be measured belong to the same alumen ustum based on the characteristics of the transmittance and the filamentous edges of the alumen ustum, and obtaining the transmittance, the connection dispersion and the corresponding matching regions corresponding to the first regions to be measured according to the values of three channels of R, G and B corresponding to each pixel point in each first region to be measured; then, performing corner point detection on the edge of each first region to be measured, and obtaining filiform edge uniformity and filiform edge tightness corresponding to each first region to be measured according to each corner point corresponding to each first region to be measured; then, obtaining the merging degree corresponding to each first region to be measured according to the filamentous edge uniformity, the filamentous edge tightness and the connection dispersion corresponding to each first region to be measured; the merging degree is used for judging whether the first region to be detected and the corresponding matching region are the same alumen ustum or not, so that each target alumen ustum region in the alumen ustum image to be detected is divided according to the merging degree and the matching region corresponding to each first region to be detected to obtain a target alumen ustum image, and the target alumen ustum region is a region 
corresponding to a complete alumen ustum; the method divides the target alum floc area in the alum floc image to be detected, and can improve the accuracy of subsequent determination of flocculant addition. And finally, obtaining the flocculant demand state at the corresponding position according to the alum blossom image to be detected, the target alum blossom image and the trained target network. According to the method, different alumen ustums in the image are divided according to the light transmittance of the alumen ustums and the characteristics of filamentous edges, so that the addition condition of the flocculating agent is judged more quickly and accurately according to the divided image, the influence of factors such as flow velocity, temperature and components in the flocculation tank is avoided, and the addition condition of the flocculating agent is detected more quickly and accurately at lower cost.
Drawings
To describe the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of an intelligent monitoring method for sewage treatment based on computer vision provided by the invention;
fig. 2 is a schematic view of a first region to be measured.
Detailed Description
To further explain the technical means adopted by the present invention to achieve its intended purpose and their effects, the intelligent monitoring method for sewage treatment based on computer vision is described in detail below with reference to the accompanying drawings and preferred embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the intelligent monitoring method for sewage treatment based on computer vision in detail with reference to the accompanying drawings.
The embodiment of the intelligent monitoring method for sewage treatment based on computer vision comprises the following steps:
as shown in fig. 1, the intelligent monitoring method for sewage treatment based on computer vision of this embodiment includes the following steps:
the method comprises the following steps of S1, acquiring an alum blossom image to be detected and an alum blossom gray image, wherein the alum blossom image to be detected is an RGB image, and the alum blossom gray image is a gray image obtained by preprocessing a gray image corresponding to the alum blossom image to be detected.
With social and technological development, the consumption of industrial, domestic and agricultural water has grown rapidly, and with it the discharge of industrial, domestic and agricultural wastewater. To protect the living environment, wastewater is treated to meet the national discharge standards before being discharged. Flocculation sedimentation is currently the most widely used sewage treatment method; when it is used, insufficient flocculant yields a poor flocculation and settling effect, while excess flocculant degrades the water quality and wastes reagent, so an appropriate dose must be added. To judge whether the flocculant dose is appropriate, this embodiment provides an intelligent sewage treatment monitoring method based on computer vision, which divides the alum flocs in the image according to their characteristics and then judges the dosing condition.
Because the flocculation tank is large, the flocculant is added separately at different positions, and the dose at each position is subsequently monitored separately. In this embodiment, the minimum amount of flocculant is first added to the wastewater to be treated at a given position in the flocculation tank; after stable alum flocs appear, a CCD camera fixed above the flocculation tank photographs that position to obtain a water body image containing the alum flocs there, recorded as the alum floc image to be detected; the alum floc image to be detected is an RGB image. The minimum amount of flocculant is the minimum of the recommended dosing range for the flocculant.
Next, the alum floc image to be detected is converted into the corresponding gray image; the conversion method is conventional and is not described again here. The gray image is then preprocessed to obtain the alum floc gray image. Specifically, to suppress environmental interference, the gray image is denoised with a median filter; to enhance the image and sharpen its details, the denoised gray image is convolved with the Laplacian operator, and the enhanced result is recorded as the alum floc gray image. The pixel points of the alum floc gray image correspond one-to-one with those of the alum floc image to be detected. Median filtering and the Laplacian operator are well known and are not described in detail.
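The preprocessing step can be sketched in plain NumPy as below; in practice `cv2.medianBlur` and `cv2.Laplacian` do the same job, and the RGB-to-gray conversion is the usual 0.299R + 0.587G + 0.114B weighting:

```python
import numpy as np

def median3x3(img):
    """3x3 median filter; border pixels are handled by edge padding."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    windows = [p[y:y + h, x:x + w] for y in range(3) for x in range(3)]
    return np.median(np.stack(windows), axis=0)

def laplacian_enhance(img):
    """Sharpen by subtracting the 4-neighbour Laplacian response."""
    p = np.pad(img.astype(float), 1, mode='edge')
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * p[1:-1, 1:-1]
    return np.clip(img - lap, 0, 255)

noisy = np.zeros((5, 5))
noisy[2, 2] = 255.0                      # a salt-noise impulse
denoised = median3x3(noisy)              # the impulse is removed
```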
S2, extracting the edges in the alum floc gray image to obtain a corresponding edge image; performing region division on the alum floc image to be detected according to the edge image to obtain each first region in the alum floc image to be detected; acquiring the adjacent region set corresponding to each first region, wherein the adjacent region set comprises the first regions adjacent to the corresponding first region; and recording each first region whose adjacent region set is not empty as a first region to be measured.
Alum floc is a floccule formed when the flocculant adsorbs together with the impurities in the wastewater to be treated, so the water becomes clearer once alum flocs appear and there are distinct edges between the flocs and the water. To analyze the alum flocs in the image, this embodiment performs edge extraction on the alum floc gray image with the Canny edge detector to obtain the corresponding edge image, which is a binary image. The Canny edge detector is well established and is not described again here.
When the alum flocs are small and few, too little flocculant has been added and sedimentation is insufficient; when the flocs are large and numerous, too much flocculant has been added; the flocculant dose can therefore be adjusted according to the size of the alum flocs in the image. A newly formed alum floc floats in the water and gradually sinks over time, and the alum floc image to be detected is taken only from the viewpoint above the water surface, so flocs at different depths may overlap in the image and be recognized as a single floc, making the recognized floc appear larger than it is. This embodiment therefore divides the alum floc image to be detected into many small regions, determines which small regions belong to the same alum floc according to the characteristics of alum flocs, and stitches the regions belonging to the same floc together, finally obtaining the complete image corresponding to each floc.
Firstly, each closed edge in the edge image is acquired, and the area inside each closed edge is recorded as a first region; each first region in the edge image is then mapped to the alum floc image to be detected to obtain each first region in the alum floc image to be detected, where a first region is a region forming part of an alum floc. The same alum floc may be composed of a plurality of first regions; as shown in fig. 2, the figure shows an alum floc region formed by combining three first regions.
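A minimal sketch of obtaining the first regions as the areas inside closed edges: flood-fill the non-edge pixels of the binary edge image and keep the connected components that do not touch the image border (anything reachable from the border is background, not the interior of a closed edge). The function name and 0/1 edge convention are illustrative, not from the patent.

```python
from collections import deque

def label_first_regions(edge):
    """edge: 2-D list of 0/1, where 1 marks an edge pixel.
    Returns {label: list of (row, col)} for each non-edge component
    that is fully enclosed by edges (does not touch the border)."""
    h, w = len(edge), len(edge[0])
    labels = [[0] * w for _ in range(h)]
    comps, next_label = {}, 0
    for sy in range(h):
        for sx in range(w):
            if edge[sy][sx] or labels[sy][sx]:
                continue
            next_label += 1
            q = deque([(sy, sx)])
            labels[sy][sx] = next_label
            cells, touches_border = [], False
            while q:
                y, x = q.popleft()
                cells.append((y, x))
                if y in (0, h - 1) or x in (0, w - 1):
                    touches_border = True
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w \
                            and not edge[ny][nx] and not labels[ny][nx]:
                        labels[ny][nx] = next_label
                        q.append((ny, nx))
            if not touches_border:  # enclosed by a closed edge
                comps[next_label] = cells
    return comps
```

A real implementation would run this on the Canny output and map the surviving components back onto the RGB image.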
Some first regions in the image are isolated; such a first region is a complete alum floc region. Other first regions have adjacent first regions (two adjacent first regions share a common edge); two adjacent first regions may belong to the same alum floc or to different alum flocs. Next, this embodiment analyzes each first region to determine which first regions belong to the same alum floc.
In this embodiment, first, an adjacent area set corresponding to each first area is obtained, where the adjacent area set includes a first area adjacent to the corresponding first area; for any first region: recording other first areas where pixel points in the eight adjacent areas of the pixel points included in the first area are located as first adjacent areas corresponding to the first area, and recording a set formed by all the first adjacent areas corresponding to the first area as an adjacent area set corresponding to the first area. As shown in fig. 2, 1, 2, and 3 in the figure are three first regions, where the neighboring region set corresponding to 1 includes 2 and 3.
Thus, the adjacent area set corresponding to each first area can be obtained.
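The construction of the adjacent region sets from the eight-neighborhoods described above can be sketched as follows; the label-map convention (0 = background, positive integers = first-region ids) and the function name are assumptions for illustration.

```python
import numpy as np

def adjacent_region_sets(labels):
    """labels: 2-D int array, 0 = background, r > 0 = first region r.
    Returns {region_id: set of ids of regions that are 8-adjacent}."""
    h, w = labels.shape
    adj = {int(r): set() for r in np.unique(labels) if r != 0}
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for y in range(h):
        for x in range(w):
            r = int(labels[y, x])
            if r == 0:
                continue
            for dy, dx in offsets:           # scan the 8-neighborhood
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    n = int(labels[ny, nx])
                    if n != 0 and n != r:
                        adj[r].add(n)
    return adj
```

Regions whose set comes back empty are the isolated (complete) alum floc regions; the rest are the first regions to be measured.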
For the first regions with empty neighboring region sets, the first regions are not adjacent to other first regions, and form a complete alum blossom, so the first regions with empty neighboring region sets are regarded as target alum blossom regions in this embodiment, and the target alum blossom regions are regions corresponding to a complete alum blossom; and recording a first area of which the adjacent area set is not empty as a first area to be measured.
S3, obtaining the transmittance corresponding to each first region to be measured according to the values of the three channels R, G and B corresponding to each pixel point in each first region to be measured; and obtaining the corresponding connection dispersion and the corresponding matching area of each first area to be measured according to the corresponding transmittance of each first area to be measured.
In the present embodiment, part of the target alum floc regions in the image to be detected have been divided off according to step S2; next, this embodiment analyzes each first region to be measured and merges the first regions to be measured belonging to the same alum floc, so as to divide all the target alum floc regions in the image.
Alum flocs at different depths and positions in the water scatter and absorb light to different degrees, so different alum flocs present different shades. Moreover, larger, fully grown alum flocs are more compact, their filamentous edges are finer, their transmittance is poorer, and they can hardly bridge with other adsorbates; smaller, still-growing alum flocs are fragile, easily adsorb and bridge with other alum flocs or flocculated impurities, and have better transmittance. Based on this, in this embodiment the transmittance corresponding to each first region to be measured is obtained from the pixel values of the pixel points in each first region to be measured, specifically:
for any first region to be measured: considering that the first-order color moment reflects the overall brightness of a region, and that different transmittances appear as different brightness, the first-order color moments of the R, G and B channels corresponding to the first region to be measured, denoted $\mu_R$, $\mu_G$ and $\mu_B$, are obtained from the values of the R, G and B channels of each pixel point in the first region to be measured. The transmittance corresponding to the first region to be measured is then obtained from these three first-order color moments and the gray values of the corresponding pixel points; the calculation formula of the transmittance corresponding to the first region to be measured is:
$$T = \frac{(\mu_R + \mu_G + \mu_B)\,\bar{g}}{g_{\max} - g_{\min} + \varepsilon_1}$$

where $T$ is the transmittance of the first region to be measured; $\mu_R$, $\mu_G$ and $\mu_B$ are the first-order color moments of the R, G and B channels corresponding to the first region to be measured; $\bar{g}$ is the average of the gray values of all the pixel points in the first region to be measured (the average gray value); $g_{\max}$ and $g_{\min}$ are the maximum and minimum gray values in the first region to be measured; and $\varepsilon_1$ is a first adjustment parameter that prevents the denominator from being 0, so its value should be as small as possible but not 0, and can be set according to actual needs.
When the difference between the maximum and minimum gray values in the first region to be measured is smaller, the first-order color moments of the three channels are larger, and the average gray value is larger, the first region to be measured is brighter, its transmittance is better, and the corresponding value of the transmittance is larger. The higher the transmittance of a first region to be measured, the more likely it is that the first region to be measured and its first adjacent regions belong to the same alum floc.
To this end, the transmittance corresponding to each first region to be measured can be obtained according to the above process.
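As a sketch of this step: the first-order color moment of a channel is simply its mean value over the region. The exact way the terms are combined into a transmittance is an assumption reconstructed from the description (larger color moments and average gray value, and a smaller gray-value spread, should give a larger transmittance); the function name is illustrative.

```python
import numpy as np

def region_transmittance(region_rgb, eps=1e-6):
    """region_rgb: (N, 3) float array holding the R, G, B values of the
    N pixel points of one first region to be measured.
    eps plays the role of the first adjustment parameter."""
    mu_r, mu_g, mu_b = region_rgb.mean(axis=0)   # first-order color moments
    gray = region_rgb.mean(axis=1)               # simple per-pixel gray value
    g_mean, g_max, g_min = gray.mean(), gray.max(), gray.min()
    return (mu_r + mu_g + mu_b) * g_mean / (g_max - g_min + eps)
```

A bright, uniform region then scores much higher than a dark region with a wide gray-value spread, matching the monotonic behavior the text describes.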
Then, according to the transmittance of each first region to be measured and the transmittances of the first adjacent regions in its adjacent region set, the first adjacent region most likely to belong to the same alum floc as the first region to be measured is selected, and the connection dispersion is obtained, specifically:
for any first region to be determined:
respectively calculate the absolute difference between the transmittance of the first region to be measured and the transmittance of each first adjacent region in the corresponding adjacent region set; take the minimum of these absolute differences as the connection dispersion corresponding to the first region to be measured, and take the first adjacent region with the minimum absolute difference in transmittance as the matching region corresponding to the first region to be measured. In this embodiment, the calculation formula of the connection dispersion corresponding to the first region to be measured is:

$$D = \min\{\,|T - T_1|,\ |T - T_2|,\ \ldots,\ |T - T_b|\,\}$$

where $D$ is the connection dispersion corresponding to the first region to be measured; $T$ is its transmittance; $T_1, T_2, \ldots, T_b$ are the transmittances of the 1st, 2nd, ..., $b$-th first adjacent regions corresponding to the first region to be measured ($b$ is the number of first adjacent regions corresponding to the first region to be measured); and $\min\{\}$ is the minimum function.
When the transmittances of the first region to be measured and a first adjacent region are closer, the connection dispersion of the first region to be measured is smaller, and the first region to be measured and its matching region are more likely to belong to the same alum floc.
To this end, the embodiment can obtain the relationship dispersion and the corresponding matching region corresponding to each first region to be measured according to the above process.
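Selecting the matching region and the connection dispersion is then a minimal computation over the neighbor transmittances; the dict-based interface is an illustrative choice.

```python
def connection_dispersion(t_region, t_neighbors):
    """t_region: transmittance of the first region to be measured.
    t_neighbors: non-empty dict {neighbor_region_id: transmittance}.
    Returns (connection dispersion, matching region id)."""
    # matching region = neighbor with the smallest |transmittance difference|
    best = min(t_neighbors, key=lambda k: abs(t_region - t_neighbors[k]))
    return abs(t_region - t_neighbors[best]), best
```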
S4, performing corner detection on the edge of each first area to be measured to obtain each corner corresponding to each first area to be measured; obtaining the filiform edge uniformity and the filiform edge tightness corresponding to each first region to be measured according to each corner point corresponding to each first region to be measured; and obtaining the merging degree corresponding to each first region to be measured according to the filamentous edge uniformity, the filamentous edge tightness and the connection dispersion corresponding to each first region to be measured.
An alum floc is a cluster formed by the combination of floccules produced when the hydrolyzed flocculant adsorbs impurities in the water, so its edge is finely filamentous while the non-edge part has no fine features; meanwhile, alum flocs form by the natural combination of floccules after standing, so the directions of the different filamentous edges are dispersed and uniform rather than all aligned in one direction. Based on this, this embodiment analyzes the edge of each first region to be measured to obtain the corresponding filamentous edge uniformity and filamentous edge tightness, which reflect the uniformity and tightness of the distribution of the filamentous edge of each first region to be measured, specifically:
for any first region to be determined:
in this embodiment, corner detection is first performed on the edge of the first region to be measured to obtain each corner point corresponding to the first region to be measured, and the number of detected corner points is recorded as $m$; a corner point is an irregular filamentous part of the edge. Since corner points on the same filament of the edge are distributed in a concentrated manner, while there is a certain distance between corner points on different filaments, this embodiment uses the DBSCAN clustering algorithm (a density clustering algorithm) to cluster the corner points corresponding to the first region to be measured, obtaining $k$ clusters ($k$ is the number of clusters corresponding to the first region to be measured), each cluster corresponding to one filament on the edge. In this embodiment, the neighborhood radius of the DBSCAN clustering algorithm is 10 and the minimum number of points is 4, which can be adjusted according to actual needs; the DBSCAN clustering algorithm is prior art and is not described here. Next, this embodiment analyzes the distribution of the filaments on the edge of the first region to be measured based on the corner points in each cluster, specifically:
for any cluster corresponding to the first region to be measured: and performing straight line fitting on each corner point in the cluster to obtain a straight line corresponding to the cluster and an inclined angle corresponding to the straight line, wherein the inclined angle is the direction of the edge of the filament corresponding to the cluster.
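One conventional way to realize the straight-line fit for a cluster of corner points is a principal-direction (total least squares) fit, whose first principal axis directly gives the inclination angle; the function name and the (x, y) point convention are assumptions.

```python
import numpy as np

def cluster_inclination(points):
    """points: iterable of (x, y) corner coordinates of one cluster.
    Fits a line through the cluster via its principal direction and
    returns the inclination angle in degrees, in [0, 180)."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)                 # center the cluster
    # first right singular vector = direction of greatest spread
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    dx, dy = vt[0]
    return float(np.degrees(np.arctan2(dy, dx)) % 180.0)
```

Unlike ordinary least squares on y(x), this fit also handles near-vertical filaments gracefully.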
According to the above method, the straight line corresponding to each cluster of the first region to be measured and the inclination angle of each straight line can be obtained. The straight-line fitting method used in this embodiment is conventional and is not described here. The range of the inclination angle is $[0^\circ, 180^\circ)$; in this embodiment, this range is equally divided into $a$ sub-ranges (in this embodiment $a$ = 6, which can be set according to actual needs). Based on the inclination angle of the straight line corresponding to each cluster corresponding to the first region to be measured, each straight line is assigned to the corresponding sub-range, and the number of straight lines contained in each sub-range is counted.
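The angle-binning step can be sketched as follows, assuming inclination angles in degrees over [0, 180); the helper name is illustrative.

```python
def bin_inclinations(angles_deg, a=6):
    """Count fitted-line inclination angles into `a` equal sub-ranges
    covering [0, 180). Returns the list of per-sub-range counts."""
    counts = [0] * a
    width = 180.0 / a
    for ang in angles_deg:
        # clamp guards against a floating-point angle of exactly 180.0
        counts[min(int(ang // width), a - 1)] += 1
    return counts
```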
Next, in this embodiment, the uniformity of the distribution of the filaments on the edge of the first region to be measured is analyzed according to the number of straight lines corresponding to the first region to be measured contained in each sub-range, so as to obtain the filamentous edge uniformity corresponding to the first region to be measured; the specific formula is:

$$U = \frac{\prod_{i=1}^{a} n_i}{\sum_{i=1}^{a} \left(n_i - \bar{n}\right)^2 + \varepsilon_2}$$

where $U$ is the filamentous edge uniformity corresponding to the first region to be measured; $n_i$ is the number of straight lines corresponding to the first region to be measured contained in the $i$-th sub-range; $\bar{n}$ is the average of the numbers of straight lines contained in all the sub-ranges; $a$ is the number of sub-ranges; $\prod_{i=1}^{a} n_i$ is the product of the numbers of straight lines contained in the sub-ranges; and $\varepsilon_2$ is a second adjustment parameter that prevents the denominator from being 0, so its value should be as small as possible but not 0, and can be set according to actual needs.
When the numbers of straight lines contained in the sub-ranges are closer to one another and their product is larger, the filaments on the edge of the first region to be measured are more uniformly distributed, the filamentous edge uniformity corresponding to the first region to be measured is larger, and the first region to be measured is more likely to be a complete alum floc.
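A sketch of the uniformity measure, under the assumption (reconstructed from the prose) that the numerator is the product of the per-sub-range counts and the denominator penalizes their spread around the mean:

```python
import math

def filament_edge_uniformity(counts, eps=1e-6):
    """counts: straight-line count per inclination sub-range.
    eps plays the role of the second adjustment parameter."""
    a = len(counts)
    mean = sum(counts) / a
    spread = sum((c - mean) ** 2 for c in counts)  # deviation from uniform
    return math.prod(counts) / (spread + eps)
```

Equal counts maximize both the product and the (zero) spread term, so a perfectly uniform angle distribution scores far above a one-sided one.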
Considering that the edge of an alum floc should be finely filamentous and therefore rough, with correspondingly more clusters and corner points, this embodiment obtains the sub-ranges whose number of straight lines is less than $\bar{n}$, marks each straight line in those sub-ranges as a target straight line, collects all corner points in the clusters corresponding to the target straight lines, and marks them as target corner points. All target corner points on the edge of the first region to be measured are sorted by position to obtain a target corner point sequence (target corner points adjacent in position are also adjacent in the sequence); the first target corner point in the sequence is recorded as the first target corner point, and the last as the second target corner point. The edge line between the first and second target corner points on the edge of the first region to be measured is marked as the target edge line; the target edge line contains all the target corner points. Circle fitting is then performed on all pixel points on the target edge line, and the corresponding goodness of fit is obtained and recorded as the goodness of fit $r$ corresponding to the first region to be measured.
Since sparse corner points correspond to a smooth part of the edge of the first region to be measured rather than to the irregular, rough edge of an alum floc, the straight lines in the sub-ranges whose count is less than $\bar{n}$ are selected and that part of the edge is analyzed separately; the smoother that part of the edge is and the closer it is to a circle, the less obvious its filamentous edge characteristics are. Then, in this embodiment, the filamentous edge tightness corresponding to the first region to be measured is obtained according to the number of clusters, the number of corner points, and the goodness of fit; the specific formula is:
$$C = m \cdot k \cdot (1 - r)$$

where $C$ is the filamentous edge tightness corresponding to the first region to be measured; $r$ is the goodness of fit corresponding to the first region to be measured; $m$ is the number of corner points corresponding to the first region to be measured; and $k$ is the number of clusters corresponding to the first region to be measured. When $r$ is smaller and $m$ and $k$ are larger, the edge of the first region to be measured is rougher, more corner points are detected and more clusters are obtained by clustering, so the filamentous edge tightness corresponding to the first region to be measured is larger, that is, the first region to be measured is more likely to be a complete alum floc region.
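The tightness computation is then a one-liner; the multiplicative combination of corner count, cluster count and (1 − goodness of fit) is an assumption reconstructed from the monotonic behavior described above, assuming the goodness of fit lies in [0, 1].

```python
def filament_edge_tightness(goodness_of_fit, n_corners, n_clusters):
    """Larger with more corners/clusters, smaller as the target edge
    line fits a circle better (smooth edge -> weak filament character)."""
    return n_corners * n_clusters * (1.0 - goodness_of_fit)
```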
Then, the embodiment combines the uniformity of the filiform edge corresponding to the first region to be measured with the tightness of the filiform edge to obtain the edge integrity corresponding to the first region to be measured, specifically: calculating the product of the uniformity of the filiform edge corresponding to the first region to be measured and the tightness of the filiform edge as the edge integrity corresponding to the first region to be measured; when the uniformity of the filiform edge is larger and the tightness of the filiform edge is larger, it is indicated that the edge of the first region to be measured is more complete and more in line with the edge characteristics of alum blossom, and the first region to be measured is more likely to be a complete alum blossom, i.e. the corresponding edge integrity is larger.
To this end, the embodiment can obtain the edge integrity corresponding to each first region to be measured according to the above process.
In this embodiment, the connection dispersion measures, from the viewpoint of transmittance, the similarity between the transmittance of the first region to be measured and that of its matching region, while the edge integrity measures the edge characteristics of the first region to be measured through the filamentous character of the alum floc edge and the tightness between filaments; both reflect the possibility that the first region to be measured is a complete alum floc. Therefore, for any first region to be measured: the ratio of the edge integrity to the connection dispersion corresponding to the first region to be measured is calculated and taken as the merging degree corresponding to the first region to be measured. When the connection dispersion is smaller and the edge integrity is larger, the first region to be measured and its matching region are more likely to belong to the same alum floc, and the merging degree of the first region to be measured is larger.
Thus, the merging degree corresponding to each first region to be measured can be obtained according to the above process in the embodiment.
S5, obtaining a target alum blossom image according to the merging degree and the matching area corresponding to each first area to be measured; and obtaining the flocculant requirement state at the corresponding position according to the alum blossom image to be detected, the target alum blossom image and the trained target network.
In order to determine which first regions to be measured belong to the same alum floc, a judgment threshold is first set in this embodiment, and it is then judged, based on the judgment threshold and the merging degree, which first regions to be measured can be combined with their corresponding matching regions; specifically:
in this embodiment, a number of alum floc images are first acquired under the above conditions (100 in this embodiment, which can be set according to actual needs) and recorded as sample alum floc images; the sample alum floc images cover three situations: insufficient flocculant, moderate flocculant, and excessive flocculant. Each first region to be measured in each sample alum floc image is obtained according to the above process, and its merging degree is calculated; the first regions to be measured belonging to the same alum floc in each sample image are marked manually. For any first region to be measured in any sample alum floc image: if the first region to be measured and its matching region belong to the same alum floc, the merging degree of the first region to be measured is recorded and the two regions are fused into one; each region to be measured in each sample image is judged in the same way, and the merging degrees of all first regions to be measured that can be fused with their matching regions form a merging degree set. In this embodiment, the minimum value in the merging degree set is taken as the judgment threshold $t$, which is used to judge whether a region to be measured can be fused with its matching region. When the merging degree of a first region to be measured is greater than or equal to $t$, the first region to be measured and its matching region belong to the same alum floc, that is, the two regions are fused; otherwise, the first region to be measured and its matching region are not the same alum floc, and the two regions cannot be fused.
Traversing each first region to be detected in the alumen ustum image to be detected, fusing the first region to be detected with the merging degree larger than or equal to the judgment threshold in the alumen ustum image to be detected with the corresponding matching region, and obtaining each second region after traversing is completed, wherein the second region may be formed by one first region to be detected or formed by fusing two or more first regions to be detected; similarly, a second adjacent region set corresponding to each second region is obtained, wherein the second adjacent region set comprises each second region adjacent to the corresponding second region; recording a second area with the second adjacent area set as an empty second area as a target alumen ustum area, and recording a second area with the second adjacent area set not as an empty second area to be measured; obtaining matching areas corresponding to the second areas to be measured and corresponding merging degrees according to the process; if the merging degree corresponding to each second region to be measured is smaller than the judgment threshold, recording each second region to be measured as a target alumen ustum region, and if the second region to be measured with the merging degree larger than or equal to the judgment threshold exists, fusing the second region to be measured with the merging degree larger than or equal to the judgment threshold with the corresponding matching region to obtain each third region, wherein the third region may be formed by one second region to be measured or formed by fusing two or more second regions to be measured; and repeating the steps until other areas to be determined except the target alum blossom area do not exist in the alum blossom image to be detected, so as to obtain each target alum blossom area in the alum blossom image to be detected. 
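The repeated fusion of regions whose merging degree passes the threshold is naturally expressed with a union-find (disjoint-set) structure, shown here as a simplified sketch; the integer region ids and the list of passing pairs are hypothetical inputs standing in for the per-iteration matching decisions described above.

```python
def fuse_regions(passing_pairs, n_regions):
    """passing_pairs: (region_a, region_b) pairs whose merging degree
    reached the judgment threshold. Returns a root id per region;
    regions sharing a root form one fused (target alum floc) region."""
    parent = list(range(n_regions))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in passing_pairs:
        parent[find(a)] = find(b)          # fuse the two components
    return [find(i) for i in range(n_regions)]
```

In the full procedure this would be rerun per iteration, recomputing merging degrees on the fused (second, third, ...) regions until no region outside the target alum floc regions remains.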
In this embodiment, each target alum floc region in the alum floc image to be detected is divided off in this way, and the divided image is recorded as the target alum floc image.
In order to determine the flocculant addition condition at the position, a target network is constructed in this embodiment. The input of the target network is the alum floc image to be detected and the target alum floc image, and the output is the corresponding flocculant requirement state, which includes: flocculant needs to be added additionally, a small amount of flocculant needs to be added, the flocculant amount does not need to be changed, and flocculant has been added excessively. The target network is a convolutional neural network, such as SENet; the loss function for training the target network is the cross-entropy loss function, and the optimization algorithm is the Adam algorithm. The label data of the network are marked manually with the four flocculant requirement states above, and can be adjusted according to the actual situation. The training method of the target network is existing technology and is not described here again.
Next, in this embodiment, the to-be-detected alum blossom image and the target alum blossom image are input into the trained target network, and the flocculant requirement state at the position is obtained through output. And finally, carrying out corresponding treatment at the corresponding position of the flocculation tank according to the requirement state of the flocculating agent.
In this embodiment, it is considered that alum flocs at different depths may overlap in the image, so a recognized alum floc may appear larger than it is. In order to divide the different alum flocs in the alum floc image to be detected, each first region in the image is divided according to the corresponding edge image, and each first region whose adjacent region set is not empty is recorded as a first region to be measured. Which first regions to be measured belong to the same alum floc is determined based on the transmittance of the alum flocs and the characteristics of their filamentous edges: the transmittance, connection dispersion and matching region corresponding to each first region to be measured are obtained from the values of the R, G and B channels of each of its pixel points; corner detection is then performed on the edge of each first region to be measured, and the filamentous edge uniformity and filamentous edge tightness are obtained from the corresponding corner points; the merging degree of each first region to be measured is then obtained from its filamentous edge uniformity, filamentous edge tightness and connection dispersion. The merging degree is used to judge whether a first region to be measured and its matching region are the same alum floc, so each target alum floc region in the image is divided according to the merging degree and matching region of each first region to be measured to obtain the target alum floc image, where a target alum floc region is the region corresponding to one complete alum floc. Dividing the target alum floc regions in the alum floc image to be detected improves the accuracy of the subsequent determination of the flocculant addition condition. Finally, the flocculant requirement state at the corresponding position is obtained according to the alum floc image to be detected, the target alum floc image and the trained target network. This embodiment divides the different alum flocs in the image according to their transmittance and filamentous edge characteristics, and then judges the flocculant addition condition from the divided image, avoiding the influence of factors such as flow velocity, temperature and composition in the flocculation tank and detecting the flocculant addition condition more quickly, more accurately and at lower cost.
It should be noted that: the above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. A sewage treatment intelligent monitoring method based on computer vision is characterized by comprising the following steps:
acquiring an alum blossom image to be detected and an alum blossom gray image, wherein the alum blossom image to be detected is an RGB image, and the alum blossom gray image is a gray image obtained by preprocessing a gray image corresponding to the alum blossom image to be detected;
extracting edges in the alum blossom gray images to obtain corresponding edge images; carrying out region division on the alum blossom image to be detected according to the edge image to obtain each first region in the alum blossom image to be detected; acquiring an adjacent region set corresponding to each first region, wherein the adjacent region set comprises first regions adjacent to the corresponding first regions; recording a first area of which the adjacent area set is not empty as a first area to be measured;
obtaining the transmittance corresponding to each first region to be measured according to the values of the R, G and B channels corresponding to each pixel point in each first region to be measured; obtaining a corresponding connection dispersion and a corresponding matching region of each first region to be measured according to the transmittance of each first region to be measured;
carrying out corner point detection on the edge of each first area to be measured to obtain each corner point corresponding to each first area to be measured; obtaining the filiform edge uniformity and the filiform edge tightness corresponding to each first region to be measured according to each corner point corresponding to each first region to be measured; obtaining the merging degree corresponding to each first region to be measured according to the filiform edge uniformity, the filiform edge tightness and the connection dispersion corresponding to each first region to be measured;
obtaining a target alum blossom image according to the merging degree and the matching area corresponding to each first area to be measured; and obtaining the flocculant requirement state at the corresponding position according to the alum blossom image to be detected, the target alum blossom image and the trained target network.
2. The intelligent sewage treatment monitoring method based on computer vision as claimed in claim 1, wherein the step of performing region division on the alum blossom image to be detected according to the edge image to obtain each first region in the alum blossom image to be detected comprises:
acquiring each closed edge in the edge image, and recording an area in each closed edge as a first area;
and mapping each first region in the edge image to the alum blossom image to be detected to obtain each first region in the alum blossom image to be detected.
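Claim 2's region division can be sketched as follows. The patent does not prescribe an algorithm, so the flood-fill labelling below is illustrative only: it assumes a binary edge map, and takes every connected non-edge component that does not touch the image border as the area inside a closed edge (a first region). The function name `first_regions` is hypothetical.

```python
import numpy as np
from collections import deque

def first_regions(edge_img):
    """Label the areas enclosed by closed edges in a binary edge image.

    edge_img: 2-D array, 1 = edge pixel, 0 = background.
    Returns a label image where each enclosed (first) region gets a
    positive id; edge pixels and background reaching the border stay 0.
    """
    h, w = edge_img.shape
    labels = np.zeros((h, w), dtype=int)
    visited = np.zeros((h, w), dtype=bool)
    next_id = 0
    for sy in range(h):
        for sx in range(w):
            if edge_img[sy, sx] or visited[sy, sx]:
                continue
            # flood-fill one 4-connected non-edge component
            comp, touches_border = [], False
            q = deque([(sy, sx)])
            visited[sy, sx] = True
            while q:
                y, x = q.popleft()
                comp.append((y, x))
                if y in (0, h - 1) or x in (0, w - 1):
                    touches_border = True
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w \
                            and not edge_img[ny, nx] and not visited[ny, nx]:
                        visited[ny, nx] = True
                        q.append((ny, nx))
            if not touches_border:          # fully enclosed by a closed edge
                next_id += 1
                for y, x in comp:
                    labels[y, x] = next_id
    return labels
```

The resulting label image plays the role of the first regions mapped back onto the alum blossom image to be detected.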
3. The intelligent sewage treatment monitoring method based on computer vision as claimed in claim 1, wherein acquiring the adjacent region set corresponding to each first region, the adjacent region set comprising the first regions adjacent to the corresponding first region, and recording a first region whose adjacent region set is not empty as a first region to be measured, comprises the following steps:
for any first region: recording other first areas where pixel points in eight adjacent areas of the pixel points included in the first area are located as first adjacent areas corresponding to the first area, and recording a set formed by all the first adjacent areas corresponding to the first area as an adjacent area set corresponding to the first area;
recording a first area with empty adjacent area sets as a target alum blossom area, wherein the target alum blossom area is an area corresponding to complete alum blossom; and recording a first area of which the adjacent area set is not empty as a first area to be measured.
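Claim 3's eight-neighbourhood adjacency sets can be sketched over a label image such as the one above; regions whose set comes back empty would be the target alum blossom areas, and the rest the first regions to be measured. `adjacent_region_sets` is a hypothetical helper name:

```python
import numpy as np

def adjacent_region_sets(labels):
    """For each positive region id in a label image, collect the other
    region ids occurring in the 8-neighbourhood of its pixels
    (claim 3's adjacent region set)."""
    h, w = labels.shape
    adj = {rid: set() for rid in np.unique(labels) if rid > 0}
    offs = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)]
    for y in range(h):
        for x in range(w):
            rid = labels[y, x]
            if rid <= 0:
                continue
            for dy, dx in offs:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    other = labels[ny, nx]
                    if other > 0 and other != rid:
                        adj[rid].add(other)
    return adj
```

A region id mapped to an empty set corresponds to a complete alum blossom; the others become the first regions to be measured.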
4. The intelligent sewage treatment monitoring method based on computer vision of claim 1, wherein obtaining the transmittance corresponding to each first region to be measured according to the values of the three channels R, G and B corresponding to each pixel point in each first region to be measured comprises:
for any first region to be determined:
obtaining a first-order color moment of the R channel corresponding to the first region to be measured according to the value of the R channel corresponding to each pixel point in the first region to be measured; obtaining a first-order color moment of the G channel corresponding to the first area to be measured according to the value of the G channel corresponding to each pixel point in the first area to be measured; obtaining a first-order color moment of a B channel corresponding to the first area to be measured according to the value of the B channel corresponding to each pixel point in the first area to be measured;
obtaining the transmittance corresponding to the first region to be measured according to the first-order color moment of the R channel, the first-order color moment of the G channel, the first-order color moment of the B channel and the gray value of the corresponding pixel point corresponding to the first region to be measured;
the transmittance corresponding to the first region to be measured is obtained by a formula (published as an image in the original document and not reproduced here) defined over the following quantities: the transmittance of the first region to be measured; the first-order color moments of the R, G and B channels corresponding to the first region to be measured; the average, maximum and minimum gray values of the pixel points in the first region to be measured; and a first adjustment parameter.
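Since claim 4's transmittance formula is published only as an equation image, the sketch below computes the quantities it is defined over rather than the formula itself: the first-order colour moment of a channel is simply its mean over the region, and the grey statistics are the region's mean, maximum and minimum grey values. `transmittance_inputs` is a hypothetical helper name:

```python
import numpy as np

def transmittance_inputs(rgb_region, gray_region):
    """Compute the inputs of claim 4's transmittance formula.

    rgb_region:  (n, 3) array of the region's R, G, B pixel values.
    gray_region: (n,)  array of the region's grey values.
    """
    # first-order colour moments = per-channel means over the region
    mu_r, mu_g, mu_b = rgb_region.mean(axis=0)
    return {
        "mu_r": float(mu_r), "mu_g": float(mu_g), "mu_b": float(mu_b),
        "gray_mean": float(gray_region.mean()),
        "gray_max": float(gray_region.max()),
        "gray_min": float(gray_region.min()),
    }
```

The patented formula combines these six quantities and the first adjustment parameter into a single transmittance value per region.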
5. The intelligent sewage treatment monitoring method based on computer vision of claim 1, wherein obtaining the corresponding connection dispersion and the corresponding matching area of each first area to be measured according to the corresponding transmittance of each first area to be measured comprises:
for any first region to be determined:
respectively calculating the absolute value of the difference value of the transmittance of the first region to be measured and the transmittance of each first adjacent region in the corresponding adjacent region set;
taking the minimum value in the absolute value of the difference value between the transmittance of the first to-be-measured area and the transmittance of each first adjacent area in the corresponding adjacent area set as the connection dispersion corresponding to the first to-be-measured area; and taking the first adjacent area with the minimum absolute value of the difference value of the transmittance of the first area to be measured as the matching area corresponding to the first area to be measured.
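Claim 5 is a straightforward minimisation over the adjacent regions; a minimal sketch (the function name is hypothetical):

```python
def connection_dispersion(t_region, neighbour_ts):
    """Claim 5: the connection dispersion is the minimum absolute
    transmittance difference between a region and its adjacent regions;
    the matching region is the neighbour achieving that minimum.

    t_region:     transmittance of the first region to be measured.
    neighbour_ts: dict mapping neighbour region id -> transmittance.
    Returns (dispersion, matching_region_id).
    """
    match = min(neighbour_ts,
                key=lambda rid: abs(t_region - neighbour_ts[rid]))
    return abs(t_region - neighbour_ts[match]), match
```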
6. The intelligent monitoring method for sewage treatment based on computer vision of claim 1, wherein the method for obtaining the uniformity of the filamentous edge corresponding to the first region to be measured comprises:
for any first region to be determined:
clustering each corner corresponding to the first area to be measured by using a DBSCAN clustering algorithm to obtain a plurality of clusters corresponding to the first area to be measured;
for any cluster corresponding to the first region to be measured: performing straight-line fitting on the corner points in the cluster to obtain a straight line corresponding to the cluster and an inclination angle corresponding to the straight line; equally dividing the range of the inclination angle into a preset number a of sub-ranges; assigning the straight line corresponding to each cluster to its sub-range, and counting the number of straight lines corresponding to the first region to be measured contained in each sub-range;
obtaining the uniformity of the filiform edge corresponding to the first region to be measured according to the number of straight lines corresponding to the first region to be measured contained in each sub-range;
the filamentous edge uniformity corresponding to the first region to be measured is obtained by a formula (published as an image in the original document and not reproduced here) defined over the following quantities: the filamentous edge uniformity of the first region to be measured; the number of straight lines corresponding to the first region to be measured contained in the i-th sub-range; the average of the numbers of straight lines contained in all sub-ranges; the number a of sub-ranges; a second adjustment parameter; and the product of the numbers of straight lines contained in each sub-range.
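The line-fitting and angle-binning step of claim 6 can be sketched as follows (the uniformity formula itself is published only as an image and is evaluated over these per-sub-range counts). Corner clusters are assumed to be given, e.g. by DBSCAN; `angle_bin_counts` is a hypothetical helper name:

```python
import numpy as np

def angle_bin_counts(corner_clusters, a=6):
    """Fit a straight line to each corner cluster, take its inclination
    angle, split the angle range [0, 180) degrees into `a` equal
    sub-ranges and count the fitted lines falling in each sub-range."""
    counts = np.zeros(a, dtype=int)
    for pts in corner_clusters:
        xs, ys = np.array(pts, dtype=float).T
        if np.ptp(xs) < 1e-9:                 # degenerate: vertical line
            angle = 90.0
        else:
            slope, _ = np.polyfit(xs, ys, 1)  # least-squares line fit
            angle = np.degrees(np.arctan(slope)) % 180.0
        counts[min(int(angle / (180.0 / a)), a - 1)] += 1
    return counts
```

A uniform spread of fitted-line angles across the sub-ranges indicates the evenly distributed filamentous edges the claim is testing for.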
7. The intelligent sewage treatment monitoring method based on computer vision as claimed in claim 6, wherein the method for obtaining the tightness of the filamentous edge corresponding to the first area to be measured comprises:
for any first region to be determined:
recording each straight line contained in the sub-ranges whose straight-line count is less than the average number of straight lines over all sub-ranges as a target straight line; obtaining all the corner points in the clusters corresponding to all the target straight lines, and recording them as target corner points;
sequencing all target corner points on the edge of the first area to be measured according to positions to obtain a target corner point sequence; recording a first target corner in the target corner sequence as a first target corner, and recording a last target corner in the target corner sequence as a second target corner;
marking an edge line between a first target corner point and a second target corner point in the edge of the first region to be measured as a target edge line, wherein the target edge line comprises all target corner points;
performing circle fitting on all pixel points on the target edge line to obtain corresponding goodness of fit, and recording as the goodness of fit corresponding to the first region to be measured;
and calculating the product of the number of the clusters corresponding to the first region to be measured and the number of the angular points, and calculating the ratio of the product to the goodness-of-fit as the filamentous edge tightness corresponding to the first region to be measured.
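Claim 7's circle fitting and tightness ratio can be sketched as follows. The patent does not specify the fitting method or how the goodness of fit is defined, so the algebraic (Kasa) least-squares fit and the residual-based goodness below are assumptions for illustration:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit. Returns (cx, cy, r)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # (x-cx)^2 + (y-cy)^2 = r^2  rewritten as a linear system in (cx, cy, c)
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x * x + y * y
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    r = np.sqrt(c + cx * cx + cy * cy)
    return cx, cy, r

def tightness(n_clusters, n_corners, edge_points):
    """Claim 7: (number of clusters x number of corner points) divided by
    the goodness of the circle fit on the target edge line. The goodness
    here, 1 / (1 + mean radial residual), is an assumed definition."""
    pts = np.asarray(edge_points, dtype=float)
    cx, cy, r = fit_circle(pts)
    resid = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r).mean()
    goodness = 1.0 / (1.0 + resid)
    return n_clusters * n_corners / goodness
```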
8. The intelligent sewage treatment monitoring method based on computer vision of claim 1, wherein obtaining the merging degree corresponding to each first region to be measured according to the filamentous edge uniformity, the filamentous edge tightness and the connection dispersion corresponding to each first region to be measured comprises:
for any first region to be determined:
calculating the product of the uniformity of the filamentous edges corresponding to the first region to be measured and the compactness of the filamentous edges as the edge integrity corresponding to the first region to be measured; and calculating the ratio of the edge integrity and the connection dispersion corresponding to the first region to be measured, and taking the ratio as the merging degree corresponding to the first region to be measured.
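Claim 8 reduces to simple arithmetic; a one-line sketch (the function name is hypothetical):

```python
def merging_degree(uniformity, tightness, dispersion):
    """Claim 8: edge integrity = filamentous edge uniformity x filamentous
    edge tightness; merging degree = edge integrity / connection
    dispersion. A large value marks a fragment to fuse with its match."""
    return (uniformity * tightness) / dispersion
```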
9. The intelligent sewage treatment monitoring method based on computer vision of claim 1, wherein the step of obtaining the target alum blossom image according to the merging degree and the matching area corresponding to each first area to be measured comprises the following steps:
traversing each first region to be measured in the alum blossom image to be detected, and fusing each first region to be measured whose merging degree is greater than or equal to a judgment threshold with its corresponding matching region; after the traversal is completed, obtaining second regions, each second region being formed by one or more first regions to be measured; acquiring a second adjacent region set corresponding to each second region, the second adjacent region set comprising the second regions adjacent to the corresponding second region; recording a second region whose second adjacent region set is empty as a target alum blossom region, and recording a second region whose second adjacent region set is not empty as a second region to be measured; obtaining the matching region and the merging degree corresponding to each second region to be measured; if the merging degrees corresponding to all the second regions to be measured are smaller than the judgment threshold, recording the second regions to be measured as target alum blossom regions; if there are second regions to be measured whose merging degree is greater than or equal to the judgment threshold, fusing those second regions to be measured with their corresponding matching regions to obtain third regions, each third region being formed by one or more second regions to be measured; and so on, until no regions to be measured other than the target alum blossom regions remain in the alum blossom image to be detected, thereby obtaining each target alum blossom region in the alum blossom image to be detected; and segmenting each target alum blossom region in the alum blossom image to be detected, and recording the segmented image as the target alum blossom image.
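One round of claim 9's fusion can be sketched with a union-find pass; a full pipeline would recompute transmittance, merging degrees and matches on the fused regions and repeat until no region reaches the threshold. The function name and dict-based inputs are illustrative assumptions:

```python
def fuse_regions(degrees, matches, threshold):
    """One fusion round of claim 9: every region whose merging degree is
    >= threshold is fused with its matching region; the resulting groups
    of original region ids are the second regions.

    degrees: dict region id -> merging degree.
    matches: dict region id -> matching region id.
    """
    parent = {rid: rid for rid in degrees}

    def find(r):                       # union-find with path halving
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    for rid, deg in degrees.items():
        if deg >= threshold:
            parent[find(rid)] = find(matches[rid])

    groups = {}
    for rid in degrees:
        groups.setdefault(find(rid), set()).add(rid)
    return sorted(map(sorted, groups.values()))
```

Groups that stop changing between rounds correspond to the target alum blossom regions segmented into the target alum blossom image.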
10. The intelligent monitoring method for sewage treatment based on computer vision of claim 1, wherein the flocculant demand state comprises: the flocculating agent needs to be additionally added, a small amount of flocculating agent needs to be added, the flocculating agent dose does not need to be changed, and the flocculating agent is excessively added.
CN202211081028.3A 2022-09-06 2022-09-06 Intelligent monitoring method for sewage treatment based on computer vision Active CN115147617B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211081028.3A CN115147617B (en) 2022-09-06 2022-09-06 Intelligent monitoring method for sewage treatment based on computer vision


Publications (2)

Publication Number Publication Date
CN115147617A true CN115147617A (en) 2022-10-04
CN115147617B CN115147617B (en) 2022-11-22

Family

ID=83415838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211081028.3A Active CN115147617B (en) 2022-09-06 2022-09-06 Intelligent monitoring method for sewage treatment based on computer vision

Country Status (1)

Country Link
CN (1) CN115147617B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115353181A (en) * 2022-10-17 2022-11-18 南通海阳节能环保科技有限公司 Intelligent flocculant dosage feeding method for papermaking wastewater
CN117011386A (en) * 2023-09-27 2023-11-07 天津水科机电有限公司 Pollution discharge effect evaluation method based on backwashing water filter
CN117315454A (en) * 2023-11-29 2023-12-29 河北中瀚水务有限公司 Evaluation method, device and system for flocculation reaction process
CN117929375A (en) * 2024-03-21 2024-04-26 武汉奥恒胜科技有限公司 Water quality detection method and water quality detector based on image processing
CN117929375B (en) * 2024-03-21 2024-06-04 武汉奥恒胜科技有限公司 Water quality detection method and water quality detector based on image processing

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR1353079A (en) * 1963-01-25 1964-02-21 Allied Chem Method and apparatus for the manufacture of plates or tiles coated with a sheet of thermoplastic material on one side
US4654139A (en) * 1984-06-08 1987-03-31 Hitachi, Ltd. Flocculation basin in water treatment process
DE69907817D1 (en) * 1998-07-07 2003-06-18 Mitsubishi Chem Corp Process for producing a composite film containing aluminum oxide fiber precursors
CN1438605A (en) * 2003-03-14 2003-08-27 西安交通大学 Beer-bottle raised character fetching-identifying hardware system and processing method
EP1817455A1 (en) * 2004-11-03 2007-08-15 J. Rettenmaier & Söhne GmbH + Co. KG Cellulose-containing filling material for paper, tissue, or cardboard products, method for the production thereof, paper, tissue, or cardboard product containing such a filling material, or dry mixture used therefor
CN101208599A (en) * 2005-04-26 2008-06-25 拜尔技术股份有限责任公司 Novel equipment and method for coating substrates for analyte detection by way of an affinity assay method
JP2014054603A (en) * 2012-09-13 2014-03-27 Toshiba Corp Flocculant injection control method and flocculant injection control system
CN104077784A (en) * 2013-03-29 2014-10-01 联想(北京)有限公司 Method for extracting target object and electronic device
CN105540800A (en) * 2016-01-27 2016-05-04 广州市自来水公司 Device and method for automatically monitoring state of alumen ustum in open water treatment pool in water plant
CN110913102A (en) * 2019-11-21 2020-03-24 中冶赛迪工程技术股份有限公司 Image processing device for alum blossom acquisition and recognition
CN111294516A (en) * 2020-02-27 2020-06-16 中冶赛迪重庆信息技术有限公司 Alum image processing method and system, electronic device and medium
CN111833369A (en) * 2020-07-21 2020-10-27 中冶赛迪重庆信息技术有限公司 Alum image processing method, system, medium and electronic device
CN111943332A (en) * 2020-09-14 2020-11-17 马鞍山市天工科技股份有限公司 Super-magnetic separation sewage purification device
CN111966053A (en) * 2020-07-09 2020-11-20 上海威派格智慧水务股份有限公司 Intelligent flocculant decision making system
CN112101352A (en) * 2020-09-10 2020-12-18 广州深视未来智能科技有限责任公司 Underwater alumen ustum state identification method and monitoring device, computer equipment and storage medium
CN112875827A (en) * 2021-01-28 2021-06-01 中冶赛迪重庆信息技术有限公司 Intelligent dosing system and water treatment system based on image recognition and data mining
CN113884299A (en) * 2021-12-02 2022-01-04 武汉市书豪塑胶有限公司 Rotational molding machine fault detection method based on artificial intelligence
CN114255413A (en) * 2021-11-22 2022-03-29 上海赫煊自动化系统工程有限公司 Intelligent monitoring method and system for jarosite


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
LIU Yan et al.: "A Fractal Geometrical Model of Alum Ustum Formation in Water Coagulation Process", Environmental Engineering *
WANG Xinzeng et al.: "Research on the Processing and Application of Alum Floc Images", Journal of Chengdu University of Information Technology *
GUO Jianjia: "An Automatic Alum Dosing System for Water Plants Based on Digital Image Processing", Computer Software and Computer Applications *
HAN Dong et al.: "Segmentation of Alum Floc Images Using a Consistency Measure", Modern Electronics Technique *
HUANG Nianyu et al.: "Applied Research on Automatic Monitoring of Alum Floc State in Waterworks", Water & Wastewater Engineering *


Also Published As

Publication number Publication date
CN115147617B (en) 2022-11-22

Similar Documents

Publication Publication Date Title
CN115147617B (en) Intelligent monitoring method for sewage treatment based on computer vision
CN109074033B (en) Method and system for optimizing coagulation and/or flocculation in water treatment processes
CN115353181B (en) Intelligent flocculant dosage feeding method for papermaking wastewater
CN115760852B (en) Marine sewage discharge treatment method
US4783269A (en) Injection control system of flocculating agent
CN114663684A (en) Method, system and operation equipment for real-time intelligent analysis of flocculation reaction
CN116310845B (en) Intelligent monitoring system for sewage treatment
CN115546720A (en) Image type analytic regulation and control method and device for flocculation working condition
JP2020025943A (en) Water treatment method and water treatment system
CN114169403A (en) Coagulating picture shooting device, shooting method and flocculation control dosing method
CN112101352A (en) Underwater alumen ustum state identification method and monitoring device, computer equipment and storage medium
CN112919605A (en) Sewage treatment system and method based on image acquisition
KR20190063188A (en) Method for controlling water purification using real-time image analysis
Benens et al. Evaluation of different shape parameters to distinguish between flocs and filaments in activated sludge images
Yu et al. Applying Online Image Analysis to Simultaneously Evaluate the Removals of Suspended Solids and Color from Textile Wastewater in Chemical Flocculated Sedimentation.
CN117011243A (en) Angelica keiskei image contrast analysis method
CN115684156A (en) Coagulation unit water production state early warning method based on floc identification
CN115410016B (en) Efficient treatment method for sewage in microbial sewage pool based on image frequency domain analysis
CN114269689A (en) Water treatment system, control device, water treatment method, and program
Sivchenko et al. Evaluation of image texture recognition techniques in application to wastewater coagulation
JP6797718B2 (en) Aggregation control device, aggregation control method and aggregation control system
JP3136554B2 (en) Sludge coagulation equipment
JPS63269043A (en) Apparatus for confirming image of flocculated substance
CN114863313A (en) Water treatment process monitoring method based on image recognition
CN114956287B (en) Sewage dephosphorization method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant