CN116665137B - Livestock breeding wastewater treatment method based on machine vision - Google Patents

Livestock breeding wastewater treatment method based on machine vision

Info

Publication number
CN116665137B
CN116665137B (application CN202310952482.XA)
Authority
CN
China
Prior art keywords
pixel
super
gray
pixel region
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310952482.XA
Other languages
Chinese (zh)
Other versions
CN116665137A (en)
Inventor
吴文虎
刘诗意
王梦娇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liaocheng Caishuo Agricultural Technology Co ltd
Original Assignee
Liaocheng Caishuo Agricultural Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liaocheng Caishuo Agricultural Technology Co ltd filed Critical Liaocheng Caishuo Agricultural Technology Co ltd
Priority to CN202310952482.XA priority Critical patent/CN116665137B/en
Publication of CN116665137A publication Critical patent/CN116665137A/en
Application granted granted Critical
Publication of CN116665137B publication Critical patent/CN116665137B/en


Classifications

    • F MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F16 ENGINEERING ELEMENTS AND UNITS; GENERAL MEASURES FOR PRODUCING AND MAINTAINING EFFECTIVE FUNCTIONING OF MACHINES OR INSTALLATIONS; THERMAL INSULATION IN GENERAL
    • F16L PIPES; JOINTS OR FITTINGS FOR PIPES; SUPPORTS FOR PIPES, CABLES OR PROTECTIVE TUBING; MEANS FOR THERMAL INSULATION IN GENERAL
    • F16L55/00 Devices or appurtenances for use in, or in connection with, pipes or pipe systems
    • F16L55/24 Preventing accumulation of dirt or other matter in the pipes, e.g. by traps, by strainers
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B08 CLEANING
    • B08B CLEANING IN GENERAL; PREVENTION OF FOULING IN GENERAL
    • B08B9/00 Cleaning hollow articles by methods or apparatus specially adapted thereto
    • B08B9/02 Cleaning pipes or tubes or systems of pipes or tubes
    • B08B9/027 Cleaning the internal surfaces; Removal of blockages
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/54 Extraction of image or video features relating to texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20004 Adaptive image processing
    • G06T2207/20008 Globally adaptive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30232 Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)
  • Treatment Of Biological Wastes In General (AREA)

Abstract

The invention relates to the field of image data processing, and in particular to a machine-vision-based livestock breeding wastewater treatment method, which comprises the following steps: acquiring the super-pixel regions of the gray level map of the wastewater discharge outlet at the current moment; calculating the pixel structure chaotic coefficient and the texture richness of each super-pixel region; obtaining the enhancement weight of each super-pixel region from its pixel structure chaotic coefficient and texture richness; calculating an enhancement requirement index for each pixel point in the super-pixel region; obtaining the gray-stretching coefficient of each pixel point from the enhancement weight of its super-pixel region and its enhancement requirement index; obtaining the enhanced gray level map at the current moment according to the gray-stretching coefficient of each pixel point; judging whether the wastewater pipeline is blocked at the current moment using the enhanced gray level maps of the current and previous moments, and cleaning the wastewater pipeline when it is blocked. The method is used for livestock breeding wastewater treatment and improves the accuracy of assessing whether the wastewater pipeline needs to be cleaned.

Description

Livestock breeding wastewater treatment method based on machine vision
Technical Field
The invention relates to the field of image data processing, in particular to a livestock breeding wastewater treatment method based on machine vision.
Background
At present, large-scale farms in China discharge a large amount of livestock breeding wastewater every day. Because livestock breeding wastewater contains a large amount of pollutants, discharging it directly without treatment causes serious pollution, so the wastewater must undergo solid-liquid separation before discharge. However, the wastewater after solid-liquid separation still contains a large number of tiny solid attachments; these attach to the wastewater pipeline during discharge and, accumulating over time, block the pipeline. Large-scale farms therefore clean the wastewater pipeline regularly, but because the cleaning process is complex and the malodorous gas is extremely harmful to personnel, it has been proposed to identify the blockage condition of the wastewater pipeline in order to assess whether it actually needs cleaning.
Large-scale farms often identify whether the wastewater pipeline is blocked by installing a flow meter in it. Commonly used flow meters include electromagnetic flow meters, ultrasonic flow meters, plug-in turbine flow meters and rotameters. When an electromagnetic flow meter measures liquid with a small flow and low flow velocity, the induced potential is of the same order of magnitude as the interference signal, which makes it difficult to amplify and measure, and the meter is prone to zero drift; ultrasonic flow meters are only suitable for clear water and cannot be used on pipelines with very thick liners or scale, so neither of these two is suitable for monitoring the flow of heavily polluted aquaculture wastewater. Traditional turbine and rotor flow sensors require the liquid to push an impeller and then convert the rotation signal into a flow signal, and when placed in a pipeline discharging aquaculture wastewater the impeller is very likely to clog and fail. A video flow-measurement technique has therefore been proposed: wastewater flow data is obtained through video analysis based on the traditional buoy-method principle. Water-surface texture features, such as rigid floating objects, waves and bubbles, are extracted from the wastewater discharge outlet images, then tracked and matched; the physical distance travelled by the feature points and the time between frames give the real-time surface flow velocity, from which the cross-section flow can be calculated using the cross-section data. The blockage condition of the wastewater pipeline is identified from the cross-section flow at the wastewater discharge outlet, and whether the pipeline needs cleaning is assessed from the blockage condition.
However, when the video flow-measurement technique is used to identify blockage of the wastewater pipeline, the video frame images must first be denoised, and existing denoising methods mainly apply a filter, i.e. the same denoising treatment, to every pixel point of the image. Such denoising easily reduces the contrast between the background region and the wastewater region in a frame image, which makes the wastewater contour and therefore the wastewater flow data inaccurate, lowers the accuracy of identifying pipeline blockage, and in turn lowers the accuracy of assessing whether the wastewater pipeline needs to be cleaned.
Disclosure of Invention
The invention provides a machine-vision-based livestock breeding wastewater treatment method, which aims to solve the problem that existing livestock breeding wastewater treatment methods assess with low accuracy whether the wastewater pipeline needs to be cleaned.
In order to achieve the purpose, the invention adopts the following technical scheme that the livestock breeding wastewater treatment method based on machine vision comprises the following steps:
acquiring a gray level image of a wastewater discharge port at the current moment, and performing super-pixel segmentation on the gray level image to obtain all super-pixel areas;
calculating to obtain a pixel structure chaotic coefficient of each super pixel region by using the gray value of each pixel point in each super pixel region, the number of the pixel points and the frequency of each gray value;
calculating the average gray gradient value of each super pixel region by using the gray gradient values of each pixel point in each super pixel region in different neighborhood directions;
calculating to obtain the texture richness of each super pixel region by using the average gray gradient value of each super pixel region, the number of pixel points in the super pixel region and the gray gradient values of each pixel point in the super pixel region in different neighborhood directions;
calculating to obtain the enhancement weight of each super-pixel region by using the pixel structure chaotic coefficient and the texture richness of each super-pixel region;
taking the maximum gray gradient value of all pixel points in each super pixel area in different neighborhood directions as the maximum gray gradient value of the super pixel area;
calculating to obtain an enhancement requirement index of each pixel point in each super pixel region by using the gray gradient value of each pixel point in each super pixel region in different neighborhood directions, the maximum gray gradient value of the super pixel region and the number of the neighborhood directions of each pixel point in the super pixel region;
calculating to obtain the gray scale stretching coefficient of each pixel point in each super pixel region by using the enhancement weight of each super pixel region and the enhancement demand index of each pixel point in each super pixel region;
carrying out gray stretching enhancement on the gray level map of the waste water discharge port according to the gray level stretching coefficient of each pixel point in each super pixel region to obtain the gray level map of the waste water discharge port after enhancement at the current moment;
judging whether the waste water pipeline is blocked at the current moment according to the gray level diagram of the waste water discharge outlet after the current moment is enhanced, and cleaning the waste water pipeline when the waste water pipeline is blocked.
The method for treating livestock-raising wastewater based on machine vision calculates a pixel structure chaotic coefficient of each super-pixel region by using the gray value of each pixel point in each super-pixel region, the number of the pixel points and the frequency of each gray value, and specifically comprises the following steps:
the following is performed for each super pixel region:
counting the frequency of each gray value in the super pixel area;
and calculating to obtain the pixel structure chaotic coefficient of each super pixel region by using the gray value of each pixel point in each super pixel region, the number of the pixel points and the frequency of each gray value.
The method for treating livestock-raising wastewater based on machine vision calculates the texture richness of each super pixel region by using the average gray gradient value of each super pixel region, the number of pixel points in the super pixel region and the gray gradient values of each pixel point in different neighborhood directions in the super pixel region, and specifically comprises the following steps:
the following is performed for each super pixel region:
taking any pixel point in the super pixel area as a first pixel point, and acquiring a neighborhood pixel point of the first pixel point;
calculating to obtain gray gradient values of the first pixel points in different neighborhood directions by using gray values of the first pixel points and the neighborhood pixel points thereof;
obtaining gray gradient values of each pixel point in the super-pixel region in different neighborhood directions according to the method for obtaining the gray gradient values of the first pixel point in different neighborhood directions;
the gray gradient values of each pixel point in the super pixel area in different neighborhood directions are taken as horizontal axes, the number of each gray gradient value is taken as vertical axis, and a gray gradient statistical histogram of the super pixel area is constructed;
calculating the average gray gradient value of the super pixel region according to the gray gradient value in the gray gradient statistical histogram of the super pixel region;
and calculating to obtain the texture richness of each super pixel region by using the average gray gradient value of each super pixel region, the number of pixel points in the super pixel region and the gray gradient values of each pixel point in the super pixel region in different neighborhood directions.
According to the livestock-raising wastewater treatment method based on machine vision, the number of the gray gradient values of each pixel point in each super pixel area in different neighborhood directions, the maximum gray gradient value of the super pixel area and the neighborhood directions of each pixel point in the super pixel area are utilized to calculate and obtain the enhancement requirement index of each pixel point in each super pixel area, and the method specifically comprises the following steps:
the following is performed for each super pixel region:
selecting any pixel point in the super pixel area as a first pixel point, and selecting a gray gradient value in any neighborhood direction of the first pixel point as a first gray gradient value;
taking the maximum gray gradient values of all pixel points in the super pixel area in different neighborhood directions as the maximum gray gradient values of the super pixel area;
calculating to obtain an enhancement requirement index of the first gray gradient value by using the maximum gray gradient value and the first gray gradient value of the super pixel region;
obtaining the enhancement requirement indexes of the gray gradient values of the first pixel point in other neighborhood directions according to the method for obtaining the enhancement requirement indexes of the first gray gradient values, and obtaining the enhancement requirement indexes of the gray gradient values of the first pixel point in different neighborhood directions;
obtaining the enhancement requirement indexes of the gray gradient values of each pixel point in the super-pixel region in different neighborhood directions according to the method for obtaining the enhancement requirement indexes of the gray gradient values of the first pixel point in different neighborhood directions;
and calculating to obtain the enhancement requirement index of each pixel point in the super pixel region by using the enhancement requirement indexes of the gray gradient values of each pixel point in the super pixel region in different neighborhood directions and the number of the neighborhood directions of each pixel point in the super pixel region.
The expression of the gray scale stretching coefficient of each pixel point in each super pixel area is as follows:
in the method, in the process of the invention,representing the gray scale stretch factor of the ith pixel point in the g-th super pixel area,/and->Enhancement weight representing the g-th super-pixel region,/->Representing the enhancement requirement index of the ith pixel point in the g-th super pixel area.
The method for treating livestock-raising wastewater based on machine vision comprises the steps of obtaining a gray level diagram of a wastewater discharge outlet at the current moment, and performing super-pixel segmentation on the gray level diagram to obtain all super-pixel areas, wherein the method specifically comprises the following steps:
collecting an image of a wastewater discharge port at the current moment;
gray treatment is carried out on the wastewater discharge outlet image at the current moment, and a wastewater discharge outlet gray level image at the current moment is obtained;
performing super-pixel segmentation on the gray level diagram of the wastewater discharge port at the current moment to obtain all super-pixel blocks;
and merging all the super pixel blocks according to the gray level similarity and the space distance between the super pixel blocks to obtain all the super pixel areas in the gray level diagram of the wastewater discharge port.
Further, judging whether the waste water pipeline is blocked at the current moment according to the enhanced waste water discharge port gray level diagram at the current moment, and cleaning the waste water pipeline when the waste water pipeline is blocked comprises:
obtaining the discharge flow of the waste water discharge port at the current moment by utilizing the gray level graph of the waste water discharge port after the enhancement at the current moment and the gray level graph of the waste water discharge port after the enhancement at the previous moment, judging whether the waste water pipeline at the current moment is blocked according to the obtained discharge flow of the waste water discharge port at the current moment, and cleaning the waste water pipeline when the waste water pipeline is blocked; the method for enhancing the gray level map of the waste water discharge outlet enhanced at the previous moment is the same as the method for enhancing the gray level map of the waste water discharge outlet enhanced at the current moment.
Further, the calculating the pixel structure chaotic coefficient of each super pixel area by using the gray value of each pixel point in each super pixel area, the number of the pixel points and the frequency of each gray value includes:
and obtaining the pixel structure chaotic coefficient by adopting an entropy value calculation formula.
Further, the neighborhood direction is set to be 8 neighborhood directions of the pixel points.
Further, the method for obtaining the enhanced demand index comprises the following steps:
in the method, in the process of the invention,an enhancement requirement index indicating the ith pixel point in the g-th super pixel area,/and->A number of neighborhood directions indicating that a gray gradient value exists at an ith pixel point in a g-th super pixel region,/or->And the enhancement requirement index in the d neighborhood direction of the gray gradient value exists in the ith pixel point in the g super pixel area.
The beneficial effects of the invention are as follows. The enhancement weight of each super-pixel region is obtained from the gray features of that region in the gray level map of the wastewater discharge outlet, which gives the stretching coefficient of each pixel point at the macroscopic level, based on macroscopic visual features. The enhancement requirement index of each pixel point is calculated from the gray gradients between each pixel point and its neighborhood pixel points in the gray level map, which gives the stretching coefficient of each pixel point at the microscopic level, based on the gray-gradient features of individual pixel points. Combining the macroscopic and microscopic stretching coefficients yields an adaptive stretching coefficient for each pixel point in the gray level map of the wastewater discharge outlet, so that image details become more prominent and the distortion caused by the indiscriminate enhancement of traditional denoising methods is avoided, which improves the accuracy of identifying blockage of the wastewater pipeline. By improving this identification accuracy, the invention improves the accuracy of assessing whether the wastewater pipeline needs to be cleaned.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a method for treating livestock-raising wastewater based on machine vision according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention provides a machine-vision-based livestock breeding wastewater treatment method, which helps reduce the ineffective work caused by cleaning the breeding wastewater pipeline on a fixed schedule and simplifies the treatment workflow for breeding wastewater.
The solid suspended matter content of wastewater discharged from a typical farm is high, reaching up to 160000 mg/L, and the corresponding organic matter content is also high. Solid-liquid separation greatly reduces the pollutant load of the liquid fraction, but even after separation the liquid still carries a large number of small solid attachments, and after a period of discharge these will block the wastewater pipeline.
The wastewater after solid-liquid separation still contains a large number of tiny solid attachments, generally about 1.5%-2% by content, so the wastewater discharge pipeline must be cleaned periodically to prevent blockage. However, the cleaning process is complicated and the malodorous gas is extremely harmful to personnel health, so the necessity of each periodic cleaning should be evaluated.
In the solid-liquid separation treatment of livestock breeding wastewater, cleaning the wastewater pipeline on a fixed schedule produces a large amount of ineffective work and reduces treatment efficiency. The invention therefore proposes to use machine vision to monitor the discharge flow of the wastewater in real time, so that a blocked wastewater pipeline can be detected and warned of intelligently and then cleaned, which improves the treatment efficiency of livestock breeding wastewater and reduces wasted labor. To cope with the image-quality degradation caused by long-term operation of the monitoring equipment, a preprocessing module is added to denoise the monitored images and enhance their contrast; a traditional enhancement algorithm, which enhances image elements with different loss rates indiscriminately, distorts the contrast within the same region.
An embodiment of a machine vision-based livestock-raising wastewater treatment method of the present invention, as shown in fig. 1, includes:
s101, acquiring a gray level image of a wastewater discharge outlet at the current moment, and performing super-pixel segmentation on the gray level image to obtain all super-pixel areas.
A monitoring camera is installed facing the wastewater discharge outlet and a sampling period is set; the discharge flow of the wastewater can then be monitored by analysing the sampled images, and an implementer can set the sampling period according to the specific situation. The wastewater discharge outlet images collected by the monitoring equipment are then acquired.
Since only the flow at the wastewater discharge outlet needs to be monitored in real time, the collected static frame images are converted to gray scale, which removes the interference of redundant color information, reduces the amount of computation and increases the processing speed.
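As a minimal illustration of this preprocessing step (assuming OpenCV is used and that the sampled frames arrive in BGR order; the patent does not name a library), the conversion can be written as:

```python
import cv2

def to_gray(frame_bgr):
    """Convert one sampled frame from the monitoring camera to a single-channel
    gray image, removing redundant color information before further analysis."""
    return cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
```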
Because the monitoring equipment must monitor the flow data of the wastewater discharge outlet in real time, when the lighting is complex and the image collector has been working for a long time, a large amount of Gaussian noise appears in the collected images and image quality drops severely. When the noise content is high, no usable information can be extracted from the image at all, making it difficult to monitor the wastewater flow accurately and to recognize the flow anomalies caused by pipeline blockage.
Therefore, an image preprocessing module is generally needed to denoise and enhance the image. However, any denoising algorithm smooths indiscriminately, which weakens the contrast between image elements and lowers image clarity, so a contrast enhancement algorithm must also be added to the preprocessing module. Because image noise is randomly distributed, different image elements in the filtered, smoothed image lose contrast at different rates, and a conventional contrast enhancement algorithm applied to elements with different loss rates easily distorts the contrast of same-type image elements within an image region after gray stretching.
The smoothed image loses part of its detail information; the weakened or lost detail edge gradients are not obvious and are easily overlooked. When the edge information between image elements cannot be obtained clearly, the idea of dimensionality reduction can be applied: merge regions whose visual features differ little, separate regions whose visual features differ greatly, and then analyse the gray-gradient features inside the regions with small visual differences. To realize this idea, a super-pixel segmentation algorithm is used to perform super-pixel segmentation on the gray image.
A super-pixel segmentation algorithm classifies pixel points according to the similarity between them; it belongs to image preprocessing and performs an initial region segmentation.
For example: because livestock breeding wastewater has been discharged for a long time, the tank wall around the discharge outlet is heavily soiled and appears in the monitored image as a large region of low gray level. When the separated wastewater is discharged from the outlet, image-quality problems blur the boundary between the low-gray wastewater and the tank wall, so the two are easily assigned to the same super-pixel region on the basis of visual features alone. We therefore need to evaluate whether the coarsely segmented super-pixel regions require further fine segmentation.
Several super-pixel blocks are obtained after super-pixel segmentation; the super-pixel blocks are then merged according to their gray-level similarity and spatial distance to obtain the super-pixel regions.
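The patent does not name a specific super-pixel algorithm or fix the merging thresholds; the sketch below assumes SLIC from scikit-image (0.19 or later for the channel_axis argument) and uses illustrative values for the gray-similarity and centroid-distance thresholds.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_regions(gray, n_segments=200, gray_thresh=12.0, dist_thresh=60.0):
    """Split a gray image (H, W) into super-pixel blocks with SLIC, then merge blocks
    whose mean gray values and centroids are both close; returns a label map whose
    values are the ids of the merged super-pixel regions."""
    labels = slic(gray, n_segments=n_segments, compactness=0.1, channel_axis=None)
    ids = np.unique(labels)
    means = {i: float(gray[labels == i].mean()) for i in ids}            # mean gray per block
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    cents = {i: (float(ys[labels == i].mean()), float(xs[labels == i].mean())) for i in ids}
    parent = {i: i for i in ids}                                         # union-find over block ids
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a in ids:
        for b in ids:
            if a < b:
                close_gray = abs(means[a] - means[b]) < gray_thresh
                close_dist = np.hypot(cents[a][0] - cents[b][0],
                                      cents[a][1] - cents[b][1]) < dist_thresh
                if close_gray and close_dist:
                    parent[find(a)] = find(b)                            # merge similar, nearby blocks
    return np.vectorize(find)(labels)
```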
S102, calculating to obtain a pixel structure chaotic coefficient of each super pixel region by using the gray values of the pixel points in each super pixel region, the number of the pixel points and the frequency of each gray value.
Counting the frequency of each gray value in the super pixel area;
and calculating to obtain the pixel structure chaotic coefficient of each super pixel region by using the gray value of each pixel point in each super pixel region, the number of the pixel points and the frequency of each gray value. The expression of the pixel structure chaotic coefficient of the super pixel region is as follows:
in the method, in the process of the invention,pixel structure confusion factor representing the g-th super-pixel region, < >>Representing the number of pixel points in the g-th super pixel area, < >>Representing the gray value of the ith pixel point in the g-th super pixel area,/and (b)>Representing the probability of the gray value of the ith pixel point in the g-th super pixel area in the area, namely the probability of each gray value, +.>Representing a logarithmic function. The entropy of the gray value of the pixel point in the g super pixel area is calculated, so that the confusion of the pixel structure in the area is represented, namely, the higher the entropy is, the more the pixel structure in the area is disordered.
S103, acquiring gray gradient values of each pixel point in each super pixel region in different neighborhood directions by using gray values of each pixel point in each super pixel region and neighbor pixel points thereof, and calculating to obtain an average gray gradient value of each super pixel region by using the gray gradient values of each pixel point in each super pixel region in different neighborhood directions.
Preferably, in one embodiment of the invention the neighborhood directions are the 8-neighborhood directions of a pixel point. Gray features are extracted within each independent super-pixel region. A difference operator is used to compute the gray gradient of each pixel point in each super-pixel region towards each of its 8 neighbors, i.e. one gray gradient exists between every pair of adjacent pixel points and each gradient value lies between 0 and 255; each pixel point therefore has at least 0 and at most 8 gray gradients. The gray gradient values of all pixel points in the region are collected, and a gray-gradient statistical histogram is constructed with the gradient value on the horizontal axis and the number of occurrences of each gradient value on the vertical axis. Note: gradients equal to 0 are not counted in the gray-gradient statistical histogram.
And calculating the average gray gradient value of the super pixel region according to the gray gradient value in the gray gradient statistical histogram of the super pixel region.
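A sketch of the 8-neighborhood gradient extraction and the gray-gradient statistical histogram for one super-pixel region; the use of absolute differences as the difference operator is an assumption, and zero gradients are excluded as the text requires.

```python
import numpy as np

# 8-neighborhood offsets (dy, dx).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def region_gradients(gray, labels, g):
    """Absolute gray differences of every pixel of region g towards each of its
    8 neighbors; returns one list of gradient values per pixel, in the
    np.nonzero scan order of the region."""
    h, w = gray.shape
    ys, xs = np.nonzero(labels == g)
    grads = []
    for y, x in zip(ys, xs):
        gp = []
        for dy, dx in OFFSETS:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                gp.append(abs(int(gray[y, x]) - int(gray[ny, nx])))
        grads.append(gp)
    return grads

def gradient_histogram(grads):
    """Gray-gradient statistical histogram of a region (zero gradients are not counted)
    and the region's average gray gradient value."""
    flat = [v for gp in grads for v in gp if v != 0]
    hist = np.bincount(flat, minlength=256) if flat else np.zeros(256, dtype=int)
    avg = float(np.mean(flat)) if flat else 0.0
    return hist, avg
```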
S104, calculating to obtain the texture richness of each super pixel area by using the average gray gradient value of each super pixel area, the number of pixel points in the super pixel area and the gray gradient values of each pixel point in the super pixel area in different neighborhood directions.
And calculating to obtain the texture richness of each super pixel region by using the average gray gradient value of each super pixel region, the number of pixel points in the super pixel region and the gray gradient values of each pixel point in the super pixel region in different neighborhood directions. The expression of the texture richness of the super pixel region is as follows:
in the method, in the process of the invention,texture richness indicating the g-th super pixel area,/->Representing the number of pixel points in the g-th super pixel area, < >>Mean gray gradient value representing the g-th superpixel region,>representing the number of the (u) th gray gradient values in the (g) th superpixel region,/th gray gradient value>The number of kinds of gray gradient values in the g-th super pixel region is represented. />Indicating that the number of all gray gradient values in the g-th super-pixel region is added up. Then (I)>Represents all of the super pixel regions in the g-th super pixel regionDividing the total number of gray gradient values by the total number of pixel points in the super pixel area to obtain the average gray gradient value number of each pixel point, if the value is larger, the texture detail in the super pixel area is more abundant, but the formula is used for representing the average gray gradient value number of the pixel points in the super pixel area and does not contain the gray gradient value, so the method is more effective>Meaning of (2): the average gray gradient value number of the pixel points in the super pixel area can be used for representing the richness of the texture in the super pixel area, but the gray gradient value between the textures cannot be ignored, otherwise, the average gray gradient value number of the pixel points in one super pixel area is large, but only extremely small gray gradient values are mutually different, so that the super pixel area does not have the characteristic of richness of the texture, and therefore, the average gray gradient value number of the pixel points in the super pixel area is multiplied by the average gray gradient value of the pixel points in the super pixel area to evaluate the richness of the texture in the super pixel area.
S105, calculating the enhancement weight of each super-pixel region by using the pixel structure chaotic coefficient and the texture richness of each super-pixel region.
The method is characterized in that the inside of the same super-pixel area after super-pixel segmentation is considered to be similar pixel points in visual characteristics, and the purpose of counting the 8 neighborhood gray scale gradients of all the pixel points in each super-pixel area is to evaluate whether more detailed textures exist in the inside of the super-pixel area with similar visual characteristics. Constructing an evaluation function:
in the method, in the process of the invention,enhancement weight representing the g-th super-pixel region,/->Pixel structure representing g-th super pixel regionDisorder coefficient, & lt>Texture richness indicating the g-th super pixel area,/->Representing a hyperbolic tangent function. The chaotic coefficient of the whole structure of the super-pixel region is multiplied by the richness of the texture in the super-pixel region, so that whether the interior of the super-pixel region needs to be subdivided or not is evaluated more comprehensively. And normalizing the evaluation result in a proportional relation by using a hyperbolic tangent function to obtain the self-adaptive enhancement weight of the g super-pixel region. The pixel point composition and the richness of the texture in the super pixel area are evaluated, if the evaluation value in the super pixel area is higher, the super pixel area divided according to the visual characteristics is required to be further finely divided, then higher enhancement weight is required, if the evaluation value in one super pixel area is lower, the internal structure of the super pixel area is stable, and the requirement for finely dividing is lower, then lower enhancement weight is required.
S106, taking the maximum gray gradient value of all pixel points in each super pixel area in different neighborhood directions as the maximum gray gradient value of the super pixel area.
And acquiring gray gradient values of each pixel point in the super pixel region in different neighborhood directions, and selecting the maximum value of the gray gradient values as the maximum value of the gray gradient values of the super pixel region.
S107, calculating to obtain an enhancement requirement index of each pixel point in each super pixel region by using the gray gradient values of each pixel point in each super pixel region in different neighborhood directions, the maximum gray gradient value of the super pixel region and the number of the neighborhood directions of each pixel point in the super pixel region.
Because the gray-gradient statistical histogram of each super-pixel region collects the gray gradients inside a region of visually similar pixels, the larger a gray gradient is within such a region, i.e. within the same super-pixel region, the more prominently it needs to be represented, so that the gray differences inside visually similar regions are pulled apart. In short, the small differences within a class are amplified. Thus:
in the method, in the process of the invention,enhancement requirement index, which represents the value of the u-th gray gradient in the g-th superpixel region,>represents the value of the (u) th gray gradient in the (g) th superpixel region,/and (g)>The maximum value of the gray gradient value of the g-th super pixel region is represented. The saliency of each gray gradient value in the super-pixel region is obtained by calculating the ratio of each gray gradient value to the maximum gray gradient value, namely, the larger the ratio is, the more the gray gradient value needs to be highlighted, and the larger the corresponding enhancement requirement index is.
The indices E_u^g are then projected back onto the pixel points: each of the (up to 8) neighborhood gray gradients of a pixel point has an E value, and the E values of all gray gradients of a pixel point are accumulated and averaged to obtain the enhancement requirement index of that pixel point:
F_i^g = \frac{1}{D_i^g} \sum_{d=1}^{D_i^g} E_{i,d}^g
where F_i^g denotes the enhancement requirement index of the i-th pixel point in the g-th super-pixel region, D_i^g denotes the number of neighborhood directions in which the i-th pixel point has a gray gradient value, and E_{i,d}^g denotes the enhancement requirement index of the gray gradient value of the i-th pixel point in its d-th neighborhood direction. The enhancement requirement index of each pixel point is computed from the gray features of the pixel point and its neighbors within the super-pixel region, i.e. the stretching coefficient of the pixel point is obtained from the microscopic view, which highlights the detail parts of the image: the larger the enhancement requirement index of a pixel point, the larger its stretching coefficient.
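A sketch of the per-pixel enhancement requirement index, reusing the per-pixel gradient lists from the earlier sketch; assigning an index of 0 to pixels with no non-zero gradient is an assumption, since the patent only defines the index over directions where a gradient exists.

```python
import numpy as np

def pixel_requirement_indices(grads):
    """F_i^g for every pixel of one region: each non-zero neighborhood gradient gets
    E = gradient / max gradient of the region, and the E values of a pixel are averaged."""
    g_max = max((v for gp in grads for v in gp if v != 0), default=0)
    indices = []
    for gp in grads:
        nz = [v for v in gp if v != 0]
        if g_max == 0 or not nz:
            indices.append(0.0)               # flat pixel: no gradient to highlight (assumption)
        else:
            indices.append(float(np.mean([v / g_max for v in nz])))
    return indices
```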
S108, calculating the gray scale stretching coefficient of each pixel point in each super pixel region by using the enhancement weight of each super pixel region and the enhancement requirement index of each pixel point in each super pixel region.
The final gray scale stretch factor for each pixel is:
in the method, in the process of the invention,representing the gray scale stretch factor of the ith pixel point in the g-th super pixel area,/and->Enhancement weight representing the g-th super-pixel region,/->Representing the enhancement requirement index of the ith pixel point in the g-th super pixel area. The meaning here is:enhancement requirement index indicating pixel under macroscopic vision, < +.>The enhancement requirement index of the pixel points under the microscopic vision is represented, the self-adaptive stretching coefficient of each pixel point is obtained by utilizing the cooperative mode of the macroscopic vision characteristics and the gray gradient characteristics of the microscopic pixel points, so that the details of the image are more prominent, the distortion problem caused by indiscriminate enhancement of the traditional algorithm is avoided, the definition and the stability of the monitoring image are greatly improved, and the machine vision intelligent monitoring waste water pipeline blockage is more credible.
And S109, carrying out gray stretching enhancement on the gray level map of the waste water discharge outlet according to the gray level stretching coefficient of each pixel point in each super pixel area, and obtaining the gray level map of the waste water discharge outlet after enhancement at the current moment.
And then obtaining a gray scale stretching result of the pixel point j, namely:
in the method, in the process of the invention,representing gray value before stretching of jth pixel point in wastewater discharge port gray map, +.>Representing the gray value of the j-th pixel point in the gray map of the wastewater discharge port after stretching, and +.>And the gray scale tensile coefficient of the j pixel point in the gray scale map of the wastewater discharge port is represented. On the basis of the gray value before stretching of each pixel point, each pixel point is subjected to self-adaptive stretching, and the stretched pixel point gray value is obtained, namely the pixel point gray value after being enhanced, so that the contrast of the stretched waste water discharge port gray map is enhanced, and the definition and the stability of a monitoring image are greatly improved. Then gray scale adjustment is carried out on all pixel points of the whole image to obtain contrast enhancementAnd (5) a gray scale map of the waste water discharge outlet.
S110, acquiring the discharge flow of the wastewater discharge port at the current moment by utilizing the enhanced gray level graph of the wastewater discharge port at the current moment and the enhanced gray level graph of the wastewater discharge port at the previous moment, judging whether a wastewater pipeline at the current moment is blocked or not according to the acquired discharge flow of the wastewater discharge port at the current moment, and cleaning the wastewater pipeline when the wastewater pipeline is blocked; the method for enhancing the gray level map of the waste water discharge outlet enhanced at the previous moment is the same as the method for enhancing the gray level map of the waste water discharge outlet enhanced at the current moment.
The enhanced image not only eliminates the noise problem, but also ensures the definition of the image, and improves the stability and the practicability of the blocking condition of the wastewater discharge port of the machine vision monitoring.
The existing video flow measurement technology obtains flow velocity data through video analysis, adopts the traditional buoy method principle, and calculates physical distance and detection time of a target object by tracking and matching the enhanced waste water discharge port gray level map at the current moment and the target object in the enhanced waste water discharge port gray level map at the previous moment, so that a real-time flow velocity value of the waste water surface at the current moment is obtained, and section flow of the waste water discharge port can be calculated according to the real-time flow velocity value. And identifying the blocking condition of the waste water pipeline according to the section flow, and evaluating whether the waste water pipeline needs to be cleaned according to the blocking condition of the waste water pipeline. The target object in the enhanced waste water discharge port gray level diagram refers to the water surface texture characteristics such as rigid floaters or waves, bubbles and the like in the gray level diagram. The tracking matching algorithm is in the prior art, and this embodiment is not described in detail.
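The patent relies on prior-art tracking and matching of surface features. As one hedged illustration of how the two enhanced frames could yield a flow value and a blockage flag, the sketch below substitutes dense optical flow (OpenCV Farneback) for the feature tracking, and treats the pixel-to-metre scale, the cross-section area and the minimum-flow threshold as assumed calibration inputs.

```python
import cv2
import numpy as np

def discharge_flow_and_check(prev_enh, curr_enh, dt_s, m_per_px, section_area_m2,
                             min_flow_m3_s):
    """Estimate the surface flow velocity between two enhanced gray frames with dense
    optical flow (a stand-in for the buoy-style feature tracking named in the text),
    convert it to a cross-section flow, and flag a possible blockage."""
    flow = cv2.calcOpticalFlowFarneback(prev_enh, curr_enh, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed_px = np.median(np.hypot(flow[..., 0], flow[..., 1]))  # robust pixel displacement
    velocity = speed_px * m_per_px / dt_s                       # surface velocity, m/s
    flow_rate = velocity * section_area_m2                      # cross-section flow, m^3/s
    blocked = flow_rate < min_flow_m3_s                         # abnormal drop: clean the pipe
    return flow_rate, blocked
```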
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (4)

1. A method for treating livestock-raising wastewater based on machine vision, which is characterized by comprising the following steps:
acquiring a gray level image of a wastewater discharge port at the current moment, and performing super-pixel segmentation on the gray level image to obtain all super-pixel areas;
calculating to obtain a pixel structure chaotic coefficient of each super pixel region by using the gray value of each pixel point in each super pixel region, the number of the pixel points and the frequency of each gray value;
calculating the average gray gradient value of each super pixel region by using the gray gradient values of each pixel point in each super pixel region in different neighborhood directions;
calculating to obtain the texture richness of each super pixel region by using the average gray gradient value of each super pixel region, the number of pixel points in the super pixel region and the gray gradient values of each pixel point in the super pixel region in different neighborhood directions; calculating to obtain the enhancement weight of each super-pixel region by using the pixel structure chaotic coefficient and the texture richness of each super-pixel region;
taking the maximum gray gradient value of all pixel points in each super pixel area in different neighborhood directions as the maximum gray gradient value of the super pixel area;
calculating to obtain an enhancement requirement index of each pixel point in each super pixel region by using the gray gradient value of each pixel point in each super pixel region in different neighborhood directions, the maximum gray gradient value of the super pixel region and the number of the neighborhood directions of each pixel point in the super pixel region;
calculating to obtain the gray scale stretching coefficient of each pixel point in each super pixel region by using the enhancement weight of each super pixel region and the enhancement demand index of each pixel point in each super pixel region;
carrying out gray stretching enhancement on the gray level map of the waste water discharge port according to the gray level stretching coefficient of each pixel point in each super pixel region to obtain the gray level map of the waste water discharge port after enhancement at the current moment;
judging whether the waste water pipeline is blocked at the current moment according to the enhanced waste water discharge port gray level diagram at the current moment, and cleaning the waste water pipeline when the waste water pipeline is blocked;
the calculating to obtain the pixel structure chaotic coefficient of each super pixel area by using the gray value of each pixel point in each super pixel area, the number of the pixel points and the frequency of each gray value specifically comprises the following steps:
the following is performed for each super pixel region:
counting the frequency of each gray value in the super pixel area; calculating to obtain a pixel structure chaotic coefficient of each super pixel region by using the gray value of each pixel point in each super pixel region, the number of the pixel points and the frequency of each gray value;
the calculating the pixel structure chaotic coefficient of each super pixel area by using the gray value of each pixel point in each super pixel area, the number of the pixel points and the frequency of each gray value comprises the following steps:
obtaining the pixel structure chaotic coefficient by adopting an entropy value calculation formula;
the calculating to obtain the texture richness of each super pixel area by using the average gray gradient value of each super pixel area, the number of pixel points in the super pixel area and the gray gradient values of each pixel point in the super pixel area in different neighborhood directions specifically comprises the following steps:
the following is performed for each super pixel region:
taking any pixel point in the super pixel area as a first pixel point, and acquiring a neighborhood pixel point of the first pixel point;
calculating to obtain gray gradient values of the first pixel points in different neighborhood directions by using gray values of the first pixel points and the neighborhood pixel points thereof;
obtaining gray gradient values of each pixel point in the super-pixel region in different neighborhood directions according to the method for obtaining the gray gradient values of the first pixel point in different neighborhood directions;
the gray gradient values of each pixel point in the super pixel area in different neighborhood directions are taken as horizontal axes, the number of each gray gradient value is taken as vertical axis, and a gray gradient statistical histogram of the super pixel area is constructed;
calculating the average gray gradient value of the super pixel region according to the gray gradient value in the gray gradient statistical histogram of the super pixel region;
calculating to obtain the texture richness of each super pixel region by using the average gray gradient value of each super pixel region, the number of pixel points in the super pixel region and the gray gradient values of each pixel point in the super pixel region in different neighborhood directions;
the method for calculating the enhancement requirement index of each pixel point in each super pixel region by using the gray gradient values of each pixel point in each super pixel region in different neighborhood directions, the maximum gray gradient value of the super pixel region and the number of the neighborhood directions of each pixel point in the super pixel region specifically comprises the following steps:
the following is performed for each super pixel region:
selecting any pixel point in the super pixel area as a first pixel point, and selecting a gray gradient value in any neighborhood direction of the first pixel point as a first gray gradient value;
taking the maximum gray gradient values of all pixel points in the super pixel area in different neighborhood directions as the maximum gray gradient values of the super pixel area;
calculating to obtain an enhancement requirement index of the first gray gradient value by using the maximum gray gradient value and the first gray gradient value of the super pixel region;
obtaining the enhancement requirement indexes of the gray gradient values of the first pixel point in other neighborhood directions according to the method for obtaining the enhancement requirement indexes of the first gray gradient values, and obtaining the enhancement requirement indexes of the gray gradient values of the first pixel point in different neighborhood directions;
obtaining the enhancement requirement indexes of the gray gradient values of each pixel point in the super-pixel region in different neighborhood directions according to the method for obtaining the enhancement requirement indexes of the gray gradient values of the first pixel point in different neighborhood directions;
calculating to obtain the enhancement requirement index of each pixel point in the super pixel region by using the enhancement requirement index of the gray gradient value of each pixel point in the super pixel region in different neighborhood directions and the number of the neighborhood directions of each pixel point in the super pixel region;
the method for acquiring the enhanced demand index comprises the following steps:
in the method, in the process of the invention,an enhancement requirement index indicating the ith pixel point in the g-th super pixel area,/and->A number of neighborhood directions indicating that a gray gradient value exists at an ith pixel point in a g-th super pixel region,/or->An enhancement requirement index in the d neighborhood direction of the gray gradient value of the ith pixel point in the g-th super pixel area;
the expression of the gray scale stretching coefficient of each pixel point in each super pixel area is as follows:
in the method, in the process of the invention,representing the gray scale stretch factor of the ith pixel point in the g-th super pixel area,/and->Enhancement weight representing the g-th super-pixel region,/->An enhancement requirement index representing an ith pixel point in the g-th super pixel region;
the calculation formula of the enhancement weight is as follows:
in the method, in the process of the invention,enhancement weight representing the g-th super-pixel region,/->Pixel structure confusion factor representing the g-th super-pixel region, < >>Texture richness indicating the g-th super pixel area,/->Representing a hyperbolic tangent function.
2. The method for treating livestock-raising wastewater based on machine vision according to claim 1, wherein the steps of obtaining a gray scale image of a wastewater discharge outlet at the current moment, and performing super-pixel segmentation on the gray scale image to obtain all super-pixel areas comprise:
collecting an image of a wastewater discharge port at the current moment;
gray treatment is carried out on the wastewater discharge outlet image at the current moment, and a wastewater discharge outlet gray level image at the current moment is obtained;
performing super-pixel segmentation on the gray level diagram of the wastewater discharge port at the current moment to obtain all super-pixel blocks;
and merging all the super pixel blocks according to the gray level similarity and the space distance between the super pixel blocks to obtain all the super pixel areas in the gray level diagram of the wastewater discharge port.
3. A machine vision-based livestock-raising wastewater treatment method according to claim 1, wherein the determining whether the wastewater pipeline is blocked at the current time according to the enhanced wastewater discharge outlet gray scale map at the current time, and cleaning the wastewater pipeline when the wastewater pipeline is blocked comprises:
obtaining the discharge flow of the wastewater discharge outlet at the current moment by utilizing the enhanced wastewater discharge outlet gray level map at the current moment and the enhanced wastewater discharge outlet gray level map at the previous moment, judging whether the wastewater pipeline is blocked at the current moment according to the obtained discharge flow of the wastewater discharge outlet at the current moment, and cleaning the wastewater pipeline when the wastewater pipeline is blocked; the enhanced wastewater discharge outlet gray level map at the previous moment is obtained by the same enhancement method as the enhanced wastewater discharge outlet gray level map at the current moment.
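Claim 3 only states that a discharge flow is obtained from the enhanced gray level maps of two consecutive moments and compared against a blockage criterion; the sketch below therefore uses a crude stand-in, treating the bright discharge-stream area as a flow proxy and flagging a blockage when the proxy drops sharply between the previous and current frame. The threshold values and the proxy itself are assumptions, not parameters taken from the patent.

```python
import numpy as np

def is_blocked(enhanced_prev, enhanced_curr, stream_thresh=180, flow_drop=0.5):
    """Illustrative blockage check from two enhanced gray level maps.

    The discharge stream is approximated as the pixels brighter than
    stream_thresh; the 'discharge flow' proxy is the stream area of the
    current frame relative to the previous one, and a relative drop below
    flow_drop is flagged as a blockage."""
    area_prev = np.count_nonzero(enhanced_prev >= stream_thresh)
    area_curr = np.count_nonzero(enhanced_curr >= stream_thresh)
    if area_prev == 0:
        return False  # no reference stream area to compare against
    return area_curr / area_prev < flow_drop
```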
4. A machine vision-based livestock-raising wastewater treatment method according to claim 1, wherein the neighborhood directions are set to be the 8-neighborhood directions of a pixel.
CN202310952482.XA 2023-08-01 2023-08-01 Livestock breeding wastewater treatment method based on machine vision Active CN116665137B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310952482.XA CN116665137B (en) 2023-08-01 2023-08-01 Livestock breeding wastewater treatment method based on machine vision

Publications (2)

Publication Number Publication Date
CN116665137A CN116665137A (en) 2023-08-29
CN116665137B CN116665137B (en) 2023-10-10

Family

ID=87724635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310952482.XA Active CN116665137B (en) 2023-08-01 2023-08-01 Livestock breeding wastewater treatment method based on machine vision

Country Status (1)

Country Link
CN (1) CN116665137B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116863323B (en) * 2023-09-04 2023-11-24 济宁鑫惠生水产养殖专业合作社 Visual detection method and system for pollution of water source for fishery culture
CN117437129B (en) * 2023-12-18 2024-03-08 山东心传矿山机电设备有限公司 Mining intelligent water pump impeller fault image detail enhancement method
CN117649414B (en) * 2024-01-30 2024-04-09 天津工大纺织助剂有限公司 Textile auxiliary production wastewater treatment equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950508B (en) * 2021-03-12 2022-02-11 中国矿业大学(北京) Drainage pipeline video data restoration method based on computer vision

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021217642A1 (en) * 2020-04-30 2021-11-04 深圳市大疆创新科技有限公司 Infrared image processing method and apparatus, and movable platform
CN111914686A (en) * 2020-07-15 2020-11-10 云南电网有限责任公司带电作业分公司 SAR remote sensing image water area extraction method, device and system based on surrounding area association and pattern recognition
AU2020103026A4 (en) * 2020-10-27 2020-12-24 Nanjing Forestry University A Single Tree Crown Segmentation Algorithm Based on Super-pixels and Topological Features in Aerial Images
CN113705371A (en) * 2021-08-10 2021-11-26 武汉理工大学 Method and device for segmenting aquatic visual scene
CN115034973A (en) * 2022-04-24 2022-09-09 海门宝宏机械有限公司 Part image enhancement method based on texture stability
CN114972329A (en) * 2022-07-13 2022-08-30 江苏裕荣光电科技有限公司 Image enhancement method and system of surface defect detector based on image processing
CN115170572A (en) * 2022-09-08 2022-10-11 山东瑞峰新材料科技有限公司 BOPP composite film surface gluing quality monitoring method
CN115330783A (en) * 2022-10-13 2022-11-11 启东谷诚不锈钢制品有限公司 Steel wire rope defect detection method
WO2023134791A2 (en) * 2022-12-16 2023-07-20 苏州迈创信息技术有限公司 Environmental security engineering monitoring data management method and system
CN115861135A (en) * 2023-03-01 2023-03-28 铜牛能源科技(山东)有限公司 Image enhancement and identification method applied to box panoramic detection
CN116486061A (en) * 2023-06-20 2023-07-25 苏州德斯米尔智能科技有限公司 Sewage treatment effect detection method based on machine vision

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A Real-Time Distributed Deep Learning Approach for Intelligent Event Recognition in Long Distance Pipeline Monitoring with DOFS; Jiping Chen et al.; 2018 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery; full text *
Learning to segment roads for traffic analysis in urban images; Marcelo Santos et al.; 2013 IEEE Intelligent Vehicles Symposium; full text *
Application of an infrared image enhancement algorithm in UAV inspection of power transmission lines; Chen Keyu; Shi Shushan; Electronic Design Engineering (16); full text *
Depth image restoration method based on pixel filtering and median filtering; Liu Jizhong et al.; Journal of Optoelectronics · Laser; full text *
Super-pixel segmentation method combining similarity and dissimilarity measures; Qi Ruiguang; Zhang Hesheng; Remote Sensing Information (04); full text *
Research on UAV inspection of long-distance water conveyance channels coupled with BIM and intelligent image recognition of dangerous situations; Chen Junjie; China Doctoral Dissertations Full-text Database; full text *

Also Published As

Publication number Publication date
CN116665137A (en) 2023-08-29

Similar Documents

Publication Publication Date Title
CN116665137B (en) Livestock breeding wastewater treatment method based on machine vision
CN103630473B (en) Active sludge on-line computer graphical analysis early warning system and method
CN107392885A (en) A kind of method for detecting infrared puniness target of view-based access control model contrast mechanism
CN104075965B (en) A kind of micro-image grain graininess measuring method based on watershed segmentation
CN115841434B (en) Infrared image enhancement method for gas concentration analysis
CN112857471B (en) Industrial Internet of things chemical wastewater treatment emission on-line monitoring and early warning management cloud platform
CN113408687B (en) High-flux fry online counting device and method
CN105547602A (en) Subway tunnel segment leakage water remote measurement method
CN115789527A (en) Analysis system and method based on water environment informatization treatment
CN115761563A (en) River surface flow velocity calculation method and system based on optical flow measurement and calculation
CN104182992A (en) Method for detecting small targets on the sea on the basis of panoramic vision
CN114387235B (en) Water environment monitoring method and system
CN114639064A (en) Water level identification method and device
CN112198170A (en) Detection method for identifying water drops in three-dimensional detection of outer surface of seamless steel pipe
CN116071692A (en) Morphological image processing-based water gauge water level identification method and system
CN107993243B (en) Wheat tillering number automatic detection method based on RGB image
CN111798529B (en) Pipe network free outflow flow on-line monitoring method based on image recognition
CN109272484A (en) A kind of rainfall detection method based on video image
CN117291967A (en) Inland ship topside pixel height automatic measurement method based on monocular image
CN114782561B (en) Smart agriculture cloud platform monitoring system based on big data
CN115115624B (en) Rolling damage detection method for anti-corrosion coating of cable bridge
CN116645628A (en) Pollution discharge detection method and system in severe weather
CN109343063B (en) Automatic clear sky echo identification method and system for millimeter wave cloud measuring instrument
JPH0649195B2 (en) Microorganism detection device
CN117593300B (en) PE pipe crack defect detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant