CN112884731A - Water level detection method and river channel monitoring method based on machine vision - Google Patents


Info

Publication number
CN112884731A
CN112884731A
Authority
CN
China
Prior art keywords
water gauge
blue
water
lines
water level
Prior art date
Legal status
Granted
Application number
CN202110169933.3A
Other languages
Chinese (zh)
Other versions
CN112884731B (en)
Inventor
甘小皓
钟璞星
Current Assignee
Huimu Chongqing Technology Co ltd
Original Assignee
Huimu Chongqing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Huimu Chongqing Technology Co ltd
Priority to CN202110169933.3A
Publication of CN112884731A
Application granted
Publication of CN112884731B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G01F 23/04 — Indicating or measuring liquid level or level of fluent solid material by dip members, e.g. dip-sticks
    • G06F 18/23213 — Non-hierarchical clustering techniques using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/28 — Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06T 2207/10024 — Color image (image acquisition modality)
    • G06T 2207/20081 — Training; learning (special algorithmic details)

Abstract

The invention discloses a machine-vision-based water level detection method and a river channel monitoring method. The water level detection method begins by obtaining a water gauge of the following structure: the water gauge comprises a water gauge body painted with water gauge lines. The lines include a vertically painted characteristic color line, with scale mark blocks staggered alternately along the length direction on both sides of it. Each scale mark block consists of three characteristic scale lines painted at intervals; the characteristic scale lines and the characteristic color line are painted red, and white scale lines are painted between adjacent characteristic scale lines, so that each scale mark block and the characteristic color line form an "E" shape. The areas between the scale mark blocks on one side are painted with blue color blocks, and the areas between the scale mark blocks on the other side carry numerals. Among other advantages, the method does not need to recognize the water gauge's characters.

Description

Water level detection method and river channel monitoring method based on machine vision
Technical Field
The invention relates to the technical field of water level detection, in particular to a water level detection method and a river channel monitoring method based on machine vision.
Background
The water level is one of the basic hydrological elements of rivers, lakes and reservoirs. Since information such as urban and irrigation-district water supply, rainstorm and flood flow, runoff, sediment, and nutrient transport rates is generally derived from water level measurements, continuous and reliable water level monitoring is of great significance for improving flood-prevention and drought-resistance early warning and forecasting and the day-to-day supervision of rivers and lakes. The water gauge, which records the water level by direct reading, is the most intuitive and simple measuring tool; however, a traditional water gauge requires manual periodic observation, so its degree of automation is low and the labor intensity on personnel is high.
At present, video monitoring systems equipped with standard water gauges have been built at many important water level observation points in China, providing favorable conditions for water gauge level detection based on video images. The image method uses an image sensor in place of the human eye to capture a water gauge image and detects the reading at the water level line through image processing, obtaining the water level automatically. Compared with existing methods, the image method is in principle non-contact and free of temperature drift and conversion error, so image-based water level detection has become a new research hotspot in machine vision and hydrological measurement in recent years.
Chinese patent publication CN107367310A discloses a water level identification method based on a binary-coded character water gauge and image processing, which builds a binary-code character localization and segmentation model and a water gauge scale mark extraction model, identifies characters by template matching, and then converts them into a water level value; however, its detection precision is hard to guarantee when the image resolution is low and the gauge's scale marks and characters are unclear.
Disclosure of Invention
Aiming at the deficiencies of the prior art, the technical problem to be solved by the invention is: how to provide a machine-vision-based water level detection method and river channel monitoring method that do not require recognizing the water gauge's characters.
In order to solve the technical problems, the invention adopts the following technical scheme:
a water level detection method based on machine vision is characterized by comprising the following steps:
s1, firstly, obtaining a water gauge with the following structure, wherein the water gauge comprises a water gauge body, water gauge lines are coated on the water gauge body, the water gauge lines comprise characteristic color lines coated vertically, scale mark blocks which are sequentially arranged in a staggered manner along the length direction are arranged on two sides of each characteristic color line, each scale mark block comprises three characteristic scale lines arranged at intervals, the characteristic scale lines and the characteristic color lines are coated in a red manner, and white scale lines are coated between every two adjacent characteristic scale lines, so that the scale mark blocks and the characteristic color lines form an E shape; the area between the scale mark blocks on one side is coated with blue color blocks, and the area between the scale mark blocks on the other side is coated with numbers;
s2, under various light conditions, acquiring images of the water gauge, converting the images into HSV images, and counting HSV threshold ranges of a red area and a blue area in the water gauge images;
s3, water level reading: converting the acquired water gauge image from the BGR channel into an HSV channel, calculating a binary image of the blue region according to the HSV threshold range of the blue region acquired in the step S2, and performing morphological operation on the image; obtaining the outline of a circumscribed rectangle from the binary image of the blue area to obtain a rectangular frame set of the blue area in the water gauge image; and acquiring the maximum longitudinal coordinate value and the minimum longitudinal coordinate value of each rectangular frame in the rectangular frame set of the blue region, and sequentially converting the maximum longitudinal coordinate value and the minimum longitudinal coordinate value of the rectangular frames into actual longitudinal dimensions by taking the top of a scale mark on the water gauge as a starting point according to the actual longitudinal dimensions of the blue region on the water gauge, so that the height of the water level can be read.
Further, before the water level is read, the water gauge area is extracted, when the water gauge area is extracted, the acquired water gauge image is converted into an HSV channel from the BGR channel, a binary image of the red area and a binary image of the blue area are respectively calculated according to the HSV threshold ranges of the red area and the blue area acquired in the step S2, and morphological operation is performed on the images; respectively obtaining the outline of a circumscribed rectangle from the binary image of the red area and the binary image of the blue area to obtain a rectangular frame set of all red areas and a rectangular frame set of the blue area in the water gauge image; and extracting the regions where the rectangular frame set of the red region and the rectangular frame set of the blue region are located as water gauge regions.
Because the water gauge occupies only a small area of the captured image, extracting the water gauge region first and then performing recognition helps improve recognition precision.
Further, the distance SV between pairs of rectangular frames in the blue-region rectangular frame set is calculated iteratively, and any two frames whose distance SV exceeds a set value are merged into their minimum circumscribed rectangle; the frames joined into a long strip are taken as the blue-region rectangular frame. SV = (S1 + S2 − SC)/S3, where S1 and S2 are the areas of the two rectangular frames, SC is their intersection area, and S3 is the area of the minimum circumscribed rectangle of the two frames (this circumscribed rectangle has the same orientation as the two frames, i.e., its length and width are parallel to theirs).
Because the blue areas on the water gauge are evenly spaced, iterative merging by the distance between pairs of rectangular frames can join all the blue areas into one whole along the length of the water gauge, which makes it easy to reject interference from other blue regions in the image.
Further, the distance SV between each rectangular frame in the red-region rectangle set and the blue-region rectangular frame is calculated iteratively, and any red frame whose distance SV exceeds a set value is merged with the blue-region rectangular frame, finally yielding the water gauge region rectangular frame.
By using the distance relationship between the red and blue areas on the water gauge, interference from other red regions in the image can be eliminated.
Further, in step S1, a floating block is slidably sleeved on the water gauge body, an upper end surface of the floating block is higher than the water surface and perpendicular to the axis of the water gauge body, a blue color block is coated on one side of the upper end surface of the floating block corresponding to the number on the water gauge, and the blue color block extends to the position corresponding to the characteristic color line toward the middle.
Therefore, the reading line can be determined by utilizing the edge line of the blue color block on the upper end surface of the floating block, and the water level height can be accurately read by utilizing the fixed distance between the upper end surface of the floating block and the water surface.
Furthermore, the back side of the water gauge body is provided with a convex rib arranged along the length direction, and the floating block is provided with a groove arranged corresponding to the convex rib.
Therefore, the floating block can be prevented from rotating relative to the water gauge body, and the relative position between the blue color block on the upper end face of the floating block and the water gauge body is ensured.
Further, the width of the characteristic color line is more than 20 mm.
The river channel monitoring method based on the machine vision is characterized by comprising the water level detection method based on the machine vision.
In conclusion, the method and the device have the advantages that the water gauge characters do not need to be recognized, and the like.
Drawings
Fig. 1 and 2 are schematic structural views of a water gauge.
FIG. 3 is a schematic diagram of the intersection of rectangular frames.
Fig. 4 is a binary diagram of the red region.
Fig. 5 is a rectangular frame recognition diagram of a red region.
Fig. 6 is a binary diagram of the blue region.
Fig. 7 is a rectangular frame recognition diagram of a blue region.
Fig. 8 is a diagram of the merged rectangular box identification of the blue region.
Fig. 9 is an identification diagram of a water gauge area.
Fig. 10 is a schematic view of the region of the water gauge to be read.
FIG. 11 is a schematic view of reading identification of blue region.
Fig. 12 is a schematic view of Y-coordinate reading of the blue region.
FIG. 13 is a schematic view of water line region identification.
FIG. 14 is a schematic diagram of water lines obtained by the clustering method and the slicing method.
FIG. 15 is a schematic view of a water line reading.
Fig. 16 is a schematic structural view of a lateral flow cross section.
Fig. 17 is a calibration plate setting diagram.
Fig. 18 is a schematic diagram illustrating a correspondence relationship between pixel coordinates and physical coordinates.
FIG. 19 is a cross-sectional view of the drawing.
Fig. 20 is a schematic diagram of the division of the current measurement area.
Fig. 21 is a flow velocity distribution graph.
Detailed Description
The present invention will be described in further detail below with reference to a river monitoring method based on machine vision using the method of the present invention.
In the specific implementation, a river channel monitoring method based on machine vision includes the machine-vision-based water level detection method; a dedicated water gauge style is designed beforehand according to the requirements of automatic water level recognition and of GB/T 50138-2010 (standard for water level observation). As shown in fig. 1 and 2, the water gauge comprises a water gauge body 1 painted with water gauge lines. The lines include a vertically painted characteristic color line 2; scale mark blocks 3 staggered alternately along the length direction are arranged on both sides of it. Each scale mark block 3 comprises three characteristic scale lines 31 painted at intervals; the characteristic scale lines 31 and the characteristic color line 2 are painted red, and white scale lines 32 are painted between adjacent characteristic scale lines, so that each scale mark block 3 together with the characteristic color line 2 forms an "E" shape. The areas between the scale mark blocks 3 on one side are painted with blue color blocks 4, and the areas between the scale mark blocks 3 on the other side carry numerals (not shown in the figure). Fig. 2 is only an illustration; in actual use the diameter of the water gauge body 1 is relatively larger, so the water gauge lines bend less and are easier to recognize.
The water gauge is characterized in that a floating block 5 is slidably sleeved on the water gauge body, the upper end face of the floating block 5 is higher than the water surface and is perpendicular to the axis of the water gauge body, a blue color block is coated on one side, corresponding to the number on the water gauge, of the upper end face of the floating block, and the blue color block extends to the position corresponding to the characteristic color line towards the middle part. Therefore, the reading line can be determined by utilizing the edge line of the blue color block on the upper end surface of the floating block, and the water level height can be accurately read by utilizing the fixed distance between the upper end surface of the floating block and the water surface.
In practice, the back side of the water gauge body is provided with a rib (not shown in the figure) arranged along the length direction, and the floating block is provided with a groove arranged corresponding to the rib. Therefore, the floating block can be prevented from rotating relative to the water gauge body, and the relative position between the blue color block on the upper end face of the floating block and the water gauge body is ensured.
Water gauge material:
The water gauge main body is made of 304 stainless steel.
The paint used for the surface characteristic colors should satisfy the following requirements:
(1) matte: it is advisable to use a matte finish, preferably fully matte (60 ℃ gloss meter measuring less than 5GU, or 85 ℃ gloss meter measuring less than 10 GU). The surface reflection of the water gauge can greatly affect the image quality and influence the identification precision.
(2) Waterproof and corrosion-resistant: because the water gauge is fixed in water bodies such as rivers, reservoirs and the like for a long time and is corroded by acid rain and the like, the paint on the surface of the water gauge needs to be waterproof and corrosion-resistant.
(3) Sun protection: the water gauge is exposed to burning sun for a long time, and common paint is easy to deteriorate, change in color and even fall off.
The overall style of the water gauge is shown in fig. 1: the ground color is white (RGB: 255,255,255), the numerals are painted red (RGB: 255,0,0), the total height of the red-and-white stripes of a scale mark block 3 (i.e. the characteristic scale lines 31 and white scale lines 32) is 50 mm, the height of each thin red stripe (characteristic scale line 31) is 10 mm, and the height of each blue rectangle (blue color block 4, RGB: 0,0,255) is 50 mm.
The whole width of the water gauge line is 100mm (not more than 200mm), and the middle characteristic color line 2 is not less than 20 mm.
The following steps are adopted for identification:
1. Read the picture to obtain its BGR data (denoted bgrData), a W × H × D three-dimensional matrix, where W is the picture width, H the picture height, and D the number of channels (here BGR: B blue, G green, R red). The data structure looks like:
[[[112 127 113][106 121 107][95 111 94]…[43 111 58][47 112 63][48 113 64]]
[[47 108 82][54 115 87][52 116 86]…[116 115 125][113 114 124][113 114 124]]]
2. converting the BGR channel of the obtained picture data into an HSV channel, wherein H is hue, S is saturation and V is brightness, and obtaining hsvData, wherein the data structure is as follows:
[[[58 30 127][58 32 121][62 39 111]…[53 156 111][53 148 112][53 147 113]]
[[43 110 141][44 121 129][44 122 134]…[177 21 122][3 23 122][3 23 121]]]
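The BGR-to-HSV conversion of steps 1–2 would in practice be a single call such as OpenCV's `cv2.cvtColor(bgrData, cv2.COLOR_BGR2HSV)`. As a sketch of what that conversion computes, the per-pixel formula (with OpenCV's convention of packing H into 0–180 so it fits in a uint8) can be written in plain Python:

```python
def bgr_to_hsv_pixel(b, g, r):
    """Convert one 8-bit BGR pixel to OpenCV-style HSV (H in 0..180)."""
    b, g, r = b / 255.0, g / 255.0, r / 255.0
    v = max(b, g, r)
    c = v - min(b, g, r)
    s = 0.0 if v == 0 else c / v
    if c == 0:
        h = 0.0
    elif v == r:
        h = 60 * (((g - b) / c) % 6)
    elif v == g:
        h = 60 * ((b - r) / c + 2)
    else:
        h = 60 * ((r - g) / c + 4)
    # OpenCV halves the 0..360 hue so it fits in a uint8 channel.
    return round(h / 2), round(s * 255), round(v * 255)

print(bgr_to_hsv_pixel(255, 0, 0))  # pure blue  -> (120, 255, 255)
print(bgr_to_hsv_pixel(0, 0, 255))  # pure red   -> (0, 255, 255)
```

This is why the red and blue thresholds later in the document are expressed with H around 0/180 (red) and around 120 (blue).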
3. As shown in fig. 3, let the area of rectangular frame a be S1, the area of rectangular frame b be S2, their intersection be SC, and the area of their minimum circumscribed rectangle be S3; the distance between the two frames is then SV = (S1 + S2 − SC)/S3. SV ranges from 0 to 1: the larger SV is, the closer the two rectangular frames are, and the smaller it is, the farther apart they are.
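The SV measure of step 3 can be sketched for axis-aligned boxes. The (x1, y1, x2, y2) corner representation here is an assumption for illustration; the patent does not fix a data format:

```python
def sv_distance(box_a, box_b):
    """SV = (S1 + S2 - SC) / S3 for axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    s1 = (ax2 - ax1) * (ay2 - ay1)
    s2 = (bx2 - bx1) * (by2 - by1)
    # Intersection area SC (zero when the boxes do not overlap).
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    sc = iw * ih
    # S3: area of the minimum axis-aligned rectangle enclosing both boxes.
    s3 = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return (s1 + s2 - sc) / s3

print(sv_distance((0, 0, 2, 2), (0, 0, 2, 2)))    # identical boxes -> 1.0
print(sv_distance((0, 0, 1, 1), (9, 9, 10, 10)))  # distant boxes  -> 0.02
```

Since S1 + S2 − SC is the union area of the two boxes and the union cannot exceed the enclosing rectangle S3, SV indeed stays in (0, 1].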
4. HSV threshold ranges for the red and blue characteristic colors are obtained by statistics over their variation under various lighting environments. In this example, red is [[[165,120,100],[180,255,255]],[[0,120,100],[30,255,255]]] (two sub-ranges, since the red hue wraps around 0) and blue is [[80,110,50],[130,255,255]].
5. From the red HSV threshold range and hsvData, a binary image maskRed of the red areas is computed; the image then undergoes morphological operations, first erosion and then dilation, to filter out noise, as shown in fig. 4. Circumscribed-rectangle contours are extracted from the maskRed binary image, giving a set redBoxes of rectangular frames of red areas, drawn on bgrData in fig. 5. The figure shows that the red area on the right contains no water gauge; it is an interference item and is filtered out by the subsequent algorithm.
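With OpenCV this step would be `cv2.inRange` per sub-range (OR-ed together), followed by `cv2.erode`/`cv2.dilate` and `cv2.findContours` plus `cv2.boundingRect`. A minimal NumPy equivalent of just the thresholding (morphology and contour extraction omitted), using the thresholds from step 4, might look like:

```python
import numpy as np

def in_range(hsv, lo, hi):
    """NumPy equivalent of cv2.inRange: 255 where lo <= pixel <= hi channel-wise."""
    lo, hi = np.array(lo), np.array(hi)
    mask = np.all((hsv >= lo) & (hsv <= hi), axis=-1)
    return mask.astype(np.uint8) * 255

# Red wraps around H = 0, so the two sub-ranges from step 4 are OR-ed together.
RED_RANGES = [([165, 120, 100], [180, 255, 255]), ([0, 120, 100], [30, 255, 255])]

def red_mask(hsv):
    masks = [in_range(hsv, lo, hi) for lo, hi in RED_RANGES]
    return np.maximum.reduce(masks)

# Tiny synthetic HSV image: one red pixel, one blue pixel.
hsv = np.array([[[0, 200, 200], [120, 200, 200]]], dtype=np.uint8)
print(red_mask(hsv))  # [[255   0]]
```

The same `in_range` with the blue threshold [[80,110,50],[130,255,255]] yields the maskBlue of step 6.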
6. Likewise, from the blue HSV threshold range and hsvData, a binary image maskBlue of the blue areas is computed and morphologically eroded then dilated to filter out noise, as shown in fig. 6. Circumscribed-rectangle contours of the maskBlue binary image give a set blueBoxes of rectangular frames of blue areas, drawn on bgrData in fig. 7.
The blueBoxes are then merged according to the distance defined in step 3. The merging principle: iterate over pairs of frames and compute their SV; when the SV of two frames exceeds 0.5 (an empirical value, adjustable to actual conditions), merge them by replacing the pair with their minimum circumscribed rectangle, which becomes the input to the next iteration; repeat until all adjacent frames are merged, finally obtaining blueBoxes1, whose display on bgrData is shown in fig. 8.
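The iterative merge can be sketched as follows (again assuming the hypothetical (x1, y1, x2, y2) box format; in practice `cv2.boundingRect` boxes would be converted to corner form first):

```python
def sv_distance(box_a, box_b):
    """SV = (S1 + S2 - SC) / S3 for axis-aligned (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    s1 = (ax2 - ax1) * (ay2 - ay1)
    s2 = (bx2 - bx1) * (by2 - by1)
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    s3 = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return (s1 + s2 - iw * ih) / s3

def union_box(a, b):
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def merge_boxes(boxes, threshold=0.5):
    """Repeatedly replace any pair with SV > threshold by its minimum
    circumscribed rectangle, until no pair qualifies (step 6's merge loop)."""
    boxes = list(boxes)
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if sv_distance(boxes[i], boxes[j]) > threshold:
                    new = union_box(boxes[i], boxes[j])
                    boxes = [b for k, b in enumerate(boxes) if k not in (i, j)]
                    boxes.append(new)
                    merged = True
                    break
            if merged:
                break
    return boxes

# Three vertically stacked "blue block" boxes collapse into one strip.
print(merge_boxes([(0, 0, 10, 5), (0, 8, 10, 13), (0, 16, 10, 21)]))
```

Evenly spaced blue blocks along the gauge keep pairwise SV above the threshold, so the loop joins them into a single strip, while distant blue clutter (small SV) is left untouched.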
7. Since the water gauge carries both characteristic colors, the computation above has produced the rectangular frames redBoxes and blueBoxes1 of the two color distributions; when the coincidence of a red area and a blue area is very high (the distance SV between the two frames exceeds 0.9), that area is judged to be the water gauge area.
As shown in fig. 9, the rightmost red area has no blue area close to it (no pair with SV > 0.9), so it is filtered out as an interference term, while the two areas on the left are kept as water gauge areas.
8. The automatically detected water gauge region is cropped out as the input to subsequent water gauge scale recognition; taking the left water gauge in fig. 9 as an example, the cropped region is shown in fig. 10.
First the picture is preprocessed:
a. Keep the aspect ratio fixed: set the picture width to 200 pixels and scale proportionally.
b. Apply Gaussian blur to the picture to obtain picture data imgData.
c. imgData is BGR-channel data; convert it to the HSV channel to obtain hsvData for later use.
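Steps a–c map naturally onto `cv2.resize`, `cv2.GaussianBlur`, and `cv2.cvtColor`; the only arithmetic is the aspect-preserving resize, sketched here (the 640×1440 input size is made up for illustration):

```python
def scaled_size(width, height, target_width=200):
    """New (width, height) with the width fixed at 200 px and the aspect ratio kept."""
    scale = target_width / width
    return target_width, round(height * scale)

print(scaled_size(640, 1440))  # (200, 450)
```

Fixing the width means the downstream pixel-to-reading regression always works at a consistent scale regardless of the camera resolution.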
11. Apply the blue threshold segmentation of step 6 to hsvData to obtain a blue-region binary image maskBlue; extract circumscribed-rectangle contours from it to obtain a set blueBoxes of blue regions, as shown in fig. 11.
12. Sort the blueBoxes by Y coordinate (top to bottom), then take each box's maximum and minimum Y values to form a new list blueYs. In this embodiment, as shown in fig. 12, the result is: 62, 126, 212, 270, 358, 416, 502, 561, 647, 705, 791, 851 — the Y coordinate of each blue-region key point in the image.
13. As fig. 12 shows, the pixel coordinates of the blue-region key points are known. Given the known water gauge specification (here 1 m), each blue contour (blueBox) is 5 cm high, so from top to bottom the water gauge readings at the blue key points are [1.00, 0.95, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.60, 0.55, 0.50, 0.45], defined as readNums. The values of readNums and blueYs correspond one to one. Taking blueYs as the independent variable and readNums as the dependent variable, a linear relation is established between the two; substituting the detected Y coordinate of the water level line into this relation yields the corresponding reading. Detection of the water level line is therefore critical: it directly determines whether the final reading is accurate.
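The linear relation of step 13 can be sketched as a least-squares fit over the key points listed above (NumPy's `polyfit`; the exact reading at a given Y differs in the last digits from the patent's 0.467 depending on the fitting method):

```python
import numpy as np

blueYs = [62, 126, 212, 270, 358, 416, 502, 561, 647, 705, 791, 851]
readNums = [1.00, 0.95, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.60, 0.55, 0.50, 0.45]

# Fit reading = k*y + b: pixel Y coordinate -> water gauge reading in metres.
k, b = np.polyfit(blueYs, readNums, 1)

def reading_at(y):
    return k * y + b

print(round(reading_at(836), 3))  # ~0.46, close to the 0.467 reported in step 18
```

The slope k is negative because pixel Y grows downward while the gauge reading grows upward.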
In this embodiment, on the basis of step 13, the maximum Y coordinate of the blue color block on the upper end face of the float is used as the water level reference line; its reading is taken directly and then corrected by the fixed distance from the float's upper face to the water surface to obtain the true water level height.
The water level line can also be identified by a clustering method and a slicing method, the final water line being determined from the two methods' results and their corresponding weights.
14. Before the water level line detection, the water gauge picture is preprocessed and the detection area is narrowed to reduce error. The penultimate value of blueYs is taken as the upper edge of the target region (here Y = 791), and twice the height of one blue contour is added to obtain the lower edge (791 + 2 × (851 − 791)). As shown in fig. 13, the water line clearly lies within this region. The water level line is then obtained by two methods.
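Reconstructing the arithmetic that step 14 appears to describe (791 and 851 are the last two blueYs values from step 12):

```python
blueYs = [62, 126, 212, 270, 358, 416, 502, 561, 647, 705, 791, 851]

top = blueYs[-2]              # penultimate key point: 791
h = blueYs[-1] - blueYs[-2]   # height spanned by the last blue contour: 60
bottom = top + 2 * h          # lower edge of the detection region
print(top, bottom)  # 791 911
```

Restricting detection to this narrow band keeps reflections and clutter elsewhere in the frame from being mistaken for the water line.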
The clustering and slicing methods for water line detection both rest on the same observation: for a fixed color threshold (red here), once converted to the HSV channel, the H value (hue) varies little with the lighting environment, while the S value (saturation) and V value (brightness) change with ambient light. For the red threshold, the saturation and brightness of the part above the water line therefore differ markedly from the part below it, owing to reflection and refraction at the water surface.
15. The clustering method applies unsupervised KMeans clustering to the two-dimensional array of S and V values extracted within the red threshold, splitting it into two classes (above the water line and below it); the class centers center1 and center2 are obtained and averaged as centerMean = (center1 + center2)/2, computed in this example as [168, 122].
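In practice step 15 would call a library implementation such as scikit-learn's `KMeans(n_clusters=2)`; a minimal stand-in, run on made-up (S, V) samples for the two sides of the water line, shows how centerMean is formed:

```python
import numpy as np

def kmeans2(points, iters=20):
    """Minimal 2-cluster KMeans on (S, V) pairs (stand-in for a library KMeans)."""
    pts = np.asarray(points, dtype=float)
    # Deterministic init: the points with the smallest and largest S value.
    centers = np.array([pts[pts[:, 0].argmin()], pts[pts[:, 0].argmax()]])
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute the centers.
        dists = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = pts[labels == k].mean(axis=0)
    return centers

# Hypothetical samples: saturated/bright red above the line, dull red below it.
sv_samples = [(240, 200), (235, 195), (245, 205), (100, 50), (95, 45), (105, 55)]
center1, center2 = kmeans2(sv_samples)
centerMean = (center1 + center2) / 2
print(centerMean)  # [170. 125.]
```

centerMean then serves in step 16 as the new lower bound on S and V, tightening the red threshold to the "above water" appearance.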
16. The red threshold range preset in step 4 has an initial S (saturation) interval of 120–255 and V (brightness) interval of 100–255. With centerMean = [168, 122], centerS = 168 and centerV = 122 replace the lower bounds of the initial intervals, giving new intervals S: 168–255 and V: 122–255. The updated red threshold range is [[[165,168,122],[180,255,255]],[[0,168,122],[30,255,255]]]. Substituting hsvData into it yields a binary image maskRed. The circumscribed rectangle of maskRed is computed, and the Y value at the bottom of the largest contour is taken as the water line's Y coordinate, defined as waterlineY1; using fig. 13 as the example, waterlineY1 = 836, held for later use.
17. Then the slicing method is used to find the water line: the region obtained in step 14 is divided into 10 equal horizontal slices by picture height (adjustable to actual conditions), each slice here being 12.8 high. The HSV mean within the red threshold is computed for each slice; here the result is: [[169,188,127],[169,169,125],[170,169,122],[171,161,121],[171,168,101],[0,0,0],[0,0,0],[0,0,0],[0,0,0],[0,0,0]]. The differences between the summed H+S+V values of adjacent slices are then computed in turn, and the pair with the largest difference is located — here between the 5th and 6th slices. The water line range is thus narrowed to the 5th slice, and its middle Y coordinate is taken as the water line position, defined as waterlineY2, computed here as 848 and held for later use. The water lines obtained by the clustering and slicing methods are shown in fig. 14.
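The slice arithmetic of step 17 can be sketched directly from the numbers given there (the region top 791 comes from step 14 and the slice height 12.8 from step 17; the patent reports the middle of the 5th slice as 848):

```python
# HSV means (within the red threshold) of the 10 horizontal slices from step 17.
sliceMeans = [[169, 188, 127], [169, 169, 125], [170, 169, 122], [171, 161, 121],
              [171, 168, 101], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]

# Difference of the summed H+S+V values between adjacent slices; the largest
# jump marks the slice pair straddling the water line.
sums = [sum(m) for m in sliceMeans]
diffs = [abs(sums[i] - sums[i + 1]) for i in range(len(sums) - 1)]
idx = diffs.index(max(diffs))  # 4 -> between the 5th and 6th slices

top, sliceH = 791, 12.8                   # region top and slice height
waterlineY2 = top + (idx + 0.5) * sliceH  # middle of the 5th slice
print(idx, round(waterlineY2, 1))
```

The all-zero means of slices 6–10 reflect that below the water line almost no pixels fall inside the red threshold, which is what makes the adjacent-difference jump so pronounced.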
18. The final waterline is defined as waterlineY. When the difference between waterlineY1 and waterlineY2 is greater than 25, waterlineY = (waterlineY1 + waterlineY2)/2; when the difference is less than or equal to 25, waterlineY = waterlineY1. Here waterlineY = 836. Substituting waterlineY = 836 into step 13, the reading readNum corresponding to the waterline is obtained by linear regression as 0.467. The reading and waterline are shown schematically in fig. 15; the waterline and reading are consistent with the actual situation, with an error within 1 cm.
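The combination rule of step 18 can be written as a small helper (a sketch; the function name is illustrative, the 25-pixel threshold is from the text):

```python
def final_waterline(y1, y2, gap=25):
    """Combine the clustering result y1 and the slicing result y2:
    average them when they disagree by more than `gap` pixels,
    otherwise keep the clustering result y1."""
    if abs(y1 - y2) > gap:
        return (y1 + y2) / 2
    return y1
```

With the example values, final_waterline(836, 848) keeps 836, since the two estimates differ by only 12 pixels.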
Further, the river channel monitoring method based on machine vision also comprises a flow velocity measuring method based on computer vision, and comprises the following steps:
1. Selecting the flow-measuring section
Referring to GB 50179-2015 (river flow test specification), the test reach is preferably an upstream reach where section control forms easily, such as at stone beams, rapids, bends, narrows or artificial barrages. The distance from the upstream reach of a stone beam, rapid, bend or narrows to the section control should be 5 times the river width; for mountain streams this may be relaxed to 3 times the river width. Alternatively, a channel-controlled reach may be selected in which the bottom slope, cross-section shape, roughness and other channel factors are stable and the flow is readily governed by the along-channel resistance, with no large water-blocking rocks, large vortices or turbulence in the reach. Where both section control and channel control occur in a reach, the section-controlled reach is selected as the test reach; among reaches with the same control characteristics, a narrow, deep reach with larger water depth is preferred. If a test reach on a small or medium river can hardly meet the control-distance requirement, the requirement may be relaxed appropriately, provided the conditions of use of the test method are still met.
After the section is selected, its structure should be measured according to the standard requirements; the measurement data are shown in the table below, and a section structure diagram is drawn, as shown in fig. 16.
[Table: cross-section measurement data (image Figure BDA0002935922670000091 in the original, not reproduced)]
2. Calibration
Erecting a camera in the area where the flow measuring section is located, adjusting the visual field of the camera to enable the selected flow measuring section to completely enter the visual field range of the camera, then installing a calibration plate on the river surface in the visual field range of the camera, and acquiring an image of the calibration plate through the camera, as shown in fig. 17.
As shown in fig. 18, calibration establishes the correspondence between camera-plane coordinates and water-flow-plane coordinates, and from this correspondence the relation matrix can be calculated. After calibration is completed, a preset position is set for the camera (fixing its angle, focal length and the like). If the installation position or parameters of the camera change, or the water level of the water-flow plane changes greatly, calibration must be performed again.
The calculation method of the relation matrix is as follows:
in computer vision, a planar homography is defined as the projective mapping from one plane to another; the mapping of points on a two-dimensional plane onto the camera imager is thus an example of a planar homography. The mapping relation is:
q2 ∝ H·q1
where q1 = (x, y, 1)ᵀ is the homogeneous pixel coordinate in the camera image, q2 = (x′, y′, 1)ᵀ is the physical coordinate of the corresponding point on the water-flow plane, and H is the relation matrix between q1 and q2. Expanding:
[x′]   [h11 h12 h13] [x]
[y′] ∝ [h21 h22 h23] [y]
[1 ]   [h31 h32 h33] [1]
since homogeneous coordinates are defined only up to scale, the homography is independent of scale and therefore has 8 degrees of freedom; normalizing with h33 = 1 yields:
x′ = (h11·x + h12·y + h13) / (h31·x + h32·y + 1)
y′ = (h21·x + h22·y + h23) / (h31·x + h32·y + 1)
substituting the coordinates of one pair of matching points yields 2 equations:
h11·x + h12·y + h13 − h31·x·x′ − h32·y·x′ = x′
h21·x + h22·y + h23 − h31·x·y′ − h32·y·y′ = y′
in this embodiment, four matching points are selected on the calibration board for the calculation, giving the following system of linear equations:
A·h = b, where each matching point (xi, yi) → (xi′, yi′), i = 1…4, contributes the two rows

[xi  yi  1   0   0   0  −xi·xi′  −yi·xi′]
[ 0   0  0  xi  yi  1  −xi·yi′  −yi·yi′]

of the 8 × 8 matrix A, h = (h11, h12, h13, h21, h22, h23, h31, h32)ᵀ, and b stacks the corresponding xi′ and yi′ values.
The linear equation system is solved with the RANSAC algorithm or the least-squares method, and the optimal solution of the relation matrix H is obtained.
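A sketch of the calibration solve (an assumption: plain least squares over the four matching points via NumPy; in practice cv2.findHomography offers the same with RANSAC). Each pair (x, y) → (x′, y′) contributes the two equations above:

```python
import numpy as np

def solve_homography(src, dst):
    """Solve for H with h33 = 1 from matching points src -> dst.
    Four pairs give the 8 x 8 linear system A h = b from the text."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)       # restore h33 = 1

def apply_homography(H, pt):
    """Map a point through H in homogeneous coordinates."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[0] / v[2], v[1] / v[2]
```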
3. Flow rate calculation
In the flow-measuring-section image obtained by the camera, a strip-shaped flow-measuring area is delimited along the width direction of the river channel, as shown in fig. 19 and fig. 20. Within the selected flow-measuring area, the trend of the water-flow image pixels around the section is analysed across the video, the change between every 2 consecutive frames is analysed, and the displacement vector of each key point in pixel coordinates is calculated.
The displacement vector of a key point between 2 consecutive frames, in pixel coordinates, is calculated as follows:
considering the frame rate of the camera (typically 25 FPS) and the river flow velocity (typically below 5 m/s), we assume that the pixel value of a key point is constant between 2 consecutive frames. Thus a pixel I(x, y, t) of the first frame moves a distance (dx, dy) in the next frame after time dt, and since these pixels are the same we obtain:
I(x,y,t)=I(x+dx,y+dy,t+dt)
taking the first-order Taylor series approximation of the right-hand side and dividing by dt yields:
fx·u + fy·v + ft = 0
where

fx = ∂I/∂x,  fy = ∂I/∂y,  ft = ∂I/∂t,  u = dx/dt,  v = dy/dt
x and y are respectively the horizontal coordinate and the vertical coordinate of the camera picture, and t is time;
considering that the neighboring pixels around the key point have the same motion trend, the data of a 3 × 3 window are used here, so that 9 pixels satisfy the formula fx·u + fy·v + ft = 0.
The problem now becomes 9 equations in two unknown variables (u, v): the number of equations exceeds the number of unknowns, i.e. an over-determined system, whose optimum is found by the least-squares method:
[u]   [ Σi fxi²      Σi fxi·fyi ]⁻¹ [ −Σi fxi·fti ]
[v] = [ Σi fxi·fyi   Σi fyi²    ]   [ −Σi fyi·fti ]
(u, v) is the displacement vector of the key point in pixel coordinates, and the subscript i runs over the pixels of the window. The displacement vectors of the other key points in the video are calculated in the same way.
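A minimal Lucas-Kanade sketch for one key point, assuming grayscale frames as NumPy arrays (`lk_flow` is an illustrative name; production code would typically use cv2.calcOpticalFlowPyrLK instead):

```python
import numpy as np

def lk_flow(frame1, frame2, x, y):
    """Least-squares solution of the 9 equations fx*u + fy*v + ft = 0
    over the 3 x 3 window centred on the key point (x, y)."""
    f1 = frame1.astype(float)
    fy_full, fx_full = np.gradient(f1)       # spatial gradients (axis 0 = y)
    ft_full = frame2.astype(float) - f1      # temporal derivative
    win = np.s_[y - 1:y + 2, x - 1:x + 2]    # 3 x 3 window
    fx, fy, ft = (g[win].ravel() for g in (fx_full, fy_full, ft_full))
    A = np.stack([fx, fy], axis=1)           # 9 x 2 system A @ (u, v) = -ft
    uv, *_ = np.linalg.lstsq(A, -ft, rcond=None)
    return uv                                # displacement (u, v) in pixels
```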
Data are accumulated over a period of time, and the collected series of displacement vectors is converted into displacement vectors in water-flow-plane coordinates using the relation matrix H obtained in the calibration step. The calculation method is as follows:
let the pixel coordinate of a key point be (x1, y1) in the previous frame and (x2, y2) in the next frame. From the calibration relation q2 ∝ H·q1, the physical coordinates of the key point are obtained as (x1′, y1′) and (x2′, y2′), and its displacement vector in physical coordinates is:
S=(u′,v′)
where u′ = x2′ − x1′ and v′ = y2′ − y1′.
The displacement vectors, in physical coordinates, of all key points in the image are acquired and the flow-velocity direction vector L of the water flow is fitted; the displacement vector of each key point is then projected onto L to obtain its displacement D in the flow direction:
D=S·L/|L|
where S·L is the vector dot product and |L| is the modulus of L;
The time difference dt between two consecutive frames is then obtained from the video frame rate; taking 25 FPS as an example, dt = 1/25 s = 0.04 s, and the flow velocity at the corresponding point is V = D/dt. The series of flow velocities V thus obtained is collected and invalid data (e.g. velocities greater than 100 m/s) are filtered out, giving the flow velocity at each key point, as shown in fig. 21, where the ordinate represents the distance from the river bank and the abscissa represents the flow velocity. The flux through the water-flow section can then be calculated from the acquired velocity data and the selected section structure data.
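The conversion and projection steps can be sketched as follows (illustrative names; the homography H and flow-direction vector L are assumed to come from the calibration and fitting steps above):

```python
import numpy as np

def to_plane(H, p):
    """Map a pixel coordinate to water-flow-plane coordinates via H."""
    v = H @ np.array([p[0], p[1], 1.0])
    return v[:2] / v[2]

def flow_speed(H, p1, p2, L, fps=25.0):
    """Velocity of one key point tracked from pixel p1 to p2 between
    consecutive frames: physical displacement S, projection D = S.L/|L|,
    then V = D/dt with dt = 1/fps (0.04 s at 25 FPS)."""
    S = to_plane(H, p2) - to_plane(H, p1)
    D = S @ L / np.linalg.norm(L)
    return D * fps   # velocities > 100 m/s would then be filtered out
```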
The above description is only exemplary of the present invention and should not be taken as limiting, and any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A water level detection method based on machine vision is characterized by comprising the following steps:
s1, firstly, obtaining a water gauge with the following structure, wherein the water gauge comprises a water gauge body, water gauge lines are coated on the water gauge body, the water gauge lines comprise characteristic color lines coated vertically, scale mark blocks which are sequentially arranged in a staggered manner along the length direction are arranged on two sides of each characteristic color line, each scale mark block comprises three characteristic scale lines arranged at intervals, the characteristic scale lines and the characteristic color lines are coated in a red manner, and white scale lines are coated between every two adjacent characteristic scale lines, so that the scale mark blocks and the characteristic color lines form an E shape; the area between the scale mark blocks on one side is coated with blue color blocks, and the area between the scale mark blocks on the other side is coated with numbers;
s2, under various light conditions, acquiring images of the water gauge, converting the images into HSV images, and counting HSV threshold ranges of a red area and a blue area in the water gauge images;
s3, water level reading: converting the acquired water gauge image from the BGR channel into an HSV channel, calculating a binary image of the blue region according to the HSV threshold range of the blue region acquired in the step S2, and performing morphological operation on the image; obtaining the outline of a circumscribed rectangle from the binary image of the blue area to obtain a rectangular frame set of the blue area in the water gauge image; and acquiring the maximum longitudinal coordinate value and the minimum longitudinal coordinate value of each rectangular frame in the rectangular frame set of the blue region, and sequentially converting the maximum longitudinal coordinate value and the minimum longitudinal coordinate value of the rectangular frames into actual longitudinal dimensions by taking the top of a scale mark on the water gauge as a starting point according to the actual longitudinal dimensions of the blue region on the water gauge, so that the height of the water level can be read.
2. The water level detection method based on machine vision according to claim 1, characterized in that before reading the water level, the water gauge region is extracted, when extracting, the obtained water gauge image is converted from the BGR channel to the HSV channel, the binary image of the red region and the binary image of the blue region are respectively calculated according to the HSV threshold ranges of the red region and the blue region obtained in step S2, and morphological operation is performed on the images; respectively obtaining the outline of a circumscribed rectangle from the binary image of the red area and the binary image of the blue area to obtain a rectangular frame set of all red areas and a rectangular frame set of the blue area in the water gauge image; and extracting the regions where the rectangular frame set of the red region and the rectangular frame set of the blue region are located as water gauge regions.
3. The machine vision-based water level detection method according to claim 2, wherein the distance SV between two rectangular frames in the set of rectangular frames of the blue region is iteratively calculated in turn, two rectangular frames whose distance SV is smaller than a set value are merged according to the outline of their minimum bounding rectangle, and the rectangular frames connected into a long strip are taken as the rectangular frame of the blue region; SV = (S1 + S2 − SC)/S3, where S1 and S2 are respectively the areas of the two rectangular frames, SC is the intersection area of the two rectangular frames, and S3 is the area of the minimum circumscribed rectangle of the two rectangular frames.
4. The machine vision-based water level detection method as claimed in claim 3, wherein a distance SV between each rectangular frame in the rectangular set of the red region and the rectangular frame of the blue region is iteratively calculated, and the rectangular frames with the distance SV less than a set value are merged with the rectangular frame of the blue region to finally obtain the rectangular frame of the water gauge region.
5. The water level detection method based on machine vision of claim 1, wherein in step S1, a floating block is slidably sleeved on the water gauge body, an upper end surface of the floating block is higher than the water surface and perpendicular to the axis of the water gauge body, a side of the upper end surface of the floating block corresponding to the number on the water gauge is coated with a blue color block, and the blue color block extends towards the middle to a position corresponding to the characteristic color line.
6. The machine vision-based water level detecting method of claim 5, wherein the back side of the water gauge body has a rib formed along a length direction, and the floating block has a groove formed thereon corresponding to the rib.
7. The machine vision-based water level detection method of claim 1, wherein the width of the characteristic color line is greater than 20 mm.
8. A river channel monitoring method based on machine vision is characterized by comprising the water level detection method based on machine vision according to any one of claims 1 to 7.
CN202110169933.3A 2021-02-05 2021-02-05 Water level detection method and river channel monitoring method based on machine vision Active CN112884731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110169933.3A CN112884731B (en) 2021-02-05 2021-02-05 Water level detection method and river channel monitoring method based on machine vision


Publications (2)

Publication Number Publication Date
CN112884731A true CN112884731A (en) 2021-06-01
CN112884731B CN112884731B (en) 2022-03-29

Family

ID=76055985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110169933.3A Active CN112884731B (en) 2021-02-05 2021-02-05 Water level detection method and river channel monitoring method based on machine vision

Country Status (1)

Country Link
CN (1) CN112884731B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972793A (en) * 2022-06-09 2022-08-30 厦门大学 Lightweight neural network ship water gauge reading identification method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7657089B2 (en) * 2006-02-21 2010-02-02 Microsoft Corporation Automatic classification of photographs and graphics
CN203148531U (en) * 2013-03-18 2013-08-21 河海大学 Water level and water quality monitoring terminal based on machine vision


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
羊冰倩: "Research on a Liquid Level Measurement System Based on Visual Inspection Technology", Wanfang master's dissertation database *
高晓亮 et al.: "A Real-time Video Water Level Detection Algorithm Based on HSV Space", Journal of Zhengzhou University (Natural Science Edition) *



Similar Documents

Publication Publication Date Title
US11836976B2 (en) Method for recognizing seawater polluted area based on high-resolution remote sensing image and device
CN110175576B (en) Driving vehicle visual detection method combining laser point cloud data
CN107588823B (en) Water gauge water level measurement method based on dual-waveband imaging
CN108759973B (en) Water level measuring method
CN103345755B (en) A kind of Chessboard angular point sub-pixel extraction based on Harris operator
CN109764930B (en) Water gauge water line visual detection method suitable for complex illumination conditions
CN103735269B (en) A kind of height measurement method followed the tracks of based on video multi-target
CN112862898B (en) Flow velocity measuring method based on computer vision
CN106127205A (en) A kind of recognition methods of the digital instrument image being applicable to indoor track machine people
CN112013921B (en) Method, device and system for acquiring water level information based on water level gauge measurement image
CN113819974A (en) River water level visual measurement method without water gauge
CN109186706A (en) A method of for the early warning of Urban Storm Flood flooding area
CN104700071A (en) Method for extracting panorama road profile
CN108846402A (en) The terraced fields raised path through fields based on multi-source data automates extracting method
CN114639064B (en) Water level identification method and device
CN111476157B (en) Lane guide arrow recognition method under intersection monitoring environment
CN109284663A (en) A kind of sea obstacle detection method based on normal state and uniform Mixture Distribution Model
CN111539330A (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
CN112884731B (en) Water level detection method and river channel monitoring method based on machine vision
CN108022245B (en) Facial line primitive association model-based photovoltaic panel template automatic generation method
CN109961065B (en) Sea surface ship target detection method
CN114241469A (en) Information identification method and device for electricity meter rotation process
CN113269049A (en) Method for detecting handwritten Chinese character area
CN111524143B (en) Foam adhesion image region segmentation processing method
CN103438802B (en) Optical fiber coating geometric parameter measurement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant