CN114677594A - River water level intelligent identification algorithm based on deep learning - Google Patents

River water level intelligent identification algorithm based on deep learning

Info

Publication number
CN114677594A
CN114677594A (application CN202210398388.XA)
Authority
CN
China
Prior art keywords
line
marking
color
water
water level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210398388.XA
Other languages
Chinese (zh)
Inventor
朱言庆
方亮
田野
张悦
郭守飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhiyang Innovation Technology Co Ltd
Original Assignee
Zhiyang Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhiyang Innovation Technology Co Ltd filed Critical Zhiyang Innovation Technology Co Ltd
Priority to CN202210398388.XA
Publication of CN114677594A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18 Status alarms
    • G08B21/182 Level alarms, e.g. alarms responsive to variables exceeding a threshold

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a deep-learning-based intelligent river water level recognition algorithm comprising the following steps: N marking lines of different colors are arranged on the river bank; images of the marking lines and the nearby water area are acquired, an image sample library of the marking lines and the nearby water area is established, and the marking lines, their reflections, and the water surface contour are annotated; a Mask RCNN model is constructed and trained with the image sample library; real-time images of the marking lines and the nearby water area are acquired, and the trained Mask RCNN model segments the marking lines and the water surface in the currently acquired images; the current water level condition is then judged from the segmentation result. The method can improve both the real-time performance and the accuracy of river water level recognition.

Description

River water level intelligent identification algorithm based on deep learning
Technical Field
The invention relates to the technical field of intelligent water conservancy monitoring, in particular to an intelligent river water level identification algorithm based on deep learning.
Background
Water conservancy is part of the national economic infrastructure, and recognizing the water level of rivers, reservoirs, small reservoirs, dykes, and sluices is an important link in flood control and drought relief. Most water level monitoring equipment on the market relies on signal-sensing technologies such as laser and radar; it is expensive, deployed only sparsely, and unsuitable for large-scale popularization. Many water areas in China lack water level detection equipment, or have only a monitoring camera near the water area, with operation and maintenance personnel periodically checking the images the camera uploads to a back end and reading the water level by eye, which wastes time and labor.
In recent years, with the rapid development of deep learning techniques, computer vision techniques based on deep learning are beginning to be widely used in various industries.
A water level detection method based on the fusion of Faster R-CNN and GrabCut is disclosed in a master's thesis by Zhao Na of Hebei GEO University. The method has the following defects. First, when Faster R-CNN locates the water level gauge, handling of the gauge's reflection in the water surface depends entirely on the performance of the Faster R-CNN network; on a calm, clear water surface the reflection closely resembles the gauge itself, so Faster R-CNN easily misidentifies the reflection as the gauge, the gauge is located with a large deviation, GrabCut segments it poorly, and the water level reading carries a large error. Second, the algorithm must establish and fix the mapping between pixel coordinates and world coordinates in advance, while in practice the relative position of the gauge and the camera drifts, for example when the camera tilts downward under gravity or the preset position of a dome camera resets imprecisely; the pixel-to-world mapping then changes, and computing the water level with the original mapping produces a deviation.
Chinese patent application 202110134842.6 discloses a method and system for automatically reading the water level from water gauge images based on the Mask RCNN algorithm. The method has the following disadvantages. First, it does not consider the influence of the gauge's reflection: on a calm, clear water surface the reflection closely resembles the gauge itself, and Mask RCNN easily misidentifies the reflection as the gauge, introducing large errors into gauge detection and segmentation and hence into the gauge reading. Second, it records the coordinates of the gauge's four corner points in the image; in practice, when the camera tilts downward under gravity or the preset position of a dome camera resets imprecisely, those corner coordinates drift, and continuing to use the originally set parameters introduces errors into the reading.
Chinese patent application 201910536834.7 discloses a deep-learning-based water gauge recognition method that considers the influence of the gauge's water surface reflection on positioning accuracy, but its treatment of the reflection has low universality: the water surface is recognized by color binarization, with the color judged in the Lab color space. In practice, under complex lighting, weather, and background conditions the water surface is hard to recognize by color, so the accuracy of this approach is not high.
Therefore, a river water level recognition algorithm that improves water level recognition accuracy needs to be designed.
Disclosure of Invention
Aiming at the current situation of water level detection in the water conservancy field, the invention discloses a deep-learning-based intelligent river water level recognition algorithm that realizes high-precision recognition of the water level of rivers, reservoirs, small reservoirs, embankments, sluices, and other water areas.
In order to achieve the purpose, the invention adopts the following technical scheme:
An intelligent river water level recognition algorithm based on deep learning comprises the following steps:
s1, sequentially arranging N marking lines with different colors from top to bottom on the river bank; wherein N is a positive integer.
S2, obtaining images of the marking lines and the nearby water area through a monitoring camera installed on the river reach, establishing a sample library of marking line and nearby water area images, and annotating the marking lines, the reflections of the marking lines, and the water surface contour in each image of the sample library, obtaining an annotated sample library of marking line and nearby water area images.
S3, constructing a Mask RCNN model, and training the Mask RCNN model by using the marked mark line and the water area image sample library near the mark line to obtain the trained Mask RCNN model.
S4, acquiring real-time images of the marking lines and the nearby water area through the monitoring camera installed on the river reach, and segmenting the marking lines and the water surface in the currently acquired images with the trained Mask RCNN model.
S5, judging the current water level condition from the segmentation result of step S4: if not all color types of marking lines are segmented, issuing an alarm of the corresponding level according to the colors of the missing marking lines; if marking lines of all color types are segmented, calculating the current water level of the water surface with a graphical algorithm and mathematical formulas.
Further, the data in the annotated marking line and nearby water area image sample library are randomly divided into a training set, a verification set, and a test set at a ratio of 8:1:1; the Mask RCNN model is trained with the training set, its optimal weights are screened with the verification set, and the trained Mask RCNN model is tested with the test set. When the recognition rate of the trained Mask RCNN model on the test set reaches 95%, the model is considered qualified; otherwise the training hyper-parameters are adjusted and the model is retrained.
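As a concrete illustration of the split described above, a minimal Python sketch is given below; it shuffles a list of annotated sample paths and cuts it 8:1:1. The function name and the seed are illustrative, not taken from the patent.

```python
import random

def split_samples(sample_paths, seed=42):
    """Randomly split annotated samples into train/val/test at 8:1:1."""
    paths = list(sample_paths)
    random.Random(seed).shuffle(paths)  # reproducible random order
    n = len(paths)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = paths[:n_train]
    val = paths[n_train:n_train + n_val]
    test = paths[n_train + n_val:]
    return train, val, test
```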
Further, if the value of N is 3 and the three colors from top to bottom are color A, color B, and color C, the step "if not all color types of marking lines are segmented, issue an alarm of the corresponding level according to the colors of the missing marking lines" specifically comprises the following:
If the color-C marking line is not segmented, the water level is judged to be above the color-C marking line, and a color-C alarm is executed; if the color-C and color-B marking lines are not segmented, the water level is judged to be above the color-B marking line, and a color-B alarm is executed; if the color-C, color-B, and color-A marking lines are not segmented, the water level is judged to be above the color-A marking line, and a color-A alarm is executed. The color-A, color-B, and color-C alarms represent alarms of different levels.
Further, the step of calculating the water level of the current water surface by using a graphical algorithm and a mathematical formula specifically comprises the following steps:
s51, removing the reflection of the marking line in the water surface
Acquire the area of the region enclosed by the contour of each of the N colored marking lines and the area of the region enclosed by the water surface contour; for each colored marking line, compute the area of the intersection between the region enclosed by its contour and the region enclosed by the water surface contour, take the ratio of that intersection area to the area of the region enclosed by the marking line's contour, and reject marking lines whose area ratio exceeds 0.5.
Considering the execution speed of the algorithm, the area ratio is estimated roughly by sampling, with the following specific steps (see the sketch after these steps):
First, uniformly sample 9 points on the contour of the marking line.
Second, judge with a ray method whether each point lies inside the region enclosed by the water surface contour.
Third, count the number of the 9 points lying inside the region enclosed by the water surface contour, denoted n; the area ratio is then approximated as n/9.
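A minimal sketch of this sampling test follows, assuming contours in the (N, 1, 2) layout returned by cv2.findContours. cv2.pointPolygonTest stands in for the ray method (it performs the same point-in-polygon decision), and the function name is illustrative.

```python
import numpy as np
import cv2

def reflection_area_ratio(line_contour, water_contour, n_samples=9):
    """Approximate the S51 area ratio: sample n_samples points uniformly on
    the marking-line contour and count those inside the water-surface region."""
    pts = line_contour.reshape(-1, 2)
    idx = np.linspace(0, len(pts) - 1, n_samples).astype(int)  # uniform sampling
    inside = 0
    for x, y in pts[idx]:
        # pointPolygonTest returns > 0 when the point lies inside the polygon
        if cv2.pointPolygonTest(water_contour, (float(x), float(y)), False) > 0:
            inside += 1
    return inside / n_samples

# A marking line whose ratio exceeds 0.5 is treated as a reflection:
# kept = [c for c in line_contours if reflection_area_ratio(c, water_contour) <= 0.5]
```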
S52, according to the segmentation result, obtain the minimum circumscribed rotated rectangle of the contour of each of the three colored marking lines A, B, and C. The minimum circumscribed rotated rectangle is obtained by calling the minAreaRect function of the computer vision and machine learning library OpenCV.
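The call itself looks like the following sketch; the synthetic mask here is a stand-in for one marking line's segmentation mask from the model.

```python
import numpy as np
import cv2

# Stand-in mask for one marking line (a filled horizontal bar).
mask = np.zeros((200, 200), np.uint8)
cv2.rectangle(mask, (40, 90), (160, 110), 255, -1)

# OpenCV 4.x return signature: (contours, hierarchy).
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rect = cv2.minAreaRect(contours[0])   # ((cx, cy), (w, h), angle)
corners = cv2.boxPoints(rect)         # the 4 vertices of the rotated rectangle
```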
S53, obtain one long side of each of the minimum circumscribed rotated rectangles of the three colored marking lines A, B, and C, three sides in total.
S54, compute the unit direction vectors of the three sides acquired in step S53 such that the signs of the components of the three unit direction vectors on the x axis are the same; compute the mean vector of the three unit vectors, take the mean vector as the horizontal vector, and take the unit vector orthogonal to the horizontal vector as the vertical vector, denoted v = (x_v, y_v).
The calculation method of the unit direction vector comprises the following steps:
assuming that (x1, y1), (x2, y2) are two end points of an edge, the unit direction vector of the edge is:
((x2 - x1)/len, (y2 - y1)/len), where len = sqrt((x2 - x1)^2 + (y2 - y1)^2)
the calculation method of the mean vector comprises the following steps:
Let e1: (x1, y1), e2: (x2, y2), e3: (x3, y3) be three unit vectors; the mean vector of the three unit vectors is:
((x1+x2+x3)/3,(y1+y2+y3)/3)
the calculation method of the vertical vector comprises the following steps:
the unit vector orthogonal to vector (x, y) is:
(-y/sqrt(x^2 + y^2), x/sqrt(x^2 + y^2))
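Putting steps S53 and S54 together, a NumPy sketch under the same formulas (the function names are illustrative):

```python
import numpy as np

def unit_direction(p1, p2):
    """Unit direction vector of the edge (p1, p2), per the formula above."""
    v = np.asarray(p2, float) - np.asarray(p1, float)
    return v / np.linalg.norm(v)

def vertical_vector(long_sides):
    """Mean of the long-side unit vectors (x signs aligned), then the
    orthogonal unit vector, as in step S54. long_sides is a list of
    (p1, p2) endpoint pairs."""
    units = []
    for p1, p2 in long_sides:
        u = unit_direction(p1, p2)
        if u[0] < 0:              # make the x components share one sign
            u = -u
        units.append(u)
    h = np.mean(units, axis=0)    # horizontal (mean) vector
    x, y = h / np.linalg.norm(h)
    return np.array([-y, x])      # unit vector orthogonal to (x, y)
```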
and S55, obtaining the central point Ca of the minimum external rotation rectangle of the A color marking line, namely the central point of the A color marking line.
Assuming that (x1 ', y 1'), (x2 ', y 2'), (x3 ', y 3'), (x4 ', y 4') are coordinates of four vertices of the rotated rectangle, coordinates of the center point Ca of the rotated rectangle are (x _ c, y _ c), wherein,
x_c=(x1’+x2’+x3’+x4’)/4
y_c=(y1’+y2’+y3’+y4’)/4。
and S56, acquiring a straight line L which passes through the center point Ca of the color A marking line and is parallel to the vertical vector v, solving the intersection point of the straight line L and the water surface contour line, and acquiring the point P with the minimum ordinate in all the intersection points.
Assuming that the coordinates of the center point Ca are (x _ c, y _ c) and the vertical vector v is (x _ v, y _ v), the equation of the straight line L passing through the center point Ca and parallel to the vertical vector v is:
A*x+B*y+C=0
wherein the content of the first and second substances,
A=y_v
B=-x_v
C=x_v*y_c-y_v*x_c
Let the equation of the straight line L be A*x + B*y + C = 0, and let the water surface contour be a set of points ordered clockwise or counterclockwise, denoted [pt1, pt2, ..., ptn], where the first point coincides with the last. The intersections of the straight line L with the water surface contour are then calculated as follows:
and traversing the line segment [ pti, pti +1] on the contour line of the water surface, wherein i is 1, 2.
The intersection of the straight line L: A*x + B*y + C = 0 with the segment [(xa, ya), (xb, yb)] is calculated as follows (a code sketch follows step 3):
step 1: judging whether the two end points (xa, ya), (xb, yb) of the line segment are on the straight line L, that is, checking whether the following two equations hold,
A*xa+B*ya+C=0
A*xb+B*yb+C=0
If either equation holds, the corresponding endpoint is an intersection of the line and the segment; otherwise go to step 2.
Step 2: judging whether the two end points (xa, ya) and (xb, yb) of the line segment are on the same side of the straight line L, namely judging whether the following inequality is true,
(A*xa+B*ya+C)*(A*xb+B*yb+C)>0
if the inequality is true, two end points of the line segment are on the same side of the straight line, and the straight line and the line segment have no intersection point; if the inequality is not true, the two end points are on different sides of the straight line, the straight line and the line segment have an intersection point, and the step 3 is switched;
Step 3: compute the functional relation of the straight line on which the segment lies, A2*x + B2*y + C2 = 0, where
A2=yb-ya
B2=xa-xb
C2=xb*ya-xa*yb
The intersection (x0, y0) of the line L and the line segment is determined by the following formula
x0=(C2*B-C*B2)/(A*B2-A2*B)
y0=(C*A2-C2*A)/(A*B2-A2*B)
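The three numbered steps above translate directly into the following Python sketch; the function name and the eps tolerance used for the on-the-line test of step 1 are illustrative additions, and the trailing comments show how the coefficients of L follow from Ca and v per the equations of step S56.

```python
def line_contour_intersections(A, B, C, contour_pts, eps=1e-9):
    """Intersections of the line A*x + B*y + C = 0 with a closed contour
    given as an ordered point list whose first and last points coincide."""
    hits = []
    for (xa, ya), (xb, yb) in zip(contour_pts, contour_pts[1:]):
        fa = A * xa + B * ya + C
        fb = A * xb + B * yb + C
        if abs(fa) < eps:                 # step 1: endpoint lies on the line
            hits.append((xa, ya))
        if abs(fb) < eps:
            hits.append((xb, yb))
        if fa * fb > 0 or abs(fa) < eps or abs(fb) < eps:
            continue                      # step 2: same side, no crossing
        A2, B2 = yb - ya, xa - xb         # step 3: line through the segment
        C2 = xb * ya - xa * yb
        den = A * B2 - A2 * B
        hits.append(((C2 * B - C * B2) / den, (C * A2 - C2 * A) / den))
    return hits

# For the line through Ca = (x_c, y_c) parallel to v = (x_v, y_v):
#   A, B, C = y_v, -x_v, x_v * y_c - y_v * x_c
# P is the intersection with the smallest ordinate:
#   P = min(line_contour_intersections(A, B, C, contour), key=lambda p: p[1])
```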
S57, calculate the distance d(Ca, P) from point Ca to point P, i.e., the pixel distance from the water surface to the center point of the color-A marking line.
With Ca = (x_c, y_c) and P = (x0, y0), the distance from point Ca to point P is:
d(Ca, P) = sqrt((x_c - x0)^2 + (y_c - y0)^2)
S58, calculate the physical distance corresponding to a single pixel of the real-time image of the marking lines and the nearby water area with the following formula:
d_per_pixel=A_w_cm/A_w_pixel
wherein d_per_pixel is the physical distance corresponding to a unit pixel distance, in cm; A_w_cm is the actual width of the color-A marking line, in cm, measured when the marking line is placed on the river bank; A_w_pixel is the length of the short side of the color-A marking line's minimum circumscribed rotated rectangle, in pixels;
s59, calculating the physical distance d (water, A) from the center point Ca of the marking line of the color A to the water surface along the vertical direction by adopting the following formula:
d(water,A)=d(Ca,P)×d_per_pixel
s510, calculating the water level of the current water surface by adopting the following formula:
water_level=A_level-d(water,A)-A_w_cm/2
wherein water_level is the water level of the current water surface, A_level is the water level at the upper edge of the color-A marking line, d(water, A) is the physical distance from the center point Ca of the color-A marking line to the water surface along the vertical direction, and A_w_cm is the measured actual width of the color-A marking line.
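Steps S57 through S510 chain into a few lines of arithmetic; the sketch below reproduces them, and the commented call checks the figures of embodiment 1 (the function name is illustrative).

```python
import math

def water_level_cm(Ca, P, A_w_pixel, A_w_cm, A_level):
    """S57-S510: pixel distance Ca->P, cm-per-pixel scale, vertical
    physical distance to the water surface, then the water level."""
    d_ca_p = math.dist(Ca, P)                 # S57: pixel distance
    d_per_pixel = A_w_cm / A_w_pixel          # S58: cm per pixel
    d_water_A = d_ca_p * d_per_pixel          # S59: physical distance
    return A_level - d_water_A - A_w_cm / 2   # S510: current water level

# With the numbers of embodiment 1:
# water_level_cm((1063.5, 466.5), (1063.5, 926.5), 27, 15, 300) ~= 36.94 cm
```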
With the above technique, the Mask RCNN model trained on a large number of samples segments the three colored water level marking lines A, B, C and the water surface with high accuracy, and the water surface segmentation allows the reflections of the marking lines to be rejected, eliminating their influence on the algorithm. In addition, the algorithm tolerates slight camera drift in use: as long as the water level marking lines and the water surface remain in the camera's field of view, recognition proceeds without loss of accuracy. While outputting the water level, the algorithm also realizes three alarm levels A, B, C by recognizing the three marking lines; the level alarm is not affected by the water level recognition precision, which, compared with the prior art, adds a safeguard and improves the practicability of the algorithm.
Drawings
FIG. 1 is a flow chart of a method of the intelligent recognition algorithm of the present invention;
fig. 2 is an image of a water area near the marker line acquired in step S2 in embodiment 1 of the present invention;
FIG. 3 is a contour line of the label described in step S3 in embodiment 1 of the present invention;
fig. 4 shows the center point Ca of the mark line of color a, the straight line L, and the intersection point P between the straight line L and the water surface profile calculated in step S5 in embodiment 1 of the present invention;
fig. 5 is an image of the water area near the marker line acquired in step S2 in embodiment 2 of the present invention;
fig. 6 is an outline of the A, B, C color mark line and the water surface divided in step S3 in embodiment 2 of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
Fig. 1 shows the deep-learning-based intelligent river water level recognition algorithm, which comprises the following steps:
s1, sequentially arranging N marking lines with different colors from top to bottom on the river bank; wherein N is a positive integer.
S2, obtaining images of the marking lines and the nearby water area through a monitoring camera installed on the river reach, establishing a sample library of marking line and nearby water area images, and annotating the marking lines, the reflections of the marking lines, and the water surface contour in each image of the sample library, obtaining an annotated sample library of marking line and nearby water area images.
S3, constructing a Mask RCNN model, and training the Mask RCNN model by using the marked mark line and the water area image sample library near the mark line to obtain the trained Mask RCNN model.
The network structure of the conventional Mask RCNN algorithm comprises five modules: a feature extraction module, a candidate region extraction module (RPN), a ROIAlign module, a classification module, and a mask module.
The feature extraction module generates feature maps and comprises a convolutional neural network (CNN) and a feature pyramid network (FPN). The input of the module is an image, and the output is feature maps of the image at various sizes. The CNN extracts high-dimensional features of the image through a series of convolution and pooling operations; the CNN used here is ResNet-101.
The FPN is a pyramid-shaped network structure. It processes a deep CNN feature map, through deconvolution or upsampling, into a feature map of the same size as a shallower CNN feature map, then fuses the two same-size feature maps of different depths; the fused feature map contains both the highly abstract features and the detailed information of the image, giving it stronger feature expression capability. Finally, the fused feature maps of different sizes are output to the downstream modules for the corresponding operations.
The candidate region extraction module (RPN) extracts foreground frames and comprises a convolution layer, a category regression submodule, a frame regression layer, and a final non-maximum suppression (NMS) operation. The convolution layer of the RPN performs a 3 × 3 convolution on the feature map output by the feature extraction module to obtain a 256-channel feature map.
Each point on the feature map (a 256-dimensional vector) is called an anchor point, and each anchor point generates 9 anchor frames, obtained by combining 3 frame sizes with 3 aspect ratios. If the feature map size is H × W × 256, the RPN presets H × W × 9 anchor frames. The category regression submodule of the RPN classifies the anchor frames, here into the two classes of foreground frame and background frame. Specifically, a 1 × 1 convolution is performed on the feature map output by the RPN convolution layer to obtain a feature map of size H × W × 18, where 18 = 9 × 2, 9 being the anchor frames of each anchor point and 2 the two categories an anchor frame can belong to; the score of each category is then obtained through reshape and softmax operations. The frame regression layer of the RPN fine-tunes the anchor frame coordinates so that the frames fit the targets more closely. Specifically, a 1 × 1 convolution is performed on the feature map output by the RPN convolution layer to obtain a feature map of size H × W × 36, where 36 = 9 × 4, 9 being the anchor frames of each anchor point and 4 the offsets of the 4 anchor frame coordinate parameters (x, y, w, h), with (x, y) the center point of the anchor frame and w, h its width and height. The final non-maximum suppression (NMS) operation of the RPN filters the foreground frames generated by the RPN and rejects overlapping frames. Specifically, the foreground frames generated by the RPN are sorted by score from high to low and the first 12000 are taken; the intersection-over-union of every pair is computed, and if it exceeds 0.7 the lower-scoring frame is rejected; the remaining frames are sorted by score again and the first 1200 are kept. The intersection-over-union of two foreground frames is computed as follows:
IoU(box1, box2) = area(box1 ∩ box2) / (area(box1) + area(box2) - area(box1 ∩ box2))
where box1 and box2 are two foreground frames, box1 ∩ box2 denotes their intersection region, and area(·) denotes the area of a region.
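A minimal sketch of this computation for axis-aligned frames, assuming the (x1, y1, x2, y2) corner convention (an assumption; the patent does not fix a box format):

```python
def iou(box1, box2):
    """Intersection-over-union of two frames (x1, y1, x2, y2), as used by
    the RPN's non-maximum suppression."""
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # intersection area
    a1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    a2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (a1 + a2 - inter)
```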
The ROIAlign module replaces the ROIPooling operation and solves the region mismatch (mis-alignment) caused by the two quantizations in ROIPooling. ROIPooling maps the candidate frames generated by the RPN layer to a fixed-size feature map, with the following specific steps:
a. and mapping the candidate frame to the feature map to obtain a candidate frame feature region, and quantizing the boundary of the feature region into integer point coordinate values.
b. The candidate frame feature region is divided into k × k cells, and the boundaries of the cells are quantized to integer point coordinate values.
c. Max firing was done inside each cell to get a fixed size profile.
Steps a and b above each perform one quantization.
ROIAlign uses bilinear interpolation to obtain the pixel values at floating-point coordinates, replacing the quantization operations in ROIPooling, with the following specific steps (a sampling sketch follows these steps):
a. and mapping the candidate frame to the feature map to obtain a candidate frame feature region, and keeping the floating point number boundary of the feature region from being quantized.
b. And dividing the candidate frame feature region into k multiplied by k units, and keeping the floating point number boundary of the unit not to be quantized.
c. Fixed four coordinate positions are calculated in each cell, the values of the four positions are calculated by a bilinear interpolation method, and then the maximum pooling operation is performed.
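The bilinear sampling at the heart of step c can be sketched as follows, assuming a 2-D single-channel feature map and coordinates at least one cell away from the border (border handling omitted for brevity):

```python
import numpy as np

def bilinear_sample(feature_map, x, y):
    """Value of a 2-D feature map at floating-point (x, y), the
    interpolation ROIAlign uses instead of quantizing to integers."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    dx, dy = x - x0, y - y0
    return (feature_map[y0, x0] * (1 - dx) * (1 - dy)
            + feature_map[y0, x1] * dx * (1 - dy)
            + feature_map[y1, x0] * (1 - dx) * dy
            + feature_map[y1, x1] * dx * dy)
```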
The classification module has two functions: classifying the candidate frames and regressing their boundaries. Specifically: after ROIAlign produces a fixed-size feature map for a candidate frame, the map passes through the module's fully connected layer and then splits into two branches, one predicting the category of the candidate frame and the other regressing its boundary.
The mask module segments the object inside the candidate frame. Specifically: after ROIAlign produces a fixed-size feature map for a candidate frame, several convolutions of the module are applied, followed by a deconvolution, yielding a 28 × 28 × 4 mask. Finally, combined with the candidate frame category predicted by the classification module, the mask of the channel corresponding to that category is taken as the frame's final segmentation result.
S4, acquiring real-time images of the marking lines and the nearby water area through the monitoring camera installed on the river reach, and segmenting the marking lines and the water surface in the currently acquired images with the trained Mask RCNN model.
S5, judging the current water level condition from the segmentation result of step S4: if not all color types of marking lines are segmented, issuing an alarm of the corresponding level according to the colors of the missing marking lines; if marking lines of all color types are segmented, calculating the current water level of the water surface with a graphical algorithm and mathematical formulas.
The invention adopts the Mask RCNN instance segmentation algorithm to segment the water level marking lines and the water surface, which improves the accuracy of marking line and water surface segmentation, exploits the advantages of deep-learning-based computer vision to the fullest, and improves the accuracy of river water level recognition. The invention can calculate the water level of the water surface from the river water level marking line and water surface segmentation results. The invention also provides a river water level alarm strategy: the alarm level is returned when the river water level reaches an alarm height, and otherwise the water level and its height difference to each alarm level are returned.
Example 1
A river channel in the high-tech zone of a certain city is selected as the test point for water level detection. The specific steps are as follows:
S1: paste the three colored marking lines A, B, C on the river bank in order from top to bottom. Record the water level at the upper edge of the color-A marking line, 300 cm, as A_level, and record the measured actual width of the color-A marking line, 15 cm, as A_w_cm. In the present embodiment, A denotes red, B denotes yellow, and C denotes blue.
S2: a camera is erected near the river bank to monitor the water area near the marking lines; video is acquired in real time and sent to a server, and images of the water area near the marking lines are obtained by extracting frames from the video stream, as shown in fig. 2.
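Frame extraction from such a stream can be sketched with OpenCV as follows; the rtsp_url parameter is a placeholder, since the patent does not specify the stream protocol or address.

```python
import cv2

def grab_frame(rtsp_url):
    """Pull one frame from the camera's video stream (the frame-extraction
    step described above)."""
    cap = cv2.VideoCapture(rtsp_url)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read a frame from the stream")
    return frame  # BGR image passed on to the segmentation model
```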
S3: segment the three marking lines A, B, C and the water surface with the Mask RCNN-based segmentation algorithm, with the following specific steps:
s3.1: 5000 images of a water area near a mark line with a resolution of 1920 x 1080 are obtained through cameras erected at different river reach, and a data set is expanded through rotation, Gaussian noise addition and other modes to obtain a sample data set.
S3.2: label the sample data set, with the following specific steps (a JSON-parsing sketch follows these steps):
a. Download, install, and open the image annotation tool labelme.
b. Click the "File -> Open Dir" button in the upper right corner of labelme, select the folder where the images are located in the pop-up dialog box, and the first image of the folder is displayed in the labelme working area.
c. Click the "Create Polygons" button in the left toolbar of labelme, create a polygon along the edge of the color-A marking line in the image, and name the polygon red; in the same way, create polygons along the edges of the B and C marking lines and the water surface, named yellow, blue, and water in that order. Note that the reflection of a marking line in the water surface is not distinguished from the marking line: a polygon is also created along the edge of the reflection and given the same name as the marking line.
d. After polygons have been created for all A, B, C colored marking lines and the water surface, click the "Save" button in the toolbar on the left side of labelme to save the created polygon information as a json file whose name matches the image name; labeling of the current image is then complete.
e. After the current image is labeled, click the "Next Image" button in the left toolbar of labelme to label the next image.
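To turn these annotations into training masks, the saved files can be parsed as below; the sketch assumes the standard labelme JSON layout (a "shapes" list with "label" and "points" fields), and the function name is illustrative.

```python
import json
import numpy as np
import cv2

def polygons_from_labelme(json_path):
    """Read one labelme annotation file saved in step d and return the
    named polygons (red / yellow / blue / water) as label -> point arrays."""
    with open(json_path, encoding="utf-8") as f:
        ann = json.load(f)
    polys = {}
    for shape in ann["shapes"]:
        pts = np.array(shape["points"], dtype=np.int32)
        polys.setdefault(shape["label"], []).append(pts)
    return polys

# Rasterize one class into a binary mask, e.g. the water surface:
# mask = np.zeros((1080, 1920), np.uint8)
# for pts in polygons_from_labelme("img_0001.json").get("water", []):
#     cv2.fillPoly(mask, [pts], 255)
```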
S3.3: construct the Mask RCNN model and train it. Specifically: randomly divide the data labeled in step S3.2 into a training set, a verification set, and a test set at a ratio of 8:1:1, train the Mask RCNN-based segmentation algorithm with the training set, screen the optimal weights of the algorithm with the verification set, and test the trained algorithm with the test set. When the recognition rate of the trained algorithm on the test set reaches 95%, the algorithm is considered qualified; otherwise the training hyper-parameters are adjusted and the algorithm is retrained.
S3.4: apply the Mask RCNN-based segmentation algorithm trained in step S3.3 to the image of the water area near the marking lines under test, segmenting the water level marking lines and the water surface to obtain their contours, as shown in fig. 3.
S4: since all three marking line contours were segmented, go directly to the next step.
S5: calculating the water level of the current water surface through a series of graphical algorithms and mathematical formulas, and specifically comprising the following steps:
s5.1: eliminating the reflection of the marking line in the water surface, which comprises the following steps: and respectively calculating the ratio of the intersection area of the area surrounded by the contour line of each mark line and the area surrounded by the water surface contour line to the area of the area surrounded by the mark line, and rejecting the mark lines with the area ratio being more than 0.5.
By this calculation, the areas of the intersections between the regions enclosed by the contours of the three marking lines segmented in step S4 and the region enclosed by the water surface contour are all 0, so no segmented marking line is rejected here.
S5.2: according to the segmentation result of step S3, obtain the minimum circumscribed rotated rectangle of the contour of each colored marking line, i.e., the minimum circumscribed rotated rectangles of the three marking lines A, B, C, with the following results:
the minimum external rotation rectangle four vertexes of the A color marking line are as follows: (1012,453), (1115,453), (1115,480), (1012,480).
The minimum external rotation rectangle four vertexes of the B color mark line are as follows: (1012, 524), (1115, 524), (1115,551), (1012,551).
The minimum external rotation rectangle four vertexes of the C color mark line are as follows: (1012,588), (1115,588), (1115,615), (1012,615).
S5.3: obtain one long side of each of the minimum circumscribed rotated rectangles of the three marking lines A, B, C, three sides in total.
S5.4: compute the unit direction vectors of the three sides obtained in S5.3, ensuring that the signs of the components of the three unit direction vectors on the x axis agree; compute the mean vector of the three, take it as the horizontal vector, and take the unit vector orthogonal to it as the vertical vector; the resulting vertical vector is v = (0, 1).
S5.5: obtaining the central point Ca of the minimum external rotation rectangle of the A color marking line, namely the central point of the A color marking line, wherein the result is as follows:
Ca:(1063.5,466.5)。
s5.6: the straight line L parallel to the vertical vector v and passing through the center point Ca of the mark line of color a is calculated, and the result is as follows:
L:x=1063.5。
calculating the intersection point of the straight line L and the water surface contour line, and acquiring the point with the minimum vertical coordinate in all the intersection points, wherein the result is as follows:
P:(1063.5,926.5)。
s5.7: calculating the distance d (Ca, P) from the point Ca to the point P, namely the pixel distance from the water surface to the center point of the marking line of the color type A, and the result is as follows:
d(Ca,P)=460。
s5.8: the physical distance corresponding to a single pixel of the image is calculated as follows,
d_per_pixel=A_w_cm/A_w_pixel
=15/d((1012,480),d(1012,453))
=15/(480-453)
=0.555(cm/pixel)
wherein d_per_pixel is the physical distance corresponding to a unit pixel distance, in cm; A_w_cm is the actual width of the color-A marking line, in cm, measured when the marking line was pasted on the river bank; A_w_pixel is the length of the short side of the color-A marking line's minimum circumscribed rotated rectangle, in pixels.
S5.9: the physical distance from the center point of the color-A marking line to the water surface along the vertical direction is calculated by the following formula:
d(water, A) = d(Ca, P) × d_per_pixel
= 460 × 0.555
= 255.56 (cm)
wherein d (water, A) is the physical distance from the water surface to the A color marking line along the vertical direction.
S5.10: and calculating the water level of the current water surface, wherein the calculation formula is as follows:
water_level=A_level-d(water,A)-A_w_cm/2
=300-255.56-15/2
=36.94(cm)
wherein, water _ level is the water level of the current water surface, a _ level is the water level of the upper edge of the mark line of color a, d (water, a) is the physical distance from the center point of the mark line of color a to the water surface along the vertical direction obtained by calculation in S5.9, and a _ w _ cm is the actual width of the mark line of color a obtained by measurement.
Example 2
A river channel in the high-tech zone of a certain city is selected as the test point for water level detection.
S1: paste the A, B, C marking lines on the river bank in order from top to bottom; record the water level at the upper edge of the color-A marking line, 100 cm, as A_level; the measured actual width of the color-A marking line is 15 cm, recorded as A_w_cm.
S2: a camera is erected near the river bank to monitor the water area near the marking lines; video is acquired in real time and sent to a server, and images of the water area near the marking lines are obtained by extracting frames from the video stream, as shown in fig. 5.
S3: segment the three marking lines A, B, C and the water surface with the Mask RCNN-based segmentation algorithm, with the following specific steps:
s3.1: 5000 images of a water area near a mark line with a resolution of 1920 x 1080 are obtained through cameras erected at different river reach, and a data set is expanded through rotation, Gaussian noise addition and other modes to obtain a sample data set.
S3.2: label the sample data set, with the following specific steps:
a. Download, install, and open the image annotation tool labelme.
b. Click the "File -> Open Dir" button in the upper right corner of labelme, select the folder where the images are located in the pop-up dialog box, and the first image of the folder is displayed in the labelme working area.
c. Click the "Create Polygons" button in the left toolbar of labelme, create a polygon along the edge of the color-A marking line in the image, and name the polygon red; in the same way, create polygons along the edges of the B and C marking lines and the water surface, named yellow, blue, and water in that order. Note that the reflection of a marking line in the water surface is not distinguished from the marking line: a polygon is also created along the edge of the reflection and given the same name as the marking line.
d. After polygons have been created for all A, B, C marking lines and the water surface, click the "Save" button in the toolbar on the left side of labelme to save the created polygon information as a json file whose name matches the image name; labeling of the current image is then complete.
e. After the current image is labeled, click the "Next Image" button in the left toolbar of labelme to label the next image.
S3.3: construct the Mask RCNN model and train it. Specifically: randomly divide the data labeled in step S3.2 into a training set, a verification set, and a test set at a ratio of 8:1:1, train the Mask RCNN model with the training set, screen the optimal weights of the algorithm with the verification set, and test the trained algorithm with the test set. When the recognition rate of the trained algorithm on the test set reaches 95%, the algorithm is considered qualified; otherwise the training hyper-parameters are adjusted and the algorithm is retrained.
S3.4: apply the Mask RCNN-based segmentation algorithm trained in step S3.3 to the image of the water area near the marking lines under test, segmenting the water level marking lines and the water surface to obtain their contours, as shown in fig. 6.
S4: since only the color-A and color-B marking lines were segmented and the color-C marking line was not, the color-C alarm is executed.
The above embodiments merely illustrate preferred embodiments of the present invention and do not limit its scope; various modifications and improvements made by those skilled in the art to the technical solution of the present invention without departing from its spirit shall fall within the protection scope defined by the claims of the present invention.

Claims (8)

1. An intelligent river water level recognition algorithm based on deep learning, characterized by comprising the following steps:
s1, sequentially arranging N marking lines with different colors from top to bottom on the river bank; wherein N is a positive integer;
s2, acquiring images of the marking lines and the nearby water area through a monitoring camera installed on the river reach, establishing a sample library of marking line and nearby water area images, and annotating the marking lines, the reflections of the marking lines, and the water surface contour in each image of the sample library, obtaining an annotated sample library of marking line and nearby water area images;
s3, constructing a Mask RCNN model, and training the Mask RCNN model by using the marked mark line and a water area image sample library near the mark line to obtain a trained Mask RCNN model;
s4, acquiring real-time images of the marking lines and the nearby water area through the monitoring camera installed on the river reach, and segmenting the marking lines and the water surface in the currently acquired images with the trained Mask RCNN model;
s5, judging the current water level condition from the segmentation result: if not all color types of marking lines are segmented, issuing an alarm of the corresponding level according to the colors of the missing marking lines; if marking lines of all color types are segmented, calculating the current water level of the water surface with a graphical algorithm and mathematical formulas.
2. The intelligent river water level recognition algorithm based on deep learning according to claim 1, characterized in that: the data in the annotated marking line and nearby water area image sample library are randomly divided into a training set, a verification set, and a test set at a ratio of 8:1:1; the Mask RCNN model is trained with the training set, its optimal weights are screened with the verification set, and the trained Mask RCNN model is tested with the test set.
3. The intelligent river water level recognition algorithm based on deep learning according to claim 1, characterized in that: the value of N is 3 and the three colors from top to bottom are color A, color B, and color C, and the step "if not all color types of marking lines are segmented, issue an alarm of the corresponding level according to the colors of the missing marking lines" specifically comprises the following steps:
if the color-C marking line is not segmented, the water level is judged to be above the color-C marking line, and a color-C alarm is executed;
if the color-C and color-B marking lines are not segmented, the water level is judged to be above the color-B marking line, and a color-B alarm is executed;
if the color-C, color-B, and color-A marking lines are not segmented, the water level is judged to be above the color-A marking line, and a color-A alarm is executed.
4. The intelligent river water level recognition algorithm based on deep learning according to claim 3, characterized in that the step of calculating the current water level of the water surface with a graphical algorithm and mathematical formulas specifically comprises the following steps:
s51, removing the reflection of the marking line in the water surface
Acquiring the areas of the regions surrounded by the contour lines of the N color marking lines and the area of the region surrounded by the water surface contour line, respectively calculating the intersection area of the region surrounded by the contour line of each color marking line and the region surrounded by the water surface contour line, obtaining the ratio of the area to the area of the region surrounded by the contour line of the marking line, and rejecting the marking lines with the area ratio larger than 0.5;
s52, according to the segmentation result, obtaining the minimum circumscribed rotated rectangle of the contour of each of the three colored marking lines A, B, C;
s53, obtaining one long side of each of the minimum circumscribed rotated rectangles of the three colored marking lines A, B, C, three sides in total;
s54, computing the unit direction vectors of the three sides acquired in step S53 such that the signs of the components of the three unit direction vectors on the x axis are the same, computing the mean vector of the three unit vectors, taking the mean vector as the horizontal vector, and taking the unit vector orthogonal to the horizontal vector as the vertical vector, denoted (x_v, y_v);
s55, obtaining the center point Ca of the minimum circumscribed rotated rectangle of the color-A marking line, i.e., the center point of the color-A marking line;
s56, obtaining the straight line L that passes through the center point Ca of the color-A marking line and is parallel to the vertical vector v, solving for the intersections of the straight line L with the water surface contour, and obtaining the point P with the smallest ordinate among all the intersections;
s57, calculating, with the following formula, the distance d(Ca, P) from point Ca to point P, i.e., the pixel distance from the water surface to the center point of the color-A marking line:
d(Ca, P) = sqrt((x_c - x0)^2 + (y_c - y0)^2)
wherein (x_c, y_c) are the coordinates of point Ca and (x0, y0) are the coordinates of point P;
s58, calculating, with the following formula, the physical distance corresponding to a single pixel in the real-time image of the marking lines and the nearby water area:
d_per_pixel=A_w_cm/A_w_pixel
wherein d_per_pixel is the physical distance corresponding to a single pixel in the image, in cm; A_w_cm is the actual width of the color-A marking line, in cm, measured when the marking line is placed on the river bank; A_w_pixel is the length of the short side of the color-A marking line's minimum circumscribed rotated rectangle, in pixels;
s59, calculating the physical distance d (water, A) from the center point Ca of the marking line of the color A to the water surface along the vertical direction by adopting the following formula:
d(water,A)=d(Ca,P)×d_per_pixel
s510, calculating the water level of the current water surface by adopting the following formula:
water_level=A_level-d(water,A)-A_w_cm/2
wherein water_level is the water level of the current water surface, A_level is the water level at the upper edge of the color-A marking line, d(water, A) is the physical distance from the center point Ca of the color-A marking line to the water surface along the vertical direction, and A_w_cm is the measured actual width of the color-A marking line.
5. The intelligent river water level recognition algorithm based on deep learning according to claim 4, characterized in that: the minimum circumscribed rotated rectangle is obtained by calling the minAreaRect function of the computer vision and machine learning library OpenCV.
6. The intelligent river water level recognition algorithm based on deep learning according to claim 4, characterized in that the unit direction vector is calculated as follows:
assuming that (x1, y1), (x2, y2) are two end points of an edge, the unit direction vector of the edge is:
((x2 - x1)/len, (y2 - y1)/len), where len = sqrt((x2 - x1)^2 + (y2 - y1)^2)
the calculation method of the mean vector comprises the following steps:
Let e1: (x1, y1), e2: (x2, y2), e3: (x3, y3) be three unit vectors; the mean vector of the three unit vectors is:
((x1+x2+x3)/3,(y1+y2+y3)/3)
the calculation method of the vertical vector comprises the following steps:
the unit vector orthogonal to vector (x, y) is:
(-y/sqrt(x^2 + y^2), x/sqrt(x^2 + y^2))
7. The intelligent river water level recognition algorithm based on deep learning according to claim 4, characterized in that the center point Ca of the minimum circumscribed rotated rectangle of the color-A marking line is obtained as follows:
assuming that (x1 ', y 1'), (x2 ', y 2'), (x3 ', y 3'), (x4 ', y 4') are coordinates of four vertices of the rotated rectangle, and coordinates of the center point Ca of the rotated rectangle are (x _ c, y _ c), then
x_c=(x1’+x2’+x3’+x4’)/4;
y_c=(y1’+y2’+y3’+y4’)/4。
8. The intelligent river water level recognition algorithm based on deep learning according to claim 4, characterized in that the step "obtaining the straight line L that passes through the center point Ca of the color-A marking line and is parallel to the vertical vector v, solving for the intersections of the straight line L with the water surface contour, and obtaining the point P with the smallest ordinate among all the intersections" specifically comprises the following steps:
s561, assuming that the coordinates of the point Ca are (x _ c, y _ c) and the vertical vector v is (x _ v, y _ v), the equation of the straight line L passing through the center point Ca and parallel to the vertical vector v is:
A*x+B*y+C=0
wherein the content of the first and second substances,
A=y_v
B=-x_v
C=x_v*y_c-y_v*x_c
s562, the water surface contour is a set of points ordered clockwise or counterclockwise, denoted [pt1, pt2, ..., ptn], where the first point coincides with the last; traversing the segments [pti, pti+1] on the water surface contour, where i = 1, 2, ..., n-1;
S563, calculating the intersection point of the line segment on the water surface contour line and the straight line L, wherein the point with the minimum vertical coordinate in all the intersection points is the intersection point P;
the intersection of the straight line L: A*x + B*y + C = 0 with the segment [(xa, ya), (xb, yb)] is calculated as follows:
step 1: judging whether the two end points (xa, ya), (xb, yb) of the line segment are on the straight line L, that is, checking whether the following two equations hold:
A*xa+B*ya+C=0
A*xb+B*yb+C=0
if either equation holds, the corresponding endpoint is an intersection of the line and the segment; otherwise go to step 2;
step 2: judging whether two end points (xa, ya) and (xb, yb) of the line segment are on the same side of the straight line L, namely judging whether the following inequality is true:
(A*xa+B*ya+C)*(A*xb+B*yb+C)>0
if the inequality is true, two end points of the line segment are on the same side of the straight line L, and the straight line and the line segment do not have an intersection point; if the inequality is not true, the two end points are on different sides of the straight line, the straight line and the line segment have an intersection point, and the step 3 is switched;
and step 3: calculating the functional relation of the straight line on which the segment lies: A2*x + B2*y + C2 = 0;
wherein A2 = yb - ya, B2 = xa - xb, C2 = xb*ya - xa*yb;
the intersection (x0, y0) of the line L and the line segment is determined by the following formula:
x0=(C2*B-C*B2)/(A*B2-A2*B)
y0=(C*A2-C2*A)/(A*B2-A2*B)。
CN202210398388.XA 2022-04-15 2022-04-15 River water level intelligent identification algorithm based on deep learning Pending CN114677594A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210398388.XA CN114677594A (en) 2022-04-15 2022-04-15 River water level intelligent identification algorithm based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210398388.XA CN114677594A (en) 2022-04-15 2022-04-15 River water level intelligent identification algorithm based on deep learning

Publications (1)

Publication Number Publication Date
CN114677594A true CN114677594A (en) 2022-06-28

Family

ID=82078050

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210398388.XA Pending CN114677594A (en) 2022-04-15 2022-04-15 River water level intelligent identification algorithm based on deep learning

Country Status (1)

Country Link
CN (1) CN114677594A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116129430A (en) * 2023-01-28 2023-05-16 武汉大水云科技有限公司 Self-adaptive environment water level identification method, device and equipment
CN115830460A (en) * 2023-02-16 2023-03-21 智洋创新科技股份有限公司 Real-time monitoring method and system for river flood prevention
CN116935289A (en) * 2023-09-13 2023-10-24 长江信达软件技术(武汉)有限责任公司 Open channel embankment detection method based on video monitoring
CN116935289B (en) * 2023-09-13 2023-12-19 长江信达软件技术(武汉)有限责任公司 Open channel embankment detection method based on video monitoring
CN117351647A (en) * 2023-12-06 2024-01-05 阳光学院 Tidal water environment monitoring and alarming device and monitoring and alarming method
CN117351647B (en) * 2023-12-06 2024-02-06 阳光学院 Tidal water environment monitoring and alarming device and monitoring and alarming method

Similar Documents

Publication Publication Date Title
CN114677594A (en) River water level intelligent identification algorithm based on deep learning
US20210374466A1 (en) Water level monitoring method based on cluster partition and scale recognition
CN112766274B (en) Water gauge image water level automatic reading method and system based on Mask RCNN algorithm
CN111563473B (en) Remote sensing ship identification method based on dense feature fusion and pixel level attention
CN109145830B (en) Intelligent water gauge identification method
CN102842045B (en) A kind of pedestrian detection method based on assemblage characteristic
CN109900706A (en) A kind of weld seam and weld defect detection method based on deep learning
CN109285139A (en) A kind of x-ray imaging weld inspection method based on deep learning
CN108960135B (en) Dense ship target accurate detection method based on high-resolution remote sensing image
CN108734143A (en) A kind of transmission line of electricity online test method based on binocular vision of crusing robot
CN110286126A (en) A kind of wafer surface defects subregion area detecting method of view-based access control model image
CN113409314B (en) Unmanned aerial vehicle visual detection and evaluation method and system for corrosion of high-altitude steel structure
CN108960229A (en) One kind is towards multidirectional character detecting method and device
CN106529537A (en) Digital meter reading image recognition method
CN107064170A (en) One kind detection phone housing profile tolerance defect method
CN109214308A (en) A kind of traffic abnormity image identification method based on focal loss function
CN111104860B (en) Unmanned aerial vehicle water quality chromaticity monitoring method based on machine vision
CN104268538A (en) Online visual inspection method for dot matrix sprayed code characters of beverage cans
CN108564077A (en) It is a kind of based on deep learning to detection and recognition methods digital in video or picture
CN112507865B (en) Smoke identification method and device
CN112906689B (en) Image detection method based on defect detection and segmentation depth convolutional neural network
CN111291684A (en) Ship board detection method in natural scene
CN114743119A (en) High-speed rail contact net dropper nut defect detection method based on unmanned aerial vehicle
CN111062383A (en) Image-based ship detection depth neural network algorithm
CN114331986A (en) Dam crack identification and measurement method based on unmanned aerial vehicle vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination