CN116486212A - Water gauge identification method, system and storage medium based on computer vision - Google Patents


Info

Publication number
CN116486212A
CN116486212A
Authority
CN
China
Prior art keywords
water gauge
target
targets
target water
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310568740.4A
Other languages
Chinese (zh)
Inventor
陈震东
李世强
吴子昊
宋倚天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Sifu Technology Co ltd
Original Assignee
Harbin Sifu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Sifu Technology Co ltd filed Critical Harbin Sifu Technology Co ltd
Priority to CN202310568740.4A priority Critical patent/CN116486212A/en
Publication of CN116486212A publication Critical patent/CN116486212A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • G06N5/041Abduction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/32Normalisation of the pattern dimensions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/02Recognising information on displays, dials, clocks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of hydraulic engineering, and discloses a water gauge identification method, a system and a storage medium based on computer vision: acquiring a water gauge image, marking and selecting a frame for a target in the water gauge image according to a preset marking standard, and generating a water gauge data set; training a water gauge data set based on the pre-training model to generate an inference model; acquiring a target water gauge image, preprocessing the target water gauge image, and inputting an inference model to acquire a target in the target water gauge image; judging whether the picture of the target water gauge image meets a preset standard or not; detecting whether an obstacle shields the target water gauge when the preset standard is met; judging whether the target water gauge is inclined or not based on the inclination angle of the target water gauge when the target water gauge is not shielded by the obstacle; and when the inclination angle is smaller than a first preset value, taking the sum of the ruler surface data of the target water gauge and the zero point elevation data of the water gauge as the water level height. The invention solves the problem of low water level measurement precision and accuracy of the water gauge.

Description

Water gauge identification method, system and storage medium based on computer vision
Technical Field
The invention belongs to the technical field of hydraulic engineering, and particularly relates to a water gauge identification method, a system and a storage medium based on computer vision.
Background
At present, in the field of hydrology, water level observation for rivers, lakes and other water areas relies on observation points distributed at different positions. It is carried out mainly by hydrological workers who inspect, read and record manually, after which the data are entered into an information system at the hydrological station. At some observation points, network cameras are deployed and their pictures are returned to a hydrological station or an observation platform, where water level data are then read by manual recording or by computer vision technology.
Water level data are read from water gauges. In the general case, the water level can vary by several meters over the year, so a plurality of water gauges must be installed within a certain distance range from the bank, and different gauges are observed when the water level is at different heights. As shown in fig. 1, a schematic diagram of a hydrological water gauge setup, the picture contains 3 water gauges, "P3", "P4" and "P5". The effective water gauge is "P4", whose current reading is 0.26 m; assuming that the zero point elevation of the "P4" gauge is 100 m, the current water level value is 100.26 m.
In the prior art, the water level is measured mainly by setting up a video monitoring system and configuring a standard water gauge, so that the level can be read by an image method. For example, Chinese patent CN112907506A discloses a water level detection method, device and storage medium for a water gauge of indefinite length based on water gauge color information, which projects the water gauge onto a standard water gauge image in a world coordinate system through a perspective projection matrix; for this method to read effectively, a large number of standard water gauges must be recorded, which involves a very large workload. It also suffers from inaccurate or failed recognition when the gauge is shielded by sundries, inclined, or its characters are unclear. As another example, Chinese patent CN114067095A discloses a water level recognition method based on water gauge character detection and recognition, which adopts a two-step deep learning method that first extracts the whole water gauge and then extracts the E patterns on it; the two-step recognition cannot be completed in a single pass and its complexity is high, and individual E patterns left unrecognized or recognized repeatedly because of weather, scale surface stains and other causes lead to large differences in the reading results.
Therefore, providing a water gauge identification method, system and storage medium based on computer vision that improve the precision and accuracy of water gauge water level measurement is a problem to be solved urgently.
Disclosure of Invention
Aiming at the technical problems, the invention provides a water gauge identification method, a system and a storage medium based on computer vision, which aim to realize accurate reading of a water gauge with relatively low calculation power requirement through a target detection method and simultaneously solve the problems of poor universality of water gauge identification based on a template matching technology, high requirement on the definition of a gauge surface based on character identification and large error.
In a first aspect, the present invention provides a computer vision-based water gauge identification method, the method comprising the steps of:
step 1, acquiring a water gauge image, and marking and frame-selecting the targets in the water gauge image one by one according to a preset marking standard to generate a water gauge data set, wherein the preset marking standard divides the water gauge into 11 targets: the gauge body, the E pattern, and the numbered E patterns, labeled waterrule, E, 1E, 2E, 3E, 4E, 5E, 6E, 7E, 8E and 9E respectively in one-to-one correspondence; meanwhile, the 11 targets are divided into 3 classes, wherein waterrule is class 1, E is class 2 and mE is class 3, m being a positive integer from 1 to 9;
step 2, training a water gauge data set based on a pre-training model with target detection capability to generate an inference model;
step 3, acquiring a target water gauge image of the water level to be detected, and preprocessing the target water gauge image;
step 4, inputting the preprocessed target water gauge image into an inference model to obtain a target in the target water gauge image;
step 5, judging whether the picture of the target water gauge image meets the preset standard or not based on the position of the target water gauge in the target water gauge image, generating an abnormal alarm when the picture does not meet the preset standard, and sending the abnormal alarm to the user terminal;
step 6, detecting whether an obstacle shields the target water gauge or not when the preset standard is met, generating an abnormal alarm when the obstacle shields the target water gauge, and sending the abnormal alarm to the user terminal;
step 7, judging whether the target water gauge is inclined or not based on the inclination angle of the target water gauge when no obstacle shields the target water gauge, generating an abnormal alarm when the inclination angle is larger than or equal to a first preset value, and sending the abnormal alarm to the user terminal;
and 8, when the inclination angle is smaller than a first preset value, taking the sum of the ruler surface data of the target water gauge and the zero point elevation data of the water gauge as the water level height.
Specifically, in step 3, the preprocessing includes converting the target water gauge image into the cv::Mat data type and adjusting the size of the target water gauge image to a preset size.
Specifically, step 4 further includes labeling coordinate information of the target in the target water gauge image by using a frame coordinate method, and obtaining confidence information of the target in the target water gauge image, wherein the coordinate system uses an upper left corner of a picture of the target water gauge image as a coordinate origin, an axis on the right side of the coordinate origin is a positive half axis of an X axis, and an axis below the coordinate origin is a positive half axis of a Y axis.
Specifically, step 5 includes:
step 51, acquiring first coordinate information of the gauge body of the target water gauge;
step 52, respectively calculating the distances between the four edges of the target water gauge body and the corresponding image edges of the target water gauge image based on the first coordinate information and on the image height and image width of the target water gauge image; when all four distances are greater than or equal to a second preset value, judging that the picture of the target water gauge image meets the preset standard, otherwise judging that it does not.
Specifically, step 6 includes:
step 61, acquiring the first coordinate information, and calculating the coordinate Pos_bm of the bottom center point of the target water gauge, the calculation formula being:
Pos_bm = [(X_lt + X_rb)/2, Y_rb],
wherein X_lt is the X coordinate value of the upper left corner of the lowermost target in the target water gauge image, X_rb is the X coordinate value of the lower right corner of the lowermost target, and Y_rb is the Y coordinate value of the lower right corner of the lowermost target;
step 62, drawing a rectangular area by taking the bottom center point as the center, taking a third preset value as a length value and taking a fourth preset value as a width value, and cutting out the rectangular area to serve as a new target water gauge image;
and 63, inputting a new target water gauge image into a preset shielding detection classification model to detect whether an obstacle shields the target water gauge.
Specifically, step 7 includes:
step 71, acquiring all class 2 targets and all class 3 targets in the target water gauge image, and acquiring first coordinate information of each class 2 target and second coordinate information of each class 3 target;
step 72, constructing a set A = ([E_1, Pos_1], [E_2, Pos_2] … [E_n, Pos_n]) based on all class 2 targets and the first coordinate information, and constructing a set B = ([(i)E, Pos_i], [(i+1)E, Pos_(i+1)] … [(i+p-1)E, Pos_(i+p-1)]) based on all class 3 targets and the second coordinate information, wherein n is the total number of class 2 targets in the target water gauge image, p is the total number of class 3 targets in the target water gauge image, and i is the number corresponding to the lowermost class 3 target in the target water gauge image;
step 73, taking out the Y coordinates of the upper left corners of all class 2 targets in set A, sorting the class 2 targets in descending order of these Y coordinates to obtain a new set A, fitting based on the upper-left-corner XY coordinates of the newly sorted class 2 targets in the new set A, and calculating a first fitting slope Ca;
step 74, taking out the XY coordinates of the lower right corners of all class 3 targets in set B, fitting based on these coordinates, and calculating a second fitting slope Cb;
step 75, calculating a slope C and an oblique angle α of the target water gauge based on the first fitting slope Ca and the second fitting slope Cb, wherein the calculation formula is as follows:
C=(Ca+Cb)/2,
α=arctan(C)。
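Steps 73 to 75 can be sketched as a least-squares fit followed by the slope average of step 75. The patent does not state the fitting direction, so this sketch assumes x is fitted as a function of y, which makes a perfectly vertical gauge yield slope 0 and tilt angle 0:

```python
import math

# Sketch of steps 73-75. The patent does not state the fitting direction;
# here x is fitted as a linear function of y (slope dx/dy), so a perfectly
# vertical gauge gives slope 0 and tilt angle 0.
def fit_slope(points):
    """Least-squares slope dx/dy through a list of (x, y) corner points."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    num = sum((y - mean_y) * (x - mean_x) for x, y in points)
    den = sum((y - mean_y) ** 2 for _, y in points)
    return num / den


def tilt_angle(ca, cb):
    """C = (Ca + Cb) / 2 and alpha = arctan(C), per step 75 (in degrees)."""
    c = (ca + cb) / 2
    return math.degrees(math.atan(c))
```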
specifically, step 8 includes:
step 81, obtaining all class 2 targets and all class 3 targets in the target water gauge image, obtaining third coordinate information corresponding to each class 2 and class 3 target, respectively calculating the height pixel value of each class 2 and class 3 target based on the third coordinate information, and averaging all obtained height pixel values to obtain the average height D_avg; the calculation formula of the height pixel value is:
D_q = Y_rbq − Y_ltq,
wherein D_q is the height pixel value of the q-th target among all class 2 and class 3 targets, Y_rbq is the Y coordinate value of the lower right corner of the q-th target, Y_ltq is the Y coordinate value of the upper left corner of the q-th target, and q is a positive integer from 1 to 10;
the calculation formula of the average height D_avg is:
D_avg = (1/Q) · Σ_{q=1}^{Q} D_q,
wherein Q is the total number of all class 2 targets and class 3 targets;
step 82, sorting all the class 2 targets and class 3 targets according to the Y coordinate value of the lower right corner in the third coordinate information, and taking the complete target closest to the water surface as a first target;
when the first target is a class 3 target, the total height G of the first target and all targets above the first target is:
G=1-K1*0.1,
wherein K1 is a number corresponding to the first target, and G is in meters;
when the first target is a class 2 target, the total height G of the first target and all targets above the first target is:
G=1-K2*0.1+0.05,
wherein K2 is a number corresponding to a category 3 object above the first object and immediately adjacent to the first object, and G is in meters;
step 83, obtaining the Y coordinate value Y_rbw of the lower right corner of the class 1 target waterrule in the target water gauge image; the height difference ΔH between the first target and the lower edge of the target water gauge is:
ΔH = Y_rbw − Y_rbE,
wherein Y_rbE is the Y coordinate value of the lower right corner of the first target, and ΔH is in pixels;
the real-world height corresponding to the height difference ΔH is:
ΔH_real = F · ΔH,
wherein F is the ratio of the real-world height D_real of an E pattern to the average height D_avg, and ΔH_real is in meters;
step 84, calculating the height J of the part of the target water gauge above the water surface based on the total height G and on the real-world height corresponding to the height difference ΔH, the calculation formula being: J = G + ΔH_real;
the scale surface data K of the target water gauge is then: K = 1 − J, wherein the height J and the scale surface data K are in meters;
step 85, calculating the water level height M based on the scale surface data K and the water gauge zero point elevation data, the calculation formula being: M = K + L, wherein L is the water gauge zero point elevation data corresponding to the preset position where the target water gauge image is taken, and M is in meters.
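The arithmetic of steps 82 to 85 can be condensed into one hedged sketch; only the formulas come from the text above, and every name and argument here is illustrative:

```python
# Hedged sketch of the arithmetic in steps 82-85; only the formulas come from
# the patent text, every name and argument here is illustrative.
def water_level(first_is_numbered, k, delta_h_px, d_avg_px, e_height_m,
                zero_elevation_m):
    """Return the water level M in meters.

    first_is_numbered: True if the first complete target above the water is a
        class 3 target (then k = K1, its number); False if it is a plain E
        (then k = K2, the number of the class 3 target immediately above it).
    delta_h_px: pixel height difference between the gauge's lower edge and the
        first target's lower-right corner (step 83).
    d_avg_px: average pixel height D_avg of the E targets; e_height_m is the
        real-world height of one E pattern, so F = e_height_m / d_avg_px.
    """
    if first_is_numbered:
        g = 1 - k * 0.1                   # total height G (class 3 first target)
    else:
        g = 1 - k * 0.1 + 0.05            # total height G (class 2 first target)
    f = e_height_m / d_avg_px             # meters per pixel
    j = g + f * delta_h_px                # height J above the water surface
    k_reading = 1 - j                     # scale surface data K = 1 - J
    return k_reading + zero_elevation_m   # M = K + L
```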
In a second aspect, the present invention also provides a water gauge identification system based on computer vision, the system comprising: the system comprises a model training module, a target acquisition module, an abnormality judgment module and a water level calculation module;
the model training module acquires a water gauge image, marks and frame-selects the targets in the water gauge image one by one according to a preset marking standard, and generates a water gauge data set, wherein the preset marking standard divides the water gauge into 11 targets: the gauge body, the E pattern, and the numbered E patterns, labeled waterrule, E, 1E, 2E, 3E, 4E, 5E, 6E, 7E, 8E and 9E respectively; it trains the water gauge data set based on a pre-training model with target detection capability, and generates an inference model;
the target acquisition module acquires a target water gauge image of the water level to be detected and performs pretreatment on the target water gauge image; inputting the preprocessed target water gauge image into an inference model to obtain a target in the target water gauge image;
the abnormality judging module is used for judging whether the picture of the target water gauge image meets the preset standard or not based on the position of the target water gauge in the target water gauge image, generating an abnormality alarm when the picture does not meet the preset standard, and sending the abnormality alarm to the user terminal; detecting whether an obstacle shields the target water gauge or not when the preset standard is met, generating an abnormal alarm when the obstacle shields the target water gauge, and sending the abnormal alarm to the user terminal; judging whether the target water gauge is inclined or not based on the inclination angle of the target water gauge when no obstacle shields the target water gauge, generating an abnormal alarm when the inclination angle is larger than or equal to a first preset value, and sending the abnormal alarm to the user terminal;
and the water level calculating module is used for taking the sum of the ruler surface data of the target water gauge and the zero point elevation data of the water gauge as the water level height when the inclination angle is smaller than a first preset value.
In a third aspect, the present invention provides a computer storage medium, where program instructions are stored, where when the program instructions are executed, the device in which the computer storage medium is located is controlled to execute any one of the above-mentioned water gauge identification methods based on computer vision.
Compared with the prior art, the invention has the following beneficial effects:
1. the difficulty and the workload of the data set labeling are reduced, the calculation force requirement and the energy consumption during the operation are reduced, and the recognition speed during the operation can be improved;
2. the contour recognition is more accurate, the reading accuracy is higher, and the problem of larger reading error when the water gauge surface is unclear can be solved;
3. the water gauge has good compatibility, and under the condition of the same data set quantity, the water gauge identification method based on target identification can adapt and be compatible with water gauges with more appearance forms;
4. the saved calculation force can be used for richer water gauge detection, such as water gauge inclination detection, water gauge shielding detection and the like, so that the water gauge reading is more accurate.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic view of a hydrological scale;
FIG. 2 is a flow chart of a water gauge identification method based on computer vision according to the present invention;
FIG. 3 is a schematic illustration of labeling and framing targets in a water gauge image in an embodiment of the invention;
FIG. 4 is a schematic diagram of recognition results of an inference model in an embodiment of the present invention;
FIG. 5 is a schematic view of an obstacle occlusion in an embodiment of the invention;
FIG. 6A is a schematic diagram of a rectangular area with obstruction in an embodiment of the invention;
FIG. 6B is a schematic diagram of a rectangular area without obstruction in an embodiment of the invention;
FIG. 7 is a schematic diagram of a slope line for calculating a fit slope in an embodiment of the present invention;
FIG. 8 is a schematic diagram of a water gauge identification system based on computer vision according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be apparent that the particular embodiments described herein are merely illustrative of the present invention and are some, but not all embodiments of the present invention. All other embodiments, which can be made by one of ordinary skill in the art without undue burden on the person of ordinary skill in the art based on embodiments of the present invention, are within the scope of the present invention.
It should be noted that, if there is a description of "first", "second", etc. in the embodiments of the present invention, the description of "first", "second", etc. is only for descriptive purposes, and is not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but it is necessary to base that the technical solutions can be realized by those skilled in the art, and when the technical solutions are contradictory or cannot be realized, the combination of the technical solutions should be considered to be absent and not within the scope of protection claimed in the present invention.
Fig. 2 is a flowchart of an embodiment of a water gauge identification method based on computer vision, where the flowchart specifically includes the following steps:
step 1, acquiring a water gauge image, and marking and frame-selecting the targets in the water gauge image one by one according to a preset marking standard to generate a water gauge data set, wherein the preset marking standard divides the water gauge into 11 targets: the gauge body, the E pattern, and the numbered E patterns, labeled waterrule, E, 1E, 2E, 3E, 4E, 5E, 6E, 7E, 8E and 9E respectively in one-to-one correspondence; meanwhile, the 11 targets are divided into 3 classes, wherein waterrule is class 1, E is class 2 and mE is class 3, m being a positive integer from 1 to 9.
Illustratively, the annotation format employs a coco dataset format.
The targets in the water gauge image are marked and frame-selected one by one; during marking, the marking frame should fit the edges of each target exactly, and for slightly inclined targets it must be ensured that all four corners of the target lie within the marking frame. Illustratively, the label frame of the target "waterrule" should extend from the upper rim of the exposed gauge body down to the boundary between the gauge body and the water, and its left-right width should cover the target "E" and the targets "mE"; the labeling frame of the target "E" should be the E pattern itself; the label box of a target "mE" should be the entirety of the numbered pattern. A schematic diagram of labeling and framing the targets in the water gauge image is shown in fig. 3.
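Since the annotation format is the coco dataset format, a single labeled image can be sketched as the record below; the file name, image size and box values are made-up illustrations, not data from the patent:

```python
# Minimal hand-written COCO-style record; all concrete values are assumptions.
dataset = {
    "images": [{"id": 1, "file_name": "gauge_001.jpg",
                "width": 1920, "height": 1080}],
    "categories": [
        {"id": 1, "name": "waterrule"},   # class 1: the gauge body
        {"id": 2, "name": "E"},           # class 2: a plain E pattern
        {"id": 3, "name": "9E"},          # class 3: a numbered E pattern
    ],
    "annotations": [
        # bbox is [x, y, width, height], origin at the image's top-left corner
        {"id": 1, "image_id": 1, "category_id": 1, "bbox": [900, 120, 80, 760]},
        {"id": 2, "image_id": 1, "category_id": 3, "bbox": [905, 130, 35, 48]},
    ],
}
```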
And 2, training a water gauge data set based on a pre-training model with target detection capability, and generating an inference model.
The pre-training model is a model trained for detecting targets in the water gauge; it is trained further with the water gauge data set generated in step 1, so that its parameters are fine-tuned (Finetune) to generate an inference model for water gauge identification. After a water gauge image is input into the inference model, the targets in the water gauge image can be automatically identified according to the preset labeling standard.
In the embodiment of the invention, the adopted network is a RetinaNet fully convolutional neural network, trained on an Nvidia 3090 Ti GPU card (24 GB). The setting of the hyper-parameters of a neural network model greatly influences the actual performance of the algorithm; they are generally divided into training-stage hyper-parameters and structural hyper-parameters. The hyper-parameter optimization strategy for the training stage of the inference model is as follows: 1. to balance the precision of target recognition and the convergence speed of the model, the batch size is chosen as samples_per_gpu=8 and worker_per_gpu=10, making full use of hardware parallelism to accelerate computation and improve convergence speed; 2. to control the convergence amplitude of the model parameters and avoid instability caused by an overly large learning rate, learning rate=0.0025 is set, and the learning rate is adjusted dynamically to achieve higher model precision; 3. for the problem of sample imbalance, ratio_list is optimized toward a better positive-to-negative sample proportion so that minority categories are learned better and target recognition performance improves; 4. the confidence thresholds are set to waterrule_thre=0.75, e_thre=0.7 and ne_thre=0.7 to balance precision and recall.
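The training-stage hyper-parameter values quoted above can be collected into a plain configuration dictionary. The nesting and any field names not quoted in the text are assumptions, loosely following the style of mmdetection configuration files:

```python
# The hyper-parameter values quoted in the text collected into a plain dict;
# the nesting and any field names not quoted there are assumptions, loosely
# following mmdetection-style configuration files.
train_cfg = {
    "data": {"samples_per_gpu": 8, "worker_per_gpu": 10},   # batch settings
    "optimizer": {"type": "SGD", "lr": 0.0025},             # learning rate
    "score_thresholds": {                                   # confidence thresholds
        "waterrule_thre": 0.75, "e_thre": 0.70, "ne_thre": 0.70,
    },
}
```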
And step 3, acquiring a target water gauge image of the water level to be detected, and preprocessing the target water gauge image.
Specifically, in step 3, the preprocessing includes converting the target water gauge image into the cv::Mat data type and adjusting the size of the target water gauge image to a preset size.
The preprocessing is to process the input content to make it meet the requirement of the subsequent detection step, and after preprocessing, the target water gauge image is stored in the variable and transferred to the inference model.
The preprocessing further comprises: when a video containing the target water gauge image is obtained, frames are extracted from the video, the video frame size is standardized, and the frame is then converted into the cv::Mat data type, stored in a variable and passed to the inference model; when USB camera data containing the target water gauge image are obtained, the USB picture frame is read through the cv::VideoCapture interface, the picture frame size is standardized, and the frame is converted into the cv::Mat data type, stored in a variable and passed to the inference model; when RTSP video stream data containing the target water gauge image are obtained, the video is read from the RTSP address, frames are extracted, the video frame size is standardized, and the frame is then converted into the cv::Mat data type, stored in a variable and passed to the inference model.
the standardized picture or video frame sizes are focused on standardization, i.e., size uniformity, and in embodiments of the present invention, the sizes are configurable, i.e., standardized to configured sizes. Under the condition that the original picture is large enough and clear enough, the larger the input size is, the more is beneficial to improving the detection precision, but the too large input size also brings the improvement of the calculation force demand and the reduction of the calculation speed, so that the precision and the calculation force speed are simultaneously considered in the size configuration.
And step 4, inputting the preprocessed target water gauge image into an inference model to obtain a target in the target water gauge image.
The total number of targets is at most 11.
After the target water gauge image is input into the inference model, the targets in the image can be identified, as shown in fig. 4. The targets identified there are waterruler, E, 9E, E, 8E, and E.
Specifically, step 4 further includes labeling the coordinate information of each target in the target water gauge image by the frame (bounding-box) coordinate method and obtaining the confidence information of each target, wherein the coordinate system takes the upper left corner of the picture of the target water gauge image as the origin, the axis to the right of the origin as the positive X half-axis, and the axis below the origin as the positive Y half-axis.
The frame coordinates are ([Xlt, Ylt], [Xrb, Yrb]), where [Xlt, Ylt] is the upper-left corner and [Xrb, Yrb] the lower-right corner. The confidence information is a decimal in the interval 0 to 1. In a specific implementation a confidence threshold of 0.86 works well: when confidence >= 0.86, the detection is regarded as a valid target.
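A minimal sketch of the detection-output handling described above, assuming each detection carries a label, the frame coordinates ([Xlt, Ylt], [Xrb, Yrb]), and a confidence in [0, 1]; the data layout and sample values are illustrative — only the 0.86 threshold comes from the text:

```python
# Keep only detections whose confidence meets the reported threshold.
CONF_THRESHOLD = 0.86

def valid_targets(detections):
    """detections: list of (label, (xlt, ylt), (xrb, yrb), confidence)."""
    return [d for d in detections if d[3] >= CONF_THRESHOLD]

dets = [
    ("waterruler", (10, 5), (90, 400), 0.97),
    ("3E", (20, 120), (80, 160), 0.91),
    ("E", (20, 160), (80, 200), 0.42),  # below threshold, discarded
]
print([d[0] for d in valid_targets(dets)])  # ['waterruler', '3E']
```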
And 5, judging whether the picture of the target water gauge image meets the preset standard or not based on the position of the target water gauge in the target water gauge image, generating an abnormal alarm when the picture does not meet the preset standard, and sending the abnormal alarm to the user terminal.
Specifically, step 5 includes:
and 51, acquiring first coordinate information of the blade of the target water gauge.
And 52, respectively calculating the distances between the four edges of the target water gauge body and the image edges of the target water gauge image based on the first coordinate information and the image height and image width of the target water gauge image; when the distances between all four edges of the body and the image edges are greater than or equal to the second preset value, it is judged that the picture of the target water gauge image meets the preset standard, otherwise it is judged that it does not.
The magnitude of the second preset value is set according to experience of a person skilled in the art or according to an actual application scenario, which is not limited in the embodiment of the present application.
Whether any edge of the water gauge lies on the edge of the picture can be identified from its coordinates. Illustratively, when the lower-right Y coordinate Yrb of the water gauge body equals the picture height, the lower edge of the gauge lies on the picture edge; when the upper-left coordinate Ylt = 0, the upper edge lies on the picture edge; when the lower-right coordinate Xrb equals the picture width, the right edge lies on the picture edge; and when the upper-left coordinate Xlt = 0, the left edge lies on the picture edge. For the readings to be efficient and reliable, none of these four cases is acceptable. A margin should be left between the edges of the water gauge and the edges of the picture, i.e. the gauge sits inside the picture at a certain distance from every edge so that the surroundings of the gauge are also captured; the margin may, for example, be 5% of the corresponding picture dimension, or 100 pixels, etc., including but not limited to these example values.
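The margin test above can be sketched as follows; the 5% margin ratio is one of the example values from the text, and the function is an illustration rather than the patent's implementation:

```python
def frame_ok(box, img_w, img_h, margin_ratio=0.05):
    """Check that every edge of the ruler-body box keeps a margin from the picture edge.

    box = (xlt, ylt, xrb, yrb); origin at the top-left corner, Y grows downward.
    """
    mx = img_w * margin_ratio   # required horizontal margin, pixels
    my = img_h * margin_ratio   # required vertical margin, pixels
    xlt, ylt, xrb, yrb = box
    return (xlt >= mx and ylt >= my and
            img_w - xrb >= mx and img_h - yrb >= my)

print(frame_ok((100, 60, 500, 900), 640, 1080))  # True: gauge well inside the frame
print(frame_ok((0, 60, 500, 900), 640, 1080))    # False: left edge on the picture edge
```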
And 6, detecting whether an obstacle shields the target water gauge or not when the preset standard is met, generating an abnormal alarm when the obstacle shields the target water gauge, and sending the abnormal alarm to the user terminal.
"The target water gauge is shielded by an obstacle" means that, in the picture, the gauge is blocked by debris such as waterweed, floating objects, or objects on the shore, especially at the junction of the gauge and the water surface. Such shielding makes the reading inaccurate, whether taken by the algorithm or manually, so a shielding abnormality alarm is generated to notify the relevant personnel, and no reading is taken while the gauge is shielded. A schematic diagram of obstacle shielding is shown in fig. 5.
Specifically, step 6 includes:
Step 61, acquiring the first coordinate information and calculating the coordinate Pos_bm of the bottom center point of the target water gauge, with the calculation formula:
Pos_bm = [(X_lt + X_rb)/2, Y_rb],
where X_lt is the X coordinate of the upper-left corner of the lowest target in the target water gauge image, X_rb is the X coordinate of the lower-right corner of the lowest target, and Y_rb is the Y coordinate of the lower-right corner of the lowest target.
And step 62, drawing a rectangular area by taking the bottom center point as the center, taking the third preset value as the length value and taking the fourth preset value as the width value, and cutting out the rectangular area to serve as a new target water gauge image.
And 63, inputting a new target water gauge image into a preset shielding detection classification model to detect whether an obstacle shields the target water gauge.
The preset shielding detection classification model is a pre-trained classifier for judging obstacles; inputting the new target water gauge image into it yields a boolean result, i.e. 1 = shielded and 0 = not shielded.
In the embodiment of the invention, a schematic diagram of a rectangular area under the shielding of an obstacle is shown in fig. 6A; a schematic diagram of a rectangular area without obstruction is shown in fig. 6B.
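Steps 61–62 can be sketched as below; Pos_bm = [(Xlt + Xrb)/2, Yrb] follows the text, while the clamping of the crop to the image bounds and all numeric values are illustrative additions:

```python
def occlusion_crop(lowest_box, crop_w, crop_h, img_w, img_h):
    """Rectangle of size crop_w x crop_h centred on the bottom midpoint of the lowest target."""
    xlt, ylt, xrb, yrb = lowest_box
    cx, cy = (xlt + xrb) / 2, yrb              # bottom centre point Pos_bm
    x0 = max(0, int(cx - crop_w / 2))          # clamp to the image (added safeguard)
    y0 = max(0, int(cy - crop_h / 2))
    x1 = min(img_w, x0 + crop_w)
    y1 = min(img_h, y0 + crop_h)
    return (x0, y0, x1, y1)

print(occlusion_crop((100, 700, 200, 800), 160, 120, 640, 1080))  # (70, 740, 230, 860)
```

The returned rectangle would then be cut out and passed to the shielding detection classification model of step 63.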
And 7, judging whether the target water gauge is inclined or not based on the inclination angle of the target water gauge when no obstacle shields the target water gauge, generating an abnormal alarm when the inclination angle is larger than or equal to a first preset value, and sending the abnormal alarm to the user terminal.
Inclination of the water gauge refers to tilting caused by natural factors such as water-flow impact, ice-floe impact, or floating-object impact, or by artificial factors such as ship collision or vandalism. Readings from an inclined water gauge contain errors, and when the inclination exceeds the first preset value the error becomes unacceptable, so an inclination abnormality alarm is generated. The magnitude of the first preset value is set according to the experience of those skilled in the art or the actual application scenario, which is not limited in the embodiments of the present application.
Specifically, step 7 includes:
Step 71, acquiring all class-2 targets and all class-3 targets in the target water gauge image, and acquiring the first coordinate information of each class-2 target and the second coordinate information of each class-3 target.
Step 72, constructing set A = ([E_1, Pos_1], [E_2, Pos_2] … [E_n, Pos_n]) based on all class-2 targets and the first coordinate information, and constructing set B = ([(i)E, Pos_i], [(i+1)E, Pos_(i+1)] … [(i+p-1)E, Pos_(i+p-1)]) based on all class-3 targets and the second coordinate information, where n is the total number of class-2 targets in the target water gauge image, p is the total number of class-3 targets, and i is the number corresponding to the lowest class-3 target in the image.
Step 73, taking out the upper-left Y coordinates of all class-2 targets in set A, sorting the class-2 targets in descending order of that Y coordinate to obtain a new set A, fitting the upper-left XY coordinates of the re-sorted class-2 targets, and calculating the first fitting slope Ca.
That is, linear regression is performed on the upper-left corner coordinates of all re-sorted class-2 targets in the new set A, and the slope of the fitted line is the first fitting slope Ca.
Step 74, taking out the lower-right XY coordinates of all class-3 targets in set B, fitting based on those coordinates, and calculating the second fitting slope Cb.
That is, linear regression is performed on the lower-right corner coordinates of all class-3 targets in set B, and the slope of the fitted line is the second fitting slope Cb.
Step 75, calculating a slope C and an oblique angle α of the target water gauge based on the first fitting slope Ca and the second fitting slope Cb, wherein the calculation formula is as follows:
C=(Ca+Cb)/2,
α=arctan(C)。
Fig. 7 shows two examples of calculating the first fitting slope Ca and the second fitting slope Cb: the left graph shows the water gauge tilting to the left, and the right graph shows it tilting to the right. The line used to calculate the first fitting slope Ca is shown as slope line a in fig. 7, and the line used to calculate the second fitting slope Cb as slope line b.
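Steps 73–75 amount to two simple linear regressions followed by an arctangent. A pure-Python sketch with invented corner coordinates is shown below; regressing X on Y (so a perfectly vertical gauge gives slope 0) is an interpretation, since the text does not state the regression axes:

```python
import math

def ls_slope(xs, ys):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Upper-left corners of class-2 targets (set A) and lower-right corners of
# class-3 targets (set B) from a hypothetical, slightly tilted gauge.
# Horizontal position X is regressed on vertical position Y.
ca = ls_slope([100, 200, 300, 400], [40, 44, 48, 52])   # first fitting slope
cb = ls_slope([120, 220, 320, 420], [90, 94, 98, 102])  # second fitting slope
c = (ca + cb) / 2                                       # C = (Ca + Cb) / 2
alpha = math.degrees(math.atan(c))                      # tilt angle
print(c, round(alpha, 2))  # 0.04 and about 2.29 degrees
```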
And 8, when the inclination angle is smaller than a first preset value, taking the sum of the ruler surface data of the target water gauge and the zero point elevation data of the water gauge as the water level height.
Preferably, the scale surface data is calculated based on the coordinates of the target in the target water scale image, and the water scale zero point elevation data is obtained according to the water scale ID corresponding to the cruising preset point position of the camera for obtaining the target water scale image.
Specifically, step 8 includes:
Step 81, obtaining all class-2 targets and all class-3 targets in the target water gauge image, obtaining the third coordinate information corresponding to each of them, calculating the height pixel value of each target from the third coordinate information, and averaging all height pixel values to obtain the average height D_avg. The height pixel value is calculated as:
D_q = Y_rbq − Y_ltq,
where D_q is the height pixel value of the q-th target among all class-2 and class-3 targets, Y_rbq is the lower-right Y coordinate of the q-th target, Y_ltq is its upper-left Y coordinate, and q is a positive integer from 1 to 10.
The average height D_avg is calculated as:
D_avg = (D_1 + D_2 + … + D_Q) / Q,
where Q is the total number of class-2 and class-3 targets, a positive integer of at most 10.
Step 82, sorting all the class 2 targets and class 3 targets according to the Y coordinate value of the lower right corner in the third coordinate information, and taking the complete target closest to the water surface as a first target;
when the first target is a class 3 target, the total height G of the first target and all targets above the first target is:
G=1-K1*0.1,
wherein K1 is a number corresponding to the first target, and G is in meters;
when the first target is a class 2 target, the total height G of the first target and all targets above the first target is:
G=1-K2*0.1+0.05,
wherein K2 is the number corresponding to the class-3 target immediately above the first target, and G is in meters.
When part of a target is below the water surface and part above, the target is an incomplete target. The total height G of the first target and all targets above it counts only the complete class-2 and/or class-3 targets on the ruler body, excluding the incomplete class-2 or class-3 target immediately above the water surface.
Illustratively, when the target closest to the water surface is a complete target, the target with the maximum lower-right Y value is the first target; when the target closest to the water surface is an incomplete target, the complete target with the next-largest lower-right Y value is the first target.
Step 83, obtaining the lower-right Y coordinate Y_rbw of the class-1 target waterruler in the target water gauge image; the height difference ΔH between the first target and the lower edge of the target water gauge is:
ΔH = Y_rbw − Y_rbE,
where Y_rbE is the lower-right Y coordinate of the first target, and ΔH is in pixels.
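A sketch of the step-82 selection, assuming the completeness of each target is already known (the text does not spell out how partial submersion is detected):

```python
def first_target(targets):
    """Pick the lowest complete target.

    targets: list of (label, y_rb, is_complete); larger y_rb = nearer the
    water surface (Y grows downward). Returns the chosen label.
    """
    for label, y_rb, complete in sorted(targets, key=lambda t: t[1], reverse=True):
        if complete:
            return label
    return None

targets = [
    ("5E", 350, True),
    ("E", 395, True),
    ("4E", 430, False),   # partially under water -> skipped
]
print(first_target(targets))  # 'E'
```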
The real-world height corresponding to the height difference ΔH is:
ΔH_real = F × ΔH,
where F is the ratio of the real-world height D_real of an E target to the average height D_avg, i.e. F = D_real / D_avg, preferably with D_real = 0.05 m; ΔH_real is in meters.
Step 84, calculating the height J of the part of the target water gauge above the water surface from the total height G and the real-world height corresponding to ΔH: J = G + ΔH_real.
The scale surface data K of the target water gauge is then: K = 1 − J, where the height J and the scale surface data K are in meters.
Step 85, calculating the water level height M based on the scale surface data K and the water gauge zero-point elevation data: M = K + L, where L is the water gauge zero-point elevation data corresponding to the preset position at which the target water gauge image was taken, and M is in meters.
The camera photographing the water gauge has a plurality of preset positions, and a zero-point elevation L is recorded for the water gauge at the center of each preset position, so the corresponding water gauge zero-point elevation data L can be obtained by querying the camera's current preset position.
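Putting steps 81–85 together, a worked sketch with invented pixel values and an assumed zero-point elevation L = 12.30 m; the formulas and D_real = 0.05 m come from the text:

```python
D_REAL = 0.05  # real-world height of one E target, metres (from the text)

# Step 81: average pixel height of the complete E targets
heights_px = [40, 41, 39, 40]              # D_q = Y_rbq - Y_ltq per target (invented)
d_avg = sum(heights_px) / len(heights_px)  # 40.0 px

# Step 82: suppose the first (lowest complete) target is a class-2 "E" sitting
# just below "4E", so K2 = 4 and G = 1 - K2*0.1 + 0.05
g = 1 - 4 * 0.1 + 0.05                     # 0.65 m

# Step 83: pixel gap between the ruler's lower edge and the first target
y_rbw, y_rbe = 600, 395                    # lower-right Y of ruler body / first target
dh_real = (D_REAL / d_avg) * (y_rbw - y_rbe)  # ΔH_real = F * ΔH

# Steps 84-85: height above water, scale-face reading, water level
j = g + dh_real                            # part of the gauge above the surface
k = 1 - j                                  # scale surface data, metres
m = k + 12.30                              # L = 12.30 m zero-point elevation (assumed)
print(j, k, m)                             # j ≈ 0.906, k ≈ 0.094, m ≈ 12.394
```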
FIG. 8 is a schematic diagram showing the structure of an embodiment of a water gauge identification system based on computer vision. As shown in fig. 8, the system includes: the system comprises a model training module, a target acquisition module, an abnormality judgment module and a water level calculation module.
The model training module acquires a water gauge image and labels and frame-selects the targets in the water gauge image one by one according to a preset labeling standard to generate a water gauge data set, wherein the preset labeling standard divides the water gauge into the ruler body and the E-shaped scale patterns, 11 targets in total, labeled waterruler, E, 1E, 2E, 3E, 4E, 5E, 6E, 7E, 8E, and 9E respectively; the water gauge data set is trained based on a pre-training model with target detection capability to generate an inference model.
The target acquisition module acquires a target water gauge image of the water level to be detected and preprocesses it; the preprocessed target water gauge image is input into the inference model to obtain the targets in the target water gauge image.
The abnormality judging module is used for judging whether the picture of the target water gauge image meets the preset standard or not based on the position of the target water gauge in the target water gauge image, generating an abnormality alarm when the picture does not meet the preset standard, and sending the abnormality alarm to the user terminal; detecting whether an obstacle shields the target water gauge or not when the preset standard is met, generating an abnormal alarm when the obstacle shields the target water gauge, and sending the abnormal alarm to the user terminal; when no obstacle shields the target water gauge, judging whether the target water gauge is inclined based on the inclination angle of the target water gauge, generating an abnormal alarm when the inclination angle is larger than or equal to a first preset value, and sending the abnormal alarm to the user terminal.
And the water level calculating module is used for taking the sum of the ruler surface data of the target water gauge and the zero point elevation data of the water gauge as the water level height when the inclination angle is smaller than a first preset value.
According to another aspect of the embodiment of the present invention, there is provided a computer storage medium, where the computer storage medium stores program instructions, where the program instructions, when executed, control a device in which the computer storage medium is located to perform any one of the above-mentioned water gauge identification methods based on computer vision.
The foregoing describes only preferred embodiments of the invention in some detail and is not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the invention, all of which fall within the protection scope of the invention. Accordingly, the protection scope of the invention is determined by the appended claims.

Claims (9)

1. The water gauge identification method based on computer vision is characterized by comprising the following steps of:
step 1, acquiring a water gauge image, and labeling and frame-selecting targets in the water gauge image one by one according to a preset labeling standard to generate a water gauge data set, wherein the preset labeling standard divides the water gauge into the ruler body and the E-shaped scale patterns, 11 targets in total, labeled waterruler, E, 1E, 2E, 3E, 4E, 5E, 6E, 7E, 8E, and 9E in one-to-one correspondence, the 11 targets being simultaneously divided into 3 classes, wherein waterruler is class 1, E is class 2, and mE is class 3, m being a positive integer from 1 to 9;
step 2, training the water gauge data set based on a pre-training model with target detection capability to generate an inference model;
step 3, acquiring a target water gauge image of the water level to be detected, and preprocessing the target water gauge image;
step 4, inputting the preprocessed target water gauge image into the inference model to obtain a target in the target water gauge image;
step 5, judging whether the picture of the target water gauge image meets a preset standard or not based on the position of the target water gauge in the target water gauge image, generating an abnormal alarm when the picture does not meet the preset standard, and sending the abnormal alarm to a user terminal;
step 6, detecting whether an obstacle shields the target water gauge when the preset standard is met, generating an abnormal alarm when the obstacle shields the target water gauge, and sending the abnormal alarm to the user terminal;
step 7, judging whether the target water gauge is inclined or not based on the inclination angle of the target water gauge when no obstacle shields the target water gauge, generating an abnormal alarm when the inclination angle is larger than or equal to a first preset value, and sending the abnormal alarm to the user terminal;
and 8, when the inclination angle is smaller than a first preset value, taking the sum of the ruler surface data of the target water gauge and the zero point elevation data of the water gauge as the water level height.
2. A computer vision based water gauge identification method as defined in claim 1, wherein in said step 3 said preprocessing comprises converting said target water gauge image to a cv::Mat data type and adjusting the size of said target water gauge image to a preset size.
3. The method for identifying a water gauge based on computer vision according to claim 1, wherein the step 4 further comprises labeling coordinate information of a target in the target water gauge image by a frame coordinate method, and obtaining confidence information of the target in the target water gauge image, wherein the coordinate system takes an upper left corner of a picture of the target water gauge image as a coordinate origin, an axis on the right side of the coordinate origin is a positive half axis of an X axis, and an axis below the coordinate origin is a positive half axis of a Y axis.
4. A method of identifying a water gauge based on computer vision as claimed in claim 3, wherein said step 5 comprises:
step 51, acquiring first coordinate information of the body of the target water gauge;
and 52, respectively calculating the distances between the four edges of the target water gauge body and the image edge of the target water gauge image based on the first coordinate information and the image height and the image width of the target water gauge image, and judging that the image of the target water gauge image meets the preset standard when the distances between the four edges and the image edge of the target water gauge image are all greater than or equal to a second preset value, otherwise, judging that the image of the target water gauge image does not meet the preset standard.
5. The method for identifying a water gauge based on computer vision according to claim 4, wherein the step 6 comprises:
step 61, acquiring the first coordinate information and calculating the coordinate Pos_bm of the bottom center point of the target water gauge, the calculation formula being:
Pos_bm = [(X_lt + X_rb)/2, Y_rb],
wherein X_lt is the X coordinate of the upper-left corner of the lowest target in the target water gauge image, X_rb is the X coordinate of the lower-right corner of the lowest target, and Y_rb is the Y coordinate of the lower-right corner of the lowest target;
step 62, drawing a rectangular area by taking the bottom center point as a center, taking a third preset value as a length value and taking a fourth preset value as a width value, and cutting out the rectangular area to serve as a new target water gauge image;
and 63, inputting the new target water gauge image into a preset shielding detection classification model to detect whether an obstacle shields the target water gauge.
6. A method of identifying a water gauge based on computer vision as claimed in claim 3, wherein said step 7 comprises:
step 71, acquiring all 2 types of targets and all 3 types of targets in the target water gauge image, and acquiring first coordinate information of each 2 types of targets and second coordinate information of each 3 types of targets;
step 72, constructing a set A = ([E_1, Pos_1], [E_2, Pos_2] … [E_n, Pos_n]) based on said all class-2 targets and the first coordinate information, and constructing a set B = ([(i)E, Pos_i], [(i+1)E, Pos_(i+1)] … [(i+p-1)E, Pos_(i+p-1)]) based on said all class-3 targets and the second coordinate information, wherein n is the total number of class-2 targets in the target water gauge image, p is the total number of class-3 targets in the target water gauge image, and i is the number corresponding to the lowest class-3 target in the target water gauge image;
step 73, taking out the Y coordinates of the upper left corners of all the 2 types of targets in the set A, sorting all the 2 types of targets in the set A according to the sequence from big to small based on the Y coordinates of the upper left corners to obtain a new set A, fitting based on the XY coordinates of the upper left corners of all the 2 types of targets newly sorted in the new set A, and calculating to obtain a first fitting slope Ca;
step 74, the XY coordinates of the right lower corners of all 3 types of targets in the set B are taken out, fitting is carried out based on the XY coordinates of the right lower corners, and a second fitting slope Cb is obtained through calculation;
step 75, calculating a slope C and an oblique angle α of the target water gauge based on the first fitting slope Ca and the second fitting slope Cb, where a calculation formula is as follows:
C=(Ca+Cb)/2,
α=arctan(C)。
7. a method of identifying a water gauge based on computer vision as claimed in claim 3, wherein said step 8 comprises:
step 81, obtaining all class-2 targets and all class-3 targets in the target water gauge image, obtaining third coordinate information corresponding to each class-2 target and class-3 target, respectively calculating the height pixel value of each class-2 target and class-3 target based on the third coordinate information, and averaging all obtained height pixel values to obtain the average height D_avg, the height pixel value being calculated as:
D_q = Y_rbq − Y_ltq,
wherein D_q is the height pixel value of the q-th target among all class-2 and class-3 targets, Y_rbq is the lower-right Y coordinate of the q-th target, Y_ltq is the upper-left Y coordinate of the q-th target, and q is a positive integer from 1 to 10;
the average height D_avg being calculated as:
D_avg = (D_1 + D_2 + … + D_Q) / Q,
wherein Q is the total number of all class-2 and class-3 targets;
step 82, sorting all the class 2 targets and class 3 targets according to the Y coordinate value of the lower right corner in the third coordinate information, and taking the complete target closest to the water surface as a first target;
when the first target is a class 3 target, the total height G of the first target and all targets above the first target is:
G=1-K1*0.1,
wherein K1 is a number corresponding to the first target, and G is in meters;
when the first target is a class 2 target, the total height G of the first target and all targets above the first target is:
G=1-K2*0.1+0.05,
wherein K2 is a number corresponding to a category 3 target immediately adjacent to the first target above the first target, and G is in meters;
step 83, obtaining the lower-right Y coordinate Y_rbw of the class-1 target waterruler in the target water gauge image, the height difference ΔH between the first target and the lower edge of the target water gauge being:
ΔH = Y_rbw − Y_rbE,
wherein Y_rbE is the lower-right Y coordinate of the first target, and ΔH is in pixels;
the real-world height corresponding to the height difference ΔH being:
ΔH_real = F × ΔH,
wherein F is the ratio of the real-world height D_real of an E target to said average height D_avg, and ΔH_real is in meters;
step 84, calculating the height J of the part of the target water gauge above the water surface based on the total height G and the real-world height corresponding to the height difference ΔH, the calculation formula being: J = G + ΔH_real;
the scale surface data K of the target water gauge being: K = 1 − J, wherein the height J and the scale surface data K are in meters;
step 85, calculating the water level height M based on the scale surface data K and the water gauge zero-point elevation data, the calculation formula being: M = K + L, wherein L is the water gauge zero-point elevation data corresponding to the preset position when the target water gauge image is captured, and M is in meters.
8. A computer vision based water gauge identification system for implementing a computer vision based water gauge identification method as defined in any one of claims 1-7, comprising: the system comprises a model training module, a target acquisition module, an abnormality judgment module and a water level calculation module;
the model training module acquires a water gauge image and labels and frame-selects targets in the water gauge image one by one according to a preset labeling standard to generate a water gauge data set, wherein the preset labeling standard divides the water gauge into the ruler body and the E-shaped scale patterns, 11 targets in total, labeled waterruler, E, 1E, 2E, 3E, 4E, 5E, 6E, 7E, 8E, and 9E respectively; the water gauge data set is trained based on a pre-training model with target detection capability to generate an inference model;
the target acquisition module acquires a target water gauge image of the water level to be detected, and performs pretreatment on the target water gauge image; inputting the preprocessed target water gauge image into the inference model to obtain a target in the target water gauge image;
the abnormality judging module judges whether the picture of the target water gauge image accords with a preset standard or not based on the position of the target water gauge in the target water gauge image, generates an abnormality alarm when the picture does not accord with the preset standard, and sends the abnormality alarm to a user terminal; detecting whether an obstacle shields the target water gauge when the preset standard is met, generating an abnormal alarm when the obstacle shields the target water gauge, and sending the abnormal alarm to the user terminal; judging whether the target water gauge is inclined or not based on the inclination angle of the target water gauge when no obstacle shields the target water gauge, generating an abnormal alarm when the inclination angle is larger than or equal to a first preset value, and sending the abnormal alarm to the user terminal;
and when the inclination angle is smaller than a first preset value, the water level calculation module takes the sum of the ruler surface data of the target water ruler and the zero point elevation data of the water ruler as the water level height.
9. A computer storage medium storing program instructions, wherein the program instructions, when executed, control a device in which the computer storage medium is located to perform the computer vision-based water gauge identification method of any one of claims 1 to 7.
CN202310568740.4A 2023-05-19 2023-05-19 Water gauge identification method, system and storage medium based on computer vision Pending CN116486212A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310568740.4A CN116486212A (en) 2023-05-19 2023-05-19 Water gauge identification method, system and storage medium based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310568740.4A CN116486212A (en) 2023-05-19 2023-05-19 Water gauge identification method, system and storage medium based on computer vision

Publications (1)

Publication Number Publication Date
CN116486212A true CN116486212A (en) 2023-07-25

Family

ID=87216390

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310568740.4A Pending CN116486212A (en) 2023-05-19 2023-05-19 Water gauge identification method, system and storage medium based on computer vision

Country Status (1)

Country Link
CN (1) CN116486212A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117251943A (en) * 2023-11-20 2023-12-19 力鸿检验集团有限公司 Waterline position fluctuation curve simulation method and device and electronic equipment
CN117251943B (en) * 2023-11-20 2024-02-06 力鸿检验集团有限公司 Waterline position fluctuation curve simulation method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN109443480B (en) Water level scale positioning and water level measuring method based on image processing
CN112766274B (en) Water gauge image water level automatic reading method and system based on Mask RCNN algorithm
CN110414334B (en) Intelligent water quality identification method based on unmanned aerial vehicle inspection
CN108759973B (en) Water level measuring method
US20210374466A1 (en) Water level monitoring method based on cluster partition and scale recognition
CN106557764B A kind of water level recognition method based on binary-coded character water gauge and image processing
CN112818988B (en) Automatic identification reading method and system for pointer instrument
CN112699876B (en) Automatic reading method for various meters of gas collecting station
CN102975826A (en) Portable ship water gauge automatic detection and identification method based on machine vision
CN109376740A (en) A kind of water gauge reading detection method based on video
CN101751572A (en) Pattern detection method, device, equipment and system
CN105447859A (en) Field wheat aphid counting method
CN106971393A (en) The phenotype measuring method and system of a kind of corn kernel
CN116486212A (en) Water gauge identification method, system and storage medium based on computer vision
CN113688817A (en) Instrument identification method and system for automatic inspection
CN117036993A (en) Ship water gauge remote measurement method based on unmanned aerial vehicle
CN110309828B (en) Inclined license plate correction method
CN116740758A (en) Bird image recognition method and system for preventing misjudgment
CN115841633A (en) Power tower and power line associated correction power tower and power line detection method
CN115082776A (en) Electric energy meter automatic detection system and method based on image recognition
CN113902894A (en) Strip type level meter automatic reading identification method based on image processing
CN110059573A (en) Wild ginseng based on image recognition is classified calibration method
CN117037132A (en) Ship water gauge reading detection and identification method based on machine vision
CN115082509B (en) Method for tracking non-feature target
CN116612461A (en) Target detection-based pointer instrument whole-process automatic reading method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination