CN117315670A - Water meter reading area detection method based on computer vision - Google Patents
- Publication number
- CN117315670A (application number CN202311243717.4A)
- Authority
- CN
- China
- Prior art keywords
- water meter
- frame
- reading area
- image
- detection function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/146—Aligning or centring of the image pick-up or image-field
- G06V30/147—Determination of region of interest
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/146—Aligning or centring of the image pick-up or image-field
- G06V30/1475—Inclination or skew detection or correction of characters or of image to be recognised
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/148—Segmentation of character regions
- G06V30/15—Cutting or merging image elements, e.g. region growing, watershed or clustering-based techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/191—Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06V30/19173—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/02—Recognising information on displays, dials, clocks
Abstract
The invention relates to a water meter reading area detection method based on computer vision, belonging to the technical field of image processing and pattern recognition. An image preprocessing operation is performed on the complete water meter image, and angle correction is applied to the tilted water meter image; the reading area in the complete water meter image is segmented by a differential algorithm; the reading area is divided into grids for target detection; the grid of each target digit is recognized, the obtained recognition frame is compared with the real frame to adjust the recognition model, and reading detection is performed on the segmented reading area by the adjusted recognition model to obtain the water meter reading.
Description
Technical Field
The invention belongs to the technical field of image processing and pattern recognition, and particularly relates to a water meter reading area detection method based on computer vision.
Background
Manual water meter reading suffers from a heavy meter-reading workload, low working efficiency and high cost. With breakthroughs in computer vision technology and research in the field of deep learning, a dedicated image acquisition device can capture a real-time monitoring image of the water meter dial and transmit it over a network to the background. After the acquired water meter image undergoes the relevant preprocessing operations using digital image processing technology, the digit characters in the image are recognized by a deep-learning target detection technique to obtain the reading result, which is then stored in text form, completing the remote reading of resident water meters. Compared with the previous manual meter-reading mode, the automatic meter-reading process is rapid and efficient; with the help of big data, technicians can also perform statistical analysis on the collected water consumption data and thereby optimize subsequent water supply schemes.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a water meter reading area detection method based on computer vision, comprising the following steps:
S1, performing an image preprocessing operation on the complete water meter image, and performing tilt correction on the preprocessed water meter image;
S2, segmenting the reading area in the tilt-corrected complete water meter image through a trained recognition model: embedding the target area frame into the recognition model, outputting adjustment parameters for the coordinates of the center point of the target area frame, adjusting the position of the target area frame according to these parameters, and thereby segmenting the reading area in the complete water meter image;
S3, performing reading detection on the segmented reading area to obtain the water meter reading: dividing the reading area into grids, presetting several anchor frames with different aspect ratios for each grid to detect the target, recognizing the grid of each target digit, comparing the obtained recognition frame with the real frame, and adjusting the recognition model.
Further, in step S3, when the center point of a target digit falls in one grid, the anchor frames of the two neighboring grids nearest to that center point also participate in detecting the target:
b_x = 2t_x − 0.5 + c_x;
b_y = 2t_y − 0.5 + c_y;
b_w = p_w × (2t_w)²;
b_h = p_h × (2t_h)²;
where b_x, b_y are the coordinates of the center point of the target digit; b_w, b_h are the width and height of the target digit; c_x, c_y are the upper-left corner coordinates of the grid containing the center point of the target digit; t_x, t_y are the offsets of the center point of the target digit relative to the upper-left corner of the grid; t_w, t_h are the scalings of the width and height of the target digit relative to the width and height of the anchor frame; and p_w, p_h are the width and height of the prior anchor frame.
Further, in step S3, the obtained recognition frame is compared with the real frame using a contrast detection function that measures the difference between the recognized information and the real information.
The contrast detection function comprises a rectangular-frame contrast detection function J_b, a classification contrast detection function J_c and an accuracy contrast detection function J_o, combined as:
J = b_g × J_b + c_g × J_c + o_g × J_o;
where b_g is the weight coefficient of the rectangular-frame contrast detection function, c_g is the weight coefficient of the classification contrast detection function, and o_g is the weight coefficient of the accuracy contrast detection function.
Further, the rectangular-frame contrast detection function J_b is:
J_b = 1 − U + ρ²(b, b_g) / c² + α·v;
where b and b_g denote the center points of the recognition frame and the real frame respectively, ρ²(b, b_g) is the squared Euclidean distance between the two center points, c is the diagonal distance of the minimum enclosing region of the recognition frame and the real frame, α is a weight parameter, v measures the similarity between the width-to-height ratio w_g/h_g of the recognition frame and the width-to-height ratio w/h of the real frame, and U is the overlap degree of the recognition frame and the real frame.
Further, the classification contrast detection function J_c is:
J_c = −[y·log(p) + (1 − y)·log(1 − p)];
where y is the class label corresponding to the input sample and p is the probability that the input sample is a positive sample.
Further, the accuracy contrast detection function J_o is a cross-entropy applied element-wise over matrices:
J_o = −Σ[Y·log(P) + (1 − Y)·log(1 − P)];
where Y is the accuracy label matrix and P is the prediction accuracy matrix.
Further, a differential algorithm is used to acquire the boundary of the reading area in the complete water meter image. Two differential operators are applied in each frame's reading area, one detecting horizontal boundaries and one detecting vertical boundaries:
F_X(x, y) = [F(x+1, y−1) + 2F(x+1, y) + F(x+1, y+1)] − [F(x−1, y−1) + 2F(x−1, y) + F(x−1, y+1)];
F_Y(x, y) = [F(x−1, y+1) + 2F(x, y+1) + F(x+1, y+1)] − [F(x−1, y−1) + 2F(x, y−1) + F(x+1, y−1)];
where F denotes the gray value at a boundary pixel point; X and Y denote the abscissa and ordinate of the boundary pixel point; F_X denotes the gray value of the boundary pixel point in the abscissa direction after boundary detection; and F_Y denotes the gray value of the boundary pixel point in the ordinate direction after boundary detection.
The maximum of the convolution responses in the horizontal and vertical directions at each boundary pixel point of the grayed region is taken as the output value of that pixel point, thereby obtaining the boundary line.
Further, in step S1, scaling of the image is achieved using bilinear interpolation. Given four known points A_11(x_1, y_1), A_12(x_2, y_1), A_21(x_1, y_2), A_22(x_2, y_2), with point B at the center of the four known coordinate points and the coordinates of B assumed to be (x, y), linear interpolation along the horizontal X axis gives:
f(M_1) = [(x_2 − x)/(x_2 − x_1)]·f(A_11) + [(x − x_1)/(x_2 − x_1)]·f(A_12);
f(M_2) = [(x_2 − x)/(x_2 − x_1)]·f(A_21) + [(x − x_1)/(x_2 − x_1)]·f(A_22);
where M_1, M_2 denote the intermediate interpolation points on the segments A_11A_12 and A_21A_22. Keeping the abscissa of B unchanged, with f(M_1) and f(M_2) as the results of linear interpolation along the X axis, linear interpolation along the vertical Y axis then gives:
f(B) = [(y_2 − y)/(y_2 − y_1)]·f(M_1) + [(y − y_1)/(y_2 − y_1)]·f(M_2);
and the result of bilinear interpolation is expressed as:
f(x, y) = [f(A_11)(x_2 − x)(y_2 − y) + f(A_12)(x − x_1)(y_2 − y) + f(A_21)(x_2 − x)(y − y_1) + f(A_22)(x − x_1)(y − y_1)] / [(x_2 − x_1)(y_2 − y_1)].
compared with the prior art, the invention has the following beneficial technical effects:
the image preprocessing operation is carried out on the complete water meter image, so that the interference of external factors on the identification process is reduced; the inclination correction is carried out on the water meter image obtained after pretreatment, so that the subsequent identification work can be carried out more accurately; dividing a reading area in the complete water meter image after inclination correction through a trained recognition model; the method and the device can eliminate interference such as illumination, stains, black background and the like, and accurately identify the target area.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a computer vision based water meter reading area detection method of the present invention;
FIG. 2 is a schematic diagram of a bilinear interpolation method of the present invention for scaling an image;
FIG. 3 is a schematic diagram of the whole character of the water meter;
fig. 4 is a schematic diagram of a half character of a water meter.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
In the drawings of the specific embodiments of the invention, in order to better and more clearly describe the working principle of each element in the system, the connection relationships among the parts of the device are represented only to clearly distinguish the relative positional relationships between elements; they should not be construed as limiting the signal transmission direction, the connection sequence, or the size, dimensions and shape of any part of an element or structure.
Referring to fig. 1, a flow chart of a method for detecting a water meter reading area based on computer vision according to the present invention is shown, and the method comprises the following steps:
s1, performing image preprocessing operation on a complete water meter image, and performing inclination correction on the water meter image obtained after preprocessing.
In order to reduce the interference of external factors with the recognition process, invalid information in the image needs to be filtered out as much as possible while the image information of interest is retained, thereby improving the accuracy of subsequent detection and recognition.
The invention adopts image preprocessing technology to enhance the acquired image of the water meter character-wheel area. First, the input image is normalized and uniformly adjusted to a picture of fixed size; then the image is filtered to remove noise; finally, an equalization operation is performed on the filtered image to complete the image enhancement.
S11, image normalization processing.
The image normalization technique converts the original image into a corresponding unique standard form through a series of transformations. In this embodiment, the original image of the water meter image is normalized by adopting an image scaling manner, so as to obtain an image with a uniform pixel size of 440×150.
Scaling of the image is preferably achieved using bilinear interpolation. As shown in FIG. 2, the four known points are A_11(x_1, y_1), A_12(x_2, y_1), A_21(x_1, y_2) and A_22(x_2, y_2); their distribution in two-dimensional coordinates is shown in FIG. 2.
The bilinear interpolation algorithm performs linear interpolation in one direction at a time. In the figure, point B is located at the center of the four known coordinate points; assuming the coordinates of B are (x, y), linear interpolation along the horizontal X axis gives:
f(M_1) = [(x_2 − x)/(x_2 − x_1)]·f(A_11) + [(x − x_1)/(x_2 − x_1)]·f(A_12);
f(M_2) = [(x_2 − x)/(x_2 − x_1)]·f(A_21) + [(x − x_1)/(x_2 − x_1)]·f(A_22);
where M_1, M_2 denote the intermediate interpolation points on the segments A_11A_12 and A_21A_22. Keeping the abscissa of B unchanged, with f(M_1) and f(M_2) as the results of linear interpolation along the X axis, linear interpolation along the vertical Y axis gives:
f(B) = [(y_2 − y)/(y_2 − y_1)]·f(M_1) + [(y − y_1)/(y_2 − y_1)]·f(M_2);
combining the two formulas above, the result of bilinear interpolation can be expressed as:
f(x, y) = [f(A_11)(x_2 − x)(y_2 − y) + f(A_12)(x − x_1)(y_2 − y) + f(A_21)(x_2 − x)(y − y_1) + f(A_22)(x − x_1)(y − y_1)] / [(x_2 − x_1)(y_2 − y_1)].
In the actual calculation on an image, when adjacent pixels are one unit apart the denominator of the above formula becomes 1, and the value at a pixel point can then be calculated directly from the known information of its neighboring pixel points.
S12, median filtering processing of the image.
The acquired water meter image often contains noise; if subsequent image processing operations are executed without treating it, the desired effect will not be achieved. The acquired water meter image is therefore preprocessed using image filtering technology to remove the noise and improve the quality of the original image. Median filtering is a nonlinear filtering method that can, under certain conditions, filter out the noise in an image while preserving the image's boundary information.
The pixel values around the target pixel are collected within a template frame of fixed size and sorted by gray value, and the median obtained after sorting replaces the pixel value of the target pixel.
Assume a digital signal sequence {f_1, f_2, …, f_N}. To perform median filtering on this sequence, first define a template of odd length n, then extract the n samples f_{i−(n−1)/2}, …, f_i, …, f_{i+(n−1)/2} centered on the i-th sample, and finally sort the extracted sample values in ascending or descending order; the median of the sorted values, g_i, is the output value of the median filter:
g_i = med{ f_{i−(n−1)/2}, …, f_i, …, f_{i+(n−1)/2} }.
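A minimal one-dimensional sketch of the median filtering step (illustrative only; the window length n is assumed odd, and border samples are left unchanged):

```python
def median_filter_1d(signal, n):
    """Odd-length-n median filter over a sequence; edges keep their values."""
    assert n % 2 == 1, "template length must be odd"
    half = (n - 1) // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        window = sorted(signal[i - half:i + half + 1])
        out[i] = window[half]  # the sorted window's middle value g_i
    return out
```

An isolated spike such as the 100 below is replaced by a neighborhood median while the monotone ramp around it is preserved, which is the boundary-preserving behavior described above.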
s13, image equalization processing
Under different illumination intensities, the brightness and definition of the same target image have great difference, so that the problem needs to be solved by adopting an image enhancement technology. The histogram equalization method is adopted in the embodiment, so that the original image is equalized, the dynamic range of gray value difference between pixels is increased, and meanwhile, the image contrast is enhanced.
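The equalization step can be sketched on a flat list of gray values; the CDF-based remapping below is the standard histogram equalization formulation, shown here as an assumption about the method used rather than code from the patent:

```python
def equalize_histogram(pixels, levels=256):
    """Histogram equalization of a flat list of integer gray values."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution of gray values
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    if n == cdf_min:  # constant image: nothing to stretch
        return list(pixels)
    # remap each pixel so the output histogram spans the full gray range
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]
```

A narrow cluster of gray values is stretched across the full 0–255 range, which is exactly the contrast-enhancement effect described above.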
S2, performing inclination correction on the water meter image obtained after pretreatment so as to more accurately perform subsequent identification work.
Find features in the dial that can serve as a horizontal reference, and rotate the picture accordingly to obtain an image in which the dial is exactly level. Observing the structure of the digital water meter image, it can be found that two fixed marker characters are arranged on the dial, located on the left and right sides of the character-wheel area responsible for digit recording: the English character "H" on the left and the unit symbol "m³" on the right. Based on this feature, the invention adopts a template matching method to first identify the two fixed-position marker characters and then correct the image.
The specific correction steps are as follows:
s21, creating inclusion marksThe characters "H" and "m" are noted 3 Reference templates of gray value characteristics, and searching matching items of the templates in the water meter image to be detected by referring to the created reference templates.
Calculating the matching score of the water meter image area to be detected and the reference template, and finding out the matching object with the highest score, wherein the higher the score is, the higher the matching degree is.
And calculating the geometric center coordinates of the successfully matched object, and measuring the included angle between the geometric center connecting line and the horizontal line of the two marked characters.
S22, performing angle correction on the tilted water meter image by geometric transformation according to the measured included angle, so that the character-wheel area becomes horizontal: the image is rotated as a whole by the measured angle, and the center connecting line is rotated by the measured angle at the same time.
Step S21 is then performed again to measure the included angle between the line joining the geometric centers of the two marker characters and the horizontal, and an angle threshold A_T is set. In a preferred embodiment A_T is assigned the value 5; if the re-measured angle lies within [−5, +5], the correction is complete.
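The measure-rotate-remeasure loop of S21/S22 can be sketched on the two marker centers alone; the coordinates, the rotation about the midpoint, and the sign convention are illustrative assumptions, not details from the patent:

```python
import math

ANGLE_THRESHOLD = 5.0  # the threshold A_T from the text, in degrees

def skew_angle(center_left, center_right):
    """Angle in degrees between the marker-center line and the horizontal."""
    dx = center_right[0] - center_left[0]
    dy = center_right[1] - center_left[1]
    return math.degrees(math.atan2(dy, dx))

def rotate_point(p, origin, deg):
    """Rotate point p about origin by deg degrees."""
    a = math.radians(deg)
    x, y = p[0] - origin[0], p[1] - origin[1]
    return (origin[0] + x * math.cos(a) - y * math.sin(a),
            origin[1] + x * math.sin(a) + y * math.cos(a))

def correct_skew(center_left, center_right):
    """One pass: measure the angle, rotate both centers back about their
    midpoint, and report whether the re-measured angle is within +/- A_T."""
    ang = skew_angle(center_left, center_right)
    mid = ((center_left[0] + center_right[0]) / 2,
           (center_left[1] + center_right[1]) / 2)
    left = rotate_point(center_left, mid, -ang)
    right = rotate_point(center_right, mid, -ang)
    done = abs(skew_angle(left, right)) <= ANGLE_THRESHOLD
    return left, right, done
```

In a real pipeline the same rotation would be applied to the whole image; only the centers are rotated here to keep the sketch self-contained.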
S3, dividing the reading area in the corrected water meter image through the trained recognition model. The recognition model has remarkable effect on recognizing the reading area, can eliminate interference such as illumination, stains and black background, and can accurately recognize the target area.
The target area frame is embedded into the recognition model to form a difference adjusting unit comprising 5 adjusting blocks. A feature map x is input; after two downsampling convolution layers of sizes (1×1, 1) and (3×3, 1), a feature map f is output; the attention module produces a difference value from the output feature map f, this difference value is multiplied with the output feature map f as a weighting, and the weighted result is superimposed on the input feature map x; the repetition counts of the 5 adjusting blocks are, in sequence, 1, 2, 8 and 4. The output of the last adjusting block is connected with the output of the deformable convolution layer; the 3rd and 4th adjusting blocks in the recognition model are feature-fused with the output of the deformable convolution layer to generate three feature maps of different scales, 52×52, 26×26 and 13×13. The adjustment parameters of the center-point coordinates (x, y) of the target area frame are predicted on each of the three feature maps, and the position of the target area frame is adjusted according to these parameters, thereby segmenting the reading area in the complete water meter image.
The boundary of the reading area in the complete water meter image is obtained using a differential algorithm, which detects the boundary based on the fact that the weighted gray-level difference of the points adjacent to a pixel above, below, left and right reaches an extremum at the boundary. It has a smoothing effect on noise and provides more accurate boundary direction information.
Two differential operators are applied in each frame's reading area, one detecting horizontal boundaries and one detecting vertical boundaries; both take a weighted average over the pixel positions before differencing, which reduces boundary blurring. The gray values in the horizontal and vertical directions are calculated as follows:
F_X(x, y) = [F(x+1, y−1) + 2F(x+1, y) + F(x+1, y+1)] − [F(x−1, y−1) + 2F(x−1, y) + F(x−1, y+1)];
F_Y(x, y) = [F(x−1, y+1) + 2F(x, y+1) + F(x+1, y+1)] − [F(x−1, y−1) + 2F(x, y−1) + F(x+1, y−1)];
where F denotes the gray value at a boundary pixel point; X and Y denote the abscissa and ordinate of the boundary pixel point; F_X denotes the gray value of the boundary pixel point in the abscissa direction after boundary detection; and F_Y denotes the gray value of the boundary pixel point in the ordinate direction after boundary detection.
Based on the results of the two differential operators, the maximum of the horizontal and vertical responses at each boundary pixel point of the grayed region is taken as the output value of that pixel point, thereby obtaining the boundary line.
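A pointwise sketch of the two weighted-difference operators; the 1-2-1 weighting is the standard Sobel form, assumed here to match the "weighted average then difference" description:

```python
def sobel_response(img, x, y):
    """Horizontal and vertical weighted-difference responses at (x, y) of a
    row-major gray image; returns the larger magnitude, as the text takes
    the maximum of the two directions as the output value."""
    gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]) \
       - (img[y-1][x-1] + 2*img[y][x-1] + img[y+1][x-1])
    gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]) \
       - (img[y-1][x-1] + 2*img[y-1][x] + img[y-1][x+1])
    return max(abs(gx), abs(gy))
```

On a sharp vertical edge the horizontal response dominates while the vertical response stays zero, so the maximum cleanly picks out the boundary.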
S4, performing reading detection on the segmented reading area to obtain the water meter reading. The method comprises the following steps:
s41, dividing grids in a reading area by a detection function of the recognition model, and presetting a plurality of anchor frames with different aspect ratios for each grid to detect a target.
When the center point of the target number falls in one of the grids, anchor frames in two grids near the center point of the target number participate in detecting the target in the grids of the left, upper, right and lower 4 neighborhoods except the grid where the center point is located, and the specific formula is as follows:
b_x = 2t_x − 0.5 + c_x;
b_y = 2t_y − 0.5 + c_y;
b_w = p_w × (2t_w)²;
b_h = p_h × (2t_h)²;
where b_x, b_y are the coordinates of the center point of the target digit; b_w, b_h are the width and height of the target digit; c_x, c_y are the upper-left corner coordinates of the grid containing the center point of the target digit; t_x, t_y are the offsets of the center point of the target digit relative to the upper-left corner of the grid; t_w, t_h are the scalings of the width and height of the target digit relative to the width and height of the anchor frame; and p_w, p_h are the width and height of the prior anchor frame.
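The decoding equations can be applied per anchor as a small helper (illustrative; in practice the t-values would come from sigmoid-activated network outputs, an assumption not stated explicitly here):

```python
def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode predicted offsets into the target-digit box, following
    b_x = 2t_x - 0.5 + c_x, b_w = p_w * (2t_w)^2, and so on."""
    bx = 2 * tx - 0.5 + cx          # center x, offset from grid corner cx
    by = 2 * ty - 0.5 + cy          # center y, offset from grid corner cy
    bw = pw * (2 * tw) ** 2         # width scaled from the prior anchor
    bh = ph * (2 * th) ** 2         # height scaled from the prior anchor
    return bx, by, bw, bh
```

With t-values of 0.5 the box sits at the grid-cell center with exactly the prior anchor's size, which is a convenient sanity check on the formulas.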
S42, after the grid of each target digit is recognized, the obtained recognition frame is compared with the real frame so as to adjust the direction of improvement of the recognition model.
In this embodiment, the obtained recognition frame is compared with the real frame using a contrast detection function, which measures the difference between the recognized information and the real information: the closer the recognized information is to the real information, the smaller the value J of the contrast detection function.
The contrast detection function of the invention mainly comprises three parts: a rectangular-frame contrast detection function J_b, a classification contrast detection function J_c and an accuracy contrast detection function J_o, combined as:
J = b_g × J_b + c_g × J_c + o_g × J_o;
where b_g is the weight coefficient of the rectangular-frame contrast detection function, set to 0.05 in the invention; c_g is the weight coefficient of the classification contrast detection function, set to 0.5; and o_g is the weight coefficient of the accuracy contrast detection function, set to 1.0.
The rectangular-frame contrast detection function uses a measure based on overlap. The overlap degree measures how much the recognition frame and the real frame overlap; if the recognition frame is A and the real frame is B, the overlap degree U is:
U = area(A ∩ B) / area(A ∪ B).
rectangular frame contrast detection function J b The formula is as follows:
;
wherein b and b g Respectively represent the center points of the identification frame and the real frame,representing the Euclidean distance between two center points, c representing the diagonal distance of the minimum closure area of the recognition box and the real box,/o>Is a weight parameter, v is used for measuring the width w of the identification frame g Height h g Similarity with the ratio of the real frame width w to the real frame height h is as follows:
;
;
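A sketch of the rectangular-frame comparison for axis-aligned boxes given as (x1, y1, x2, y2); this mirrors the overlap, center-distance and aspect-ratio terms described above (the CIoU-style formulation is an assumption reconstructed from the symbol descriptions), with helper names chosen for illustration:

```python
import math

def iou(box_a, box_b):
    """Overlap U = area(A & B) / area(A | B) for (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def box_contrast(pred, truth):
    """J_b = 1 - U + rho^2/c^2 + alpha*v for one prediction/truth pair."""
    u = iou(pred, truth)
    # squared Euclidean distance between the two box centers
    pcx, pcy = (pred[0] + pred[2]) / 2, (pred[1] + pred[3]) / 2
    tcx, tcy = (truth[0] + truth[2]) / 2, (truth[1] + truth[3]) / 2
    rho2 = (pcx - tcx) ** 2 + (pcy - tcy) ** 2
    # squared diagonal of the smallest box enclosing both frames
    cx1, cy1 = min(pred[0], truth[0]), min(pred[1], truth[1])
    cx2, cy2 = max(pred[2], truth[2]), max(pred[3], truth[3])
    c2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
    # aspect-ratio similarity term v and its weight alpha
    w, h = pred[2] - pred[0], pred[3] - pred[1]
    wg, hg = truth[2] - truth[0], truth[3] - truth[1]
    v = 4 / math.pi ** 2 * (math.atan(wg / hg) - math.atan(w / h)) ** 2
    alpha = v / ((1 - u) + v) if v > 0 else 0.0
    return 1 - u + rho2 / c2 + alpha * v
```

A perfect match yields 0, and any offset, size or aspect mismatch increases the value, which is the "smaller is closer" behavior the contrast function is meant to have.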
for the classification contrast detection function J c The formula is as follows:
;
wherein y is a class label corresponding to an input sample, the positive sample is 1, the negative sample is 0, and p is the probability that the input sample is a positive sample.
The accuracy contrast detection function J_o is a cross-entropy applied element-wise over matrices:
J_o = −Σ[Y·log(P) + (1 − Y)·log(1 − P)];
where Y is the accuracy label matrix and P is the prediction accuracy matrix.
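The three terms are combined with the stated weights; a minimal sketch, with a single-sample cross-entropy standing in for J_c and the weight defaults taken from the values given above:

```python
import math

def bce(y, p, eps=1e-7):
    """Binary cross-entropy for one sample; p is clipped away from 0 and 1
    so the logarithms stay finite."""
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def total_contrast(j_b, j_c, j_o, b_g=0.05, c_g=0.5, o_g=1.0):
    """Weighted combination J = b_g*J_b + c_g*J_c + o_g*J_o."""
    return b_g * j_b + c_g * j_c + o_g * j_o
```

The 0.05/0.5/1.0 weighting makes the accuracy term dominate, with the rectangular-frame term contributing least, as set out in the embodiment.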
In practical application, water meters fall into two types: electronic digital water meters and character-wheel water meters. For the electronic digital water meter, the processed images are input into the recognition model to learn the ten categories corresponding to the digit characters 0-9; the predictions are then ordered according to the model's prediction results and predicted position sizes, and the recognition result for the digit characters in the character-wheel area of the digital water meter image is finally output, completing the reading recognition task for the whole water meter image.
The character wheel of a character wheel type water meter is a digital gear counter in which mechanical gears carry over by rotation. According to the rotation characteristics of the mechanical character wheel, the characters are divided into two cases, whole characters and half characters, each comprising 10 classes: the whole characters are "0", "1", "2", "3", "4", "5", "6", "7", "8" and "9", as shown in fig. 3, and the half characters are "0-1", "1-2", "2-3", "3-4", "4-5", "5-6", "6-7", "7-8", "8-9" and "9-0", as shown in fig. 4. Whole character detection and recognition are the same as for the electronic digital water meter.
The half character recognition results and the whole character recognition results are combined and sorted according to the model prediction results and the predicted positions, finally yielding the complete character wheel reading.
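The sorting-and-concatenation step can be sketched as follows. The detection tuple format and the rule of resolving a half character to its lower digit are illustrative assumptions, not fixed by the description:

```python
def assemble_reading(detections):
    """Sort digit detections left-to-right by predicted box position and
    concatenate the class labels into the final meter reading.

    Each detection is (x_center, label). Half-character labels such as
    "4-5" are resolved to their lower digit here, which is one simple
    convention; the patent does not fix this rule.
    """
    ordered = sorted(detections, key=lambda d: d[0])
    digits = []
    for _, label in ordered:
        if "-" in label:              # half character, e.g. "4-5"
            label = label.split("-")[0]
        digits.append(label)
    return "".join(digits)
```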
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in, or transmitted via, a computer-readable storage medium. The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), among others.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (8)
1. The water meter reading area detection method based on computer vision is characterized by comprising the following steps of:
s1, performing image preprocessing operation on a complete water meter image;
s2, selecting two marking characters on a dial as a reference template, calculating geometric center coordinates of two successfully matched objects in the preprocessed water meter image, measuring an included angle between a geometric center connecting line of the two objects and a horizontal line, and performing angle correction on the inclined water meter image according to the measured included angle;
s3, dividing a reading area in the complete water meter image after inclination correction through a trained identification model; embedding the target area frame into the identification model, outputting an adjustment parameter of the center point coordinate of the target area frame, adjusting the position of the target area frame according to the adjustment parameter, and dividing a reading area in the complete water meter image by utilizing a differential algorithm;
s4, dividing grids in the reading area, and presetting a plurality of anchor frames with different aspect ratios for each grid to detect a target; and (3) identifying the grids of each target number, comparing the obtained identification frame with the real frame, adjusting the identification model, and carrying out reading detection on the segmented reading area through the adjusted identification model to obtain the reading of the water meter.
2. The method for detecting a water meter reading area according to claim 1, wherein, in step S4,
comparing the obtained identification frame with the real frame by adopting a contrast detection function, and measuring the difference between the identification information and the real information;
the contrast detection function comprises a rectangular frame contrast detection function J_b, a classification contrast detection function J_c, and an accuracy contrast detection function J_o, with the specific formula as follows:
J = b_g × J_b + c_g × J_c + o_g × J_o;
wherein b_g is the weight coefficient of the rectangular frame contrast detection function, c_g is the weight coefficient of the classification contrast detection function, and o_g is the weight coefficient of the accuracy contrast detection function.
3. The method for detecting a water meter reading area according to claim 2, wherein,
rectangular frame contrast detection function J b The formula is as follows:
;
wherein b and b g Respectively represent the identification frame and the real frameThe heart point is a point at which,representing the Euclidean distance between two center points, c representing the diagonal distance of the minimum closure area of the recognition box and the real box,/o>Is a weight parameter, v is used for measuring the width w of the identification frame g Height h g Similarity to the ratio of the width w to the height h of the real frame, and U is the overlapping degree of the identification frame and the real frame.
4. The method for detecting a water meter reading area according to claim 2, wherein the classification contrast detection function J_c is formulated as follows:
J_c = -[y × log(p) + (1 - y) × log(1 - p)];
where y is the class label corresponding to the input sample, and p is the probability that the input sample is a positive sample.
5. The method for detecting a water meter reading area according to claim 2, wherein the accuracy contrast detection function J_o is formulated as follows:
J_o = -Σ[Y × log(P) + (1 - Y) × log(1 - P)];
where Y is the accuracy label matrix and P is the prediction accuracy matrix.
6. The method for detecting a water meter reading area according to claim 1, wherein, in step S3,
the method comprises the steps of obtaining the boundary of a reading area in a complete water meter image by using a differential algorithm, and extracting the differential algorithm in each frame of reading area, wherein the differential algorithm comprises two differential algorithms for respectively detecting a horizontal boundary differential algorithm and a vertical boundary:
;
;
wherein F represents the gray value at the boundary pixel point; x and Y represent the abscissa of the boundary pixel point;representing the gray value of the boundary pixel point in the abscissa direction after boundary detection; />Representing the gray value of the boundary pixel point in the ordinate direction after boundary detection;
and taking the convolution maximum value of gray values in the horizontal direction and the vertical direction of each boundary pixel point on the region after the graying treatment as the output value of the boundary pixel point, thereby obtaining a boundary line.
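A minimal sketch of this two-operator differential boundary detection, assuming first-order gray-value differences over pure-Python lists of lists (the exact operator kernels are an assumption, and the last row and column are left at zero):

```python
def boundary_response(gray):
    """Apply horizontal and vertical gray-value differences to a grayscale
    image (list of rows) and keep the larger absolute response per pixel,
    mirroring the two-operator scheme of the claim."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            dx = abs(gray[y][x + 1] - gray[y][x])  # horizontal difference F_X
            dy = abs(gray[y + 1][x] - gray[y][x])  # vertical difference F_Y
            out[y][x] = max(dx, dy)                # maximum response as output
    return out
```

A strong response then marks a boundary pixel of the reading area.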
7. The method for detecting a water meter reading area according to claim 1, wherein, in step S4,
when the center point of the target number falls within one grid, the anchor frames in the two grids nearest to the center point of the target number also participate in detecting the target:
b_x = 2t_x - 0.5 + c_x;
b_y = 2t_y - 0.5 + c_y;
b_w = p_w × (2t_w)²;
b_h = p_h × (2t_h)²;
wherein b_x, b_y respectively represent the coordinates of the center point of the target number; b_w, b_h respectively represent the width and height of the target number; c_x, c_y respectively represent the upper left corner coordinates of the grid in which the center point of the target number falls; t_x, t_y respectively represent the offsets of the center point of the target number relative to the upper left corner coordinates of the grid; t_w, t_h represent the scaling of the width and height of the target number relative to the width and height of the anchor frame; and p_w, p_h respectively represent the width and height of the prior anchor frame.
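The decoding formulas of this claim can be sketched as follows. The tuple-based interface and the assumption that the raw offsets already lie in [0, 1] (e.g. sigmoid-activated) are illustrative:

```python
def decode_box(t, cell, anchor):
    """Decode raw model outputs (t_x, t_y, t_w, t_h) into a box, following
    the formulas of claim 7. `cell` is the grid's upper-left corner
    (c_x, c_y) and `anchor` is the prior anchor size (p_w, p_h)."""
    t_x, t_y, t_w, t_h = t
    c_x, c_y = cell
    p_w, p_h = anchor
    b_x = 2 * t_x - 0.5 + c_x        # center x relative to the grid corner
    b_y = 2 * t_y - 0.5 + c_y        # center y relative to the grid corner
    b_w = p_w * (2 * t_w) ** 2       # width as a scaling of the anchor width
    b_h = p_h * (2 * t_h) ** 2       # height as a scaling of the anchor height
    return b_x, b_y, b_w, b_h
```

With t = 0.5 throughout, the decoded box sits at the cell center offset and keeps the anchor's size, which is the neutral prediction.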
8. The method according to claim 1, wherein in step S1, scaling of the image is achieved by bilinear interpolation; the points A_11(x_1, y_1), A_12(x_2, y_1), A_21(x_1, y_2) and A_22(x_2, y_2) are known, and point B lies at the center of the four known coordinate points; assuming the coordinates of point B are (x, y), linear interpolation along the horizontal X axis gives the following formulas:
f(M_1) = ((x_2 - x) / (x_2 - x_1)) × f(A_11) + ((x - x_1) / (x_2 - x_1)) × f(A_12);
f(M_2) = ((x_2 - x) / (x_2 - x_1)) × f(A_21) + ((x - x_1) / (x_2 - x_1)) × f(A_22);
wherein M_1 and M_2 represent the interpolation points on the segments A_11A_12 and A_21A_22 sharing the abscissa of point B; the results of the linear interpolation along the horizontal X axis are f(M_1) and f(M_2); linear interpolation is then performed along the vertical Y axis, giving the following formula:
f(B) = ((y_2 - y) / (y_2 - y_1)) × f(M_1) + ((y - y_1) / (y_2 - y_1)) × f(M_2);
which is the result of the bilinear interpolation.
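The two-step interpolation of claim 8 can be sketched as follows. The dictionary-based interface mapping corner coordinates to gray values is an illustrative assumption:

```python
def bilinear(x, y, pts):
    """Bilinear interpolation over four known corner points.

    `pts` maps (xi, yi) -> value for the corners (x1, y1), (x2, y1),
    (x1, y2), (x2, y2), with x1 < x2 and y1 < y2. First interpolate
    along the horizontal X axis at y1 and y2, then along the vertical
    Y axis between the two intermediate results.
    """
    (x1, y1), (x2, y2) = min(pts), max(pts)
    # Interpolation points M_1 (at y1) and M_2 (at y2) along the X axis.
    f_m1 = ((x2 - x) / (x2 - x1)) * pts[(x1, y1)] \
         + ((x - x1) / (x2 - x1)) * pts[(x2, y1)]
    f_m2 = ((x2 - x) / (x2 - x1)) * pts[(x1, y2)] \
         + ((x - x1) / (x2 - x1)) * pts[(x2, y2)]
    # Interpolate along the Y axis between f(M_1) and f(M_2).
    return ((y2 - y) / (y2 - y1)) * f_m1 + ((y - y1) / (y2 - y1)) * f_m2
```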
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311243717.4A CN117315670A (en) | 2023-09-26 | 2023-09-26 | Water meter reading area detection method based on computer vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117315670A true CN117315670A (en) | 2023-12-29 |
Family
ID=89245566
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311243717.4A Pending CN117315670A (en) | 2023-09-26 | 2023-09-26 | Water meter reading area detection method based on computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117315670A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117894032A (en) * | 2024-03-14 | 2024-04-16 | 上海巡智科技有限公司 | Water meter reading identification method, system, electronic equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112200160A (en) * | 2020-12-02 | 2021-01-08 | 成都信息工程大学 | Deep learning-based direct-reading water meter reading identification method |
CN113362330A (en) * | 2021-08-11 | 2021-09-07 | 昆山高新轨道交通智能装备有限公司 | Pantograph cavel real-time detection method, device, computer equipment and storage medium |
CN113390482A (en) * | 2021-06-09 | 2021-09-14 | 徐涛 | Camera direct-reading NB-TOT remote water meter |
CN113902035A (en) * | 2021-11-01 | 2022-01-07 | 桂林电子科技大学 | Omnidirectional and arbitrary digit water meter reading detection and identification method |
CN115082922A (en) * | 2022-08-24 | 2022-09-20 | 济南瑞泉电子有限公司 | Water meter digital picture processing method and system based on deep learning |
CN116343228A (en) * | 2023-03-27 | 2023-06-27 | 上海第二工业大学 | Intelligent reading method and system for water meter |
CN116343223A (en) * | 2023-05-31 | 2023-06-27 | 南京畅洋科技有限公司 | Character wheel type water meter reading method based on deep learning |
CN116612292A (en) * | 2023-05-29 | 2023-08-18 | 吉林大学 | Small target detection method based on deep learning |
Non-Patent Citations (2)
Title |
---|
Liu Guohua, "Machine Vision Technology", Huazhong University of Science and Technology Press, 30 November 2021, page 2 *
Shao Zhenfeng, "Urban Remote Sensing: Principles, Methods and Applications", Wuhan University Press, 28 February 2021, pages 135-137 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||