CN117115802B - Character wheel type water meter digital identification and processing method based on deep learning - Google Patents


Info

Publication number
CN117115802B
CN117115802B (application CN202311385464.4A)
Authority
CN
China
Prior art keywords
character
character frame
frame
word
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311385464.4A
Other languages
Chinese (zh)
Other versions
CN117115802A (en)
Inventor
吴鑫
王醒
左伟
刘学铸
刘鹏
王煦
翟恒涛
张天亮
韩明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHANDONG WEIWEI TECHNOLOGY CO LTD
Original Assignee
SHANDONG WEIWEI TECHNOLOGY CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHANDONG WEIWEI TECHNOLOGY CO LTD
Priority to CN202311385464.4A
Publication of CN117115802A
Application granted
Publication of CN117115802B
Legal status: Active

Classifications

    • G06V 20/63 - Scene text, e.g. street names
    • G06N 3/045 - Combinations of networks
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/764 - Recognition using classification, e.g. of video objects
    • G06V 10/82 - Recognition using neural networks
    • G06V 2201/02 - Recognising information on displays, dials, clocks


Abstract

The application discloses a character wheel type water meter digital identification and processing method based on deep learning. The method receives a water meter picture sent by a client; preprocesses the picture to obtain a standardized water meter input picture; processes the input picture with a frame point identification model to obtain a frame point identification result; processes that result according to the principle of frame point consistency and the principle that one water meter picture has only one reading area, extracting a reading area input picture in which the digital characters are horizontal and upright; processes the reading area input picture with a reading identification model to obtain digital character frames and auxiliary character frames; processes the recognized digital character frames and auxiliary character frames according to the non-maximum suppression method, the auxiliary character frame categories and the carry rules to obtain the final reading recognition result; and returns the reading recognition result to the client.

Description

Character wheel type water meter digital identification and processing method based on deep learning
Technical Field
The invention relates to the technical field of character wheel type meter reading identification and deep learning, in particular to a character wheel type water meter digital identification and processing method based on deep learning.
Background
At present, deep learning is widely applied in many fields, and it has seen some application and results in reading recognition for character wheel type instruments. Character wheel type instruments are generally traditional mechanical meters that require manual meter reading, which is time-consuming and laborious. By simply retrofitting a character wheel type instrument with a camera-based remote transmission device and combining it with deep learning and other intelligent recognition technologies, the traditional character wheel type instrument can be made intelligent.
Prior-art digital identification methods for character wheel type water meters require some prior knowledge, for example that the digital characters in the picture are horizontal and upright. This undoubtedly increases the difficulty of installing the camera-based remote transmission device, and if the water meter is installed in a narrow space, horizontal upright digital characters may be impossible to achieve. Other recognition methods adopt deep learning semantic segmentation, but horizontally aligning the digital character reading area then requires relatively complex post-processing operations, and the data sets needed to train a semantic segmentation neural network are difficult to produce. Methods that horizontally align the reading area with the traditional Hough transform line detection algorithm place relatively high demands on picture quality and are not robust enough.
A character wheel type meter completes its measurement through gears that drive the wheels to rotate, so the character visible on a wheel may not be a complete digital character. Such an incomplete character contains parts of two digital characters and is commonly called a half-word character. OCR technology is therefore hard-pressed in the field of character wheel instrument recognition; digital character recognition is generally performed with a deep neural network or a traditional image processing algorithm, and the digital characters are generally divided into 20 categories: 10 complete digital character categories and 10 half-word character categories. However, this only identifies single characters. When forming the final reading sequence, one of the two incomplete digits in a half-word character must be chosen as the digit of the sequence, and a random choice is likely to be wrong. Moreover, complete digital characters and half-word characters occur in unbalanced numbers in practice (complete digital characters are far more common), so the half-word recognition rate may be lower than that of complete digital characters. In addition, using a CRNN network for reading recognition still suffers reduced accuracy because of the presence of half-word characters.
Disclosure of Invention
Aiming at these defects, the invention provides a character wheel type water meter digital identification and processing method based on deep learning. It horizontally aligns the digital character reading area, correctly selects the appropriate digit from the two incomplete digits of each half-word character to form the final reading, overcomes to a certain extent the practical imbalance between the numbers of complete digital characters and half-word characters, and can accurately identify the water meter digits with high recognition accuracy.
In order to solve the technical problems, the invention adopts the following technical scheme:
A character wheel type water meter digital identification and processing method based on deep learning comprises the following steps:
step S1: receiving a water meter picture sent by a client;
step S2: preprocessing a water meter picture to obtain a standardized water meter input picture;
step S3: processing the water meter input picture by adopting a frame point identification model to obtain a frame point identification result;
step S4: according to the principle of consistency of frame points and the principle that one water meter picture has only one reading area, processing a frame point identification result, and extracting a reading area input picture with a digital character in a horizontal forward direction;
step S5: processing the input picture of the reading area by adopting a reading identification model, and identifying to obtain a digital character frame and an auxiliary character frame;
step S6: according to the non-maximum value suppression method, the auxiliary character frame category and the carry rule, processing the digital character frame and the auxiliary character frame which are obtained through recognition, and obtaining a final reading recognition result;
step S7: and returning the reading identification result to the client.
Further, in step S3 the frame point identification model takes the yolov5 network as its main body and modifies the head network so that it detects not only the horizontal target frame of the reading area but also the four vertices of the reading area. Specifically, the number of channels of the yolov5 output feature map is 13 = (4 + 1 + 8), where "13" is the dimension of the prediction vector p; "4" corresponds to the horizontal target frame (x, y, w, h) of the reading area, with (x, y) the coordinates of the frame's center point and (w, h) the frame's horizontal and vertical side lengths about that center point; "1" corresponds to the object confidence c of the horizontal target frame in the prediction vector p; and "8" corresponds to the four vertices (x1, y1), (x2, y2), (x3, y3), (x4, y4) of the reading area, where, with the reading area in the horizontal state, (x1, y1) is the upper-left point, (x2, y2) the upper-right point, (x3, y3) the lower-left point and (x4, y4) the lower-right point.
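To make the 13-channel layout concrete, the following minimal Python sketch decodes one prediction vector into the fields listed above. The function name and dictionary keys are illustrative assumptions; the patent does not publish code.

```python
def decode_prediction(p):
    """Split one 13-dimensional prediction vector into the horizontal target
    frame (x, y, w, h), the object confidence c, and the four reading-area
    vertices (upper-left, upper-right, lower-left, lower-right)."""
    assert len(p) == 13  # 13 = 4 (frame) + 1 (confidence) + 8 (four vertices)
    x, y, w, h = p[0:4]  # center point coordinates and side lengths
    c = p[4]             # object confidence
    verts = [(p[5 + 2 * i], p[6 + 2 * i]) for i in range(4)]
    return {"box": (x, y, w, h), "conf": c, "vertices": verts}

# a synthetic, already-horizontal example on a 320 x 320 picture
example = decode_prediction([160, 160, 200, 40, 0.9,
                             60, 140, 260, 140, 60, 180, 260, 180])
```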
Further, the principle of frame point consistency in step S4 means that the horizontal target frame of the reading area and the horizontal envelope frame B of its four vertices should be the same frame; their consistency is evaluated with the IOU. Step S4 specifically includes the following steps:
step S41: set a frame point consistency threshold T_iou and an object confidence threshold T_c;
step S42: filter out every prediction vector p output by the frame point identification model whose object confidence c is less than the object confidence threshold T_c;
step S43: for every prediction vector p remaining after step S42, calculate the consistency IOU between its horizontal target frame (x, y, w, h) and the horizontal envelope frame B of its four vertices, then filter out every p whose consistency IOU is less than the consistency threshold T_iou;
step S44: on the basis of step S43, since one water meter picture has only one reading area, keep only the prediction vector p with the maximum object confidence c;
step S45: from the four vertices (x1, y1), (x2, y2), (x3, y3), (x4, y4) of the prediction vector retained in step S44, calculate the inclination angle theta of the reading area, e.g. from the top edge as theta = arctan((y2 - y1) / (x2 - x1));
step S46: from the inclination angle theta of step S45, calculate the affine matrix M that rotates the picture about its center (160, 160), where 160 is obtained from the picture side length as 320/2, rotate the picture into the horizontal state, and apply M to the four vertices to obtain the horizontally corrected vertices (x1', y1'), (x2', y2'), (x3', y3'), (x4', y4');
step S47: calculate the horizontal envelope frame B' = (x_min, y_min, x_max, y_max) of the corrected four vertices, where (x_min, y_min) is the upper-left point of B' and (x_max, y_max) is its lower-right point, and then extract this region from the horizontally aligned picture as the reading area input picture.
Further, the digital character frames in step S5 are divided into two main types: complete digital character frames and half-word character frames. The complete digital character frames have 10 categories, namely 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9; the half-word character frames have 10 categories, namely 01, 12, 23, 34, 45, 56, 67, 78, 89 and 90. The former digit of a half-word character frame's category name is called the upper word of the half-word character frame, and the latter digit is called its lower word.
The auxiliary character frames are divided into 3 categories, namely u, d and h, where u indicates that the upper digital character of the half-word occupies the larger proportion, d indicates that the lower digital character occupies the larger proportion, and h indicates that the upper and lower digital characters occupy nearly equal proportions. Each half-word character frame matches one auxiliary character frame.
Further, in step S5 the reading identification model takes the yolov5 network as its main body, sharing one backbone network while splitting into 3 neck networks and 3 head networks, used respectively for detecting the complete digital character frames, the half-word character frames and the auxiliary character frames.
Each character frame corresponds to a six-dimensional vector (x, y, w, h, c, cls), where x is the abscissa of the character frame's center point, y is the ordinate of its center point, w is its length along the horizontal axis, h is its length along the vertical axis, c is the character frame's confidence, and cls is the character frame's category.
Further, the carry rule in step S6 specifically includes the following rules:
the reading is processed from right to left, and carries propagate from right to left;
when the water meter character wheel rotates forward, the character to the right of a half-word character is the half-word character 90;
when the water meter character wheel rotates forward, the rightmost digit of the water meter reading can be regarded as always being in the carry state.
Further, the step S6 specifically includes the following steps:
step S61: first, preliminarily process the prediction results of the reading identification model (the complete digital character frames, half-word character frames and auxiliary character frames) with the non-maximum suppression method, removing redundant frames and low-confidence frames;
step S62: on the basis of the step S61, all character frames are arranged from large to small according to the abscissa of the central point, namely from right to left, so as to form an ordered character frame list;
step S63: based on the step S62, all the character frames are grouped according to the distance on the horizontal axis, and the grouping rule is that if the distance on the horizontal axis of two adjacent character frames in the ordered character frame list is smaller than 16, the two character frames are grouped into the same group, and the horizontal abscissa of the former group of character frames is larger than the latter group of character frames, namely, the character frame groups are orderly arranged; theoretically, each group of character frames only contains one complete digital character frame, one half character frame and one auxiliary character frame at most;
step S64: based on step S63, each character frame group is cleaned according to the cleaning rule, so that each character frame group contains either only one complete digital character frame or one half-word character frame and one auxiliary word character frame;
Step S65: on the basis of the step S64, traversing each character frame group from the beginning, and distributing two flag bits to each character frame group according to a flag bit construction rule, wherein one flag bit is called a lower word flag bit, and the other flag bit is called a 90 carry flag bit;
step S66: on the basis of step S65, the representative digits of each character frame group are orderly determined according to the rule of the selected word, so as to obtain a group of ordered digits, and then the group of ordered digits are arranged in reverse order, so that the final recognition result can be obtained.
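Steps S62 and S63 amount to a right-to-left sort followed by a proximity grouping. A minimal sketch, assuming each character frame is a dict carrying its center abscissa under the key "x" (the 16-pixel gap is the value stated in step S63):

```python
def group_character_frames(frames, gap=16):
    """S62: order frames by center abscissa, largest first (right to left).
    S63: adjacent frames closer than `gap` on the horizontal axis fall into
    the same group (a half-word frame shares a column with its auxiliary
    frame), producing an ordered list of character frame groups."""
    ordered = sorted(frames, key=lambda f: f["x"], reverse=True)
    groups = []
    for f in ordered:
        if groups and abs(groups[-1][-1]["x"] - f["x"]) < gap:
            groups[-1].append(f)
        else:
            groups.append([f])
    return groups
```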
Further, the cleaning rules in step S64 specifically include the following rules:
if the character frame group only contains complete digital character frames, no processing is performed;
if the character frame group only contains half character frames, constructing a d auxiliary character frame, and adding the constructed auxiliary character frame into the character frame group;
if the character frame group only contains the auxiliary character frame, removing the group;
if the character frame group only contains complete digital character frames and auxiliary character frames, removing the auxiliary character frames in the character frame group;
if the character frame group only contains a complete digital character frame and a half-word character frame, and the confidence coefficient of the complete digital character frame in the character frame group is larger than that of the half-word character frame, removing the half-word character frame in the character frame group;
If the character frame group only contains a complete digital character frame and a half-word character frame, and the confidence coefficient of the complete digital character frame is smaller than that of the half-word character frame and the complete digital character frame class is contained in the half-word character frame class, constructing a u auxiliary character frame or a d auxiliary character frame according to whether the complete digital character class is an upper word or a lower word of the half-word character frame class, adding the constructed auxiliary character frame into the character frame group, and then removing the complete digital character frame;
if the character frame group only contains a complete digital character frame and a half-word character frame, and the confidence of the complete digital character frame is smaller than that of the half-word character frame and the complete digital character frame class is not contained in the half-word character frame class, removing the half-word character frame;
if the character frame group contains a complete digital character frame, a half-word character frame and an auxiliary word character frame, and the confidence coefficient of the complete digital character frame is larger than that of the half-word character frame, removing the half-word character frame and the auxiliary word character frame;
if the character frame group contains a complete digital character frame, a half-word character frame and an auxiliary character frame, and the complete digital character frame confidence is smaller than the half-word character frame confidence and the complete digital character frame class is contained in the half-word character frame class, resetting the auxiliary character frame to be a u auxiliary character frame or a d auxiliary character frame according to whether the complete digital character class is an upper word or a lower word of the half-word character frame class, and removing the complete digital character frame;
If the character frame group contains complete digital character frames, half character frames and auxiliary character frames, the confidence of the complete digital character frames is smaller than that of the half character frames, and the complete digital character frames are not contained in the half character frame categories, the half character frames and the auxiliary character frames are removed.
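The cleaning rules above can be condensed into a single function. The sketch below assumes each frame is a dict with a "kind" field ("digit", "half" or "aux"), a "cls" field (e.g. "5", "89", "d") and a "conf" field; these names are illustrative, not from the patent.

```python
def clean_group(group):
    """Condensed sketch of the step S64 cleaning rules: returns either
    [complete_digit_frame] or [half_word_frame, auxiliary_frame], or None
    when the group must be dropped."""
    digit = next((f for f in group if f["kind"] == "digit"), None)
    half = next((f for f in group if f["kind"] == "half"), None)
    aux = next((f for f in group if f["kind"] == "aux"), None)
    if digit is None and half is None:
        return None                          # a lone auxiliary frame: drop the group
    if half is None:
        return [digit]                       # any stray auxiliary frame is removed
    if digit is None:
        # a half-word frame with no auxiliary frame gets a constructed "d" frame
        aux = aux or {"kind": "aux", "cls": "d", "conf": half["conf"]}
        return [half, aux]
    if digit["conf"] > half["conf"]:
        return [digit]                       # trust the complete digit frame
    upper, lower = half["cls"][0], half["cls"][1]
    if digit["cls"] == upper:                # digit is the half-word's upper word
        return [half, {"kind": "aux", "cls": "u", "conf": half["conf"]}]
    if digit["cls"] == lower:                # digit is the half-word's lower word
        return [half, {"kind": "aux", "cls": "d", "conf": half["conf"]}]
    return [digit]                           # digit class not contained in the half-word class
```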
Further, the flag bit construction rule in step S65 includes:
the flag bit comprises two values, one true and one false, and the value of the flag bit can only be one of true or false;
the value of the lower word flag bit of the character frame group is true, which means that the character frame group only contains one complete digital character frame or the category of the auxiliary character frame contained in the complete digital character frame is d;
the 90 carry bit flag bit of the character frame group is true, which means that all character frame groups in front of the character frame group contain half character frames with the category of 90; meanwhile, the 90 carry flag bit of the first character frame group is set to true.
Further, the word selection rules in the step S66 specifically include the following rules:
rule 1: if the character frame group only contains one complete digital character frame, the number represented by the character frame group is the complete digital character category of the complete digital character frame;
rule 2: if the lower word flag bits of all the character frames of the character frame group are true, the character frame group with the last 90 carry flag bit being true is the first character frame group, and the category of the first character frame group is 90, the number represented by the character frame group containing the half-word character frame is the lower word of the half-word character frame;
Rule 3: if the lower word flag bits of all the character frames of the character frame group are true, the character frame group with the true last group of 90 carry flag bits is not the first character frame group, and the character frame group with the true last group of 90 carry flag bits only contains one complete number character frame, the number represented by the character frame group containing half-word character frames is the lower word of the half-word character frame;
rule 4: if the 90 carry bit flag bit of the character frame group containing the half character frame is true, the number represented by the character frame group containing the half character frame is the upper word of the half character frame;
rule 5: if the auxiliary character frame category of the character frame group containing the half character frame is not d, the number represented by the character frame group containing the half character frame is the upper word of the half character frame;
rule 6: the number represented by the character frame group containing the half character frame is the lower word of the half character frame;
Rules 1 through 6 are checked in order against each processed character frame group, and exactly one rule applies to any given group.
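The following is a condensed sketch of how the word selection rules resolve a final reading. It implements rules 1, 4, 5 and 6 directly, folding the 90-carry flag bit of step S65 into a running boolean; the global corner cases of rules 2 and 3 are omitted for brevity, and the group representation (each group is [digit_frame] or [half_frame, aux_frame], ordered right to left) is an assumption.

```python
def resolve_reading(groups):
    """Pick one digit per cleaned character frame group, right to left,
    then reverse into the left-to-right reading (step S66)."""
    digits = []
    all_right_90 = True                      # 90-carry flag; true for the first group
    for g in groups:
        if len(g) == 1:                      # rule 1: a complete digit frame
            digits.append(g[0]["cls"])
        else:
            half, aux = g
            upper, lower = half["cls"][0], half["cls"][1]
            if all_right_90 or aux["cls"] != "d":   # rules 4 and 5: upper word
                digits.append(upper)
            else:                                    # rule 6: lower word
                digits.append(lower)
        if not (len(g) == 2 and g[0]["cls"] == "90"):
            all_right_90 = False             # the carry chain of 90 half-words breaks
    return "".join(reversed(digits))
```

For instance, wheels showing (right to left) half-word 90, half-word 90, half-word 23 and digit 1 carry through to the reading 1299.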
Compared with the prior art, the invention has the following technical effects:
In the aspect of picture inclination, yolov5 is used as the network main body with four-vertex recognition of the reading area added, and horizontal alignment of the picture is completed through the four recognized vertices. Recognition is fast and the alignment effect is good; the digital characters in the picture do not need to be horizontally upright, so the client can photograph at any horizontal rotation angle. In particular, the method can be used with a camera-based remote transmission device, which may then be installed at any horizontal rotation angle.
In the aspect of reading identification, yolov5 is used as the network main body with a shared backbone network, and three branches respectively detect the complete digital character frames, the half-word character frames and the auxiliary character frames. This overcomes, to a certain extent, the practical imbalance between the numbers of complete digital characters and half-word characters, improves the recognition accuracy of half-word characters, and effectively recognizes the auxiliary character frames. Finally, the output of the reading identification model is processed through the various rules so that the correct digital character is selected from the two incomplete characters of each half-word character to form the meter reading. The half-word problem is thus handled effectively, the water meter digits are accurately identified, and the recognition accuracy is high.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. Like elements or portions are generally identified by like reference numerals throughout the several figures. In the drawings, elements or portions thereof are not necessarily drawn to scale.
FIG. 1 is a schematic flow chart of the method according to the present invention disclosed in the embodiment of the present invention;
FIG. 2 is a block diagram of the frame point identification model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a frame point recognition model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a reading identification model disclosed in an embodiment of the present invention;
FIG. 5 is a schematic flow chart of constructing a final recognition result according to an embodiment of the present invention;
fig. 6 is a diagram illustrating the results of various processes according to an example disclosed in an embodiment of the present invention.
Detailed Description
An embodiment, as shown in fig. 1, is a character wheel type water meter digital identification and processing method based on deep learning, comprising the following steps:
step S1: receiving a water meter picture sent by a client;
step S2: preprocessing a water meter picture to obtain a standardized water meter input picture;
step S3: processing the water meter input picture by adopting a frame point identification model to obtain a frame point identification result;
step S4: according to the principle of consistency of frame points and the principle that one water meter picture has only one reading area, processing a frame point identification result, and extracting a reading area input picture with a digital character in a horizontal forward direction;
step S5: processing the input picture of the reading area by adopting a reading identification model, and identifying to obtain a digital character frame and an auxiliary character frame;
step S6: according to the non-maximum value suppression method, the auxiliary character frame category and the carry rule, processing the digital character frame and the auxiliary character frame which are obtained through recognition, and obtaining a final reading recognition result;
Step S7: and returning the reading identification result to the client.
In this embodiment, the client in steps S1 and S7 is a software or hardware device with an Internet connection function, such as an intelligent water service management system or a mobile phone app, that can send the water meter picture to be identified to the identification server through an Internet interface and receive the identification result returned from the server.
The step S2 specifically includes the following steps:
step S21: adjust the resolution of the water meter picture received from step S1 so that its longer side is scaled to 320 pixels and its shorter side is scaled by the same ratio;
step S22: fill grey pixels on both sides of the short side of the picture adjusted in step S21 so that this side also reaches 320 pixels;
step S23: divide the pixel values of the picture adjusted in step S22 by 255 to normalize them to [0, 1].
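Steps S21 to S23 can be sketched with NumPy alone. The grey value 114 and the centered padding are assumptions (the patent only says grey pixels on both sides of the short side), and the nearest-neighbour resize stands in for whatever resizer the implementation actually uses.

```python
import numpy as np

def preprocess(img, size=320, fill=114):
    """Letterbox an H x W x 3 uint8 picture to size x size and normalize:
    scale the longer side to `size` (S21), pad the shorter side with grey
    pixels (S22), divide by 255 (S23)."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = round(h * scale), round(w * scale)
    # nearest-neighbour resize via index sampling
    ys = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[ys][:, xs]
    canvas = np.full((size, size, 3), fill, dtype=np.uint8)
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    return canvas.astype(np.float32) / 255.0
```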
As shown in fig. 2 and 3, in step S3 the frame point identification model takes the yolov5 network as its main body and modifies the head network so that it detects not only the horizontal target frame of the reading area but also the four vertices of the reading area. Specifically, the number of channels of the yolov5 output feature map is 13 = (4 + 1 + 8), where "13" is the dimension of the prediction vector p; "4" corresponds to the horizontal target frame (x, y, w, h) of the reading area, with (x, y) the coordinates of the frame's center point and (w, h) the frame's horizontal and vertical side lengths about that center point; "1" corresponds to the object confidence c of the horizontal target frame in the prediction vector p; and "8" corresponds to the four vertices (x1, y1), (x2, y2), (x3, y3), (x4, y4) of the reading area, where, with the reading area in the horizontal state, (x1, y1) is the upper-left point, (x2, y2) the upper-right point, (x3, y3) the lower-left point and (x4, y4) the lower-right point.
In this embodiment, the principle of frame point consistency in step S4 means that the horizontal target frame of the reading area and the horizontal envelope frame of its four vertexes should be the same box, the consistency being evaluated with the IOU; the step S4 specifically further comprises the following steps:
step S41: setting a frame point consistency threshold T_iou and an object confidence threshold T_conf;
step S42: filtering out, from all prediction vectors P output by the frame point recognition model, those whose object confidence c is less than the object confidence threshold T_conf;
step S43: calculating, for all prediction vectors P remaining after step S42, the consistency IOU between the horizontal target frame (x, y, w, h) and the horizontal envelope frame of the four vertexes, and filtering out the prediction vectors P whose consistency IOU is less than the consistency threshold T_iou;
step S44: on the basis of step S43, since one water meter picture has only one reading area, keeping only the prediction vector P with the maximum object confidence c;
step S45: calculating the inclination angle θ of the reading area from the four vertexes (x1, y1), (x2, y2), (x3, y3), (x4, y4) of the prediction vector P retained in step S44;
step S46: calculating, from the inclination angle θ of step S45, the affine matrix M for picture rotation about the picture center (160, 160), 160 being half of the picture side length 320, horizontally correcting the picture, and calculating the four vertexes of the reading area in the horizontally corrected picture;
step S47: calculating the horizontal envelope frame of the four vertexes, with (x_min, y_min) the upper-left point and (x_max, y_max) the lower-right point of the envelope frame, and then extracting this region from the horizontally corrected picture as the reading area input picture.
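Steps S41 to S47 can be sketched as follows. This is a hypothetical NumPy rendering: the IoU and envelope computations follow the description, while the tilt-angle formula (atan2 of the top edge) and the rotation-matrix convention (matching OpenCV's `getRotationMatrix2D`) are plausible choices, not taken from the patent.

```python
import numpy as np

def vertex_envelope(verts):
    """Horizontal envelope box of the four vertexes as (x_min, y_min, x_max, y_max)."""
    return verts[:, 0].min(), verts[:, 1].min(), verts[:, 0].max(), verts[:, 1].max()

def xywh_to_xyxy(box):
    """Convert a center-format box (x, y, w, h) to corner format."""
    x, y, w, h = box
    return x - w / 2, y - h / 2, x + w / 2, y + h / 2

def iou(a, b):
    """IoU between two (x_min, y_min, x_max, y_max) boxes (consistency measure, S43)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def tilt_angle(verts):
    """Inclination of the top edge (upper-left -> upper-right vertex), in degrees.
    The patent's exact formula is not reproduced; atan2 of the top edge is
    one natural choice (step S45)."""
    (x1, y1), (x2, y2) = verts[0], verts[1]
    return np.degrees(np.arctan2(y2 - y1, x2 - x1))

def rotation_matrix(theta_deg, center=(160.0, 160.0)):
    """2x3 affine matrix for rotation by theta about the picture center
    (160 = 320 / 2), following the cv2.getRotationMatrix2D convention (S46)."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    cx, cy = center
    return np.array([[c,  s, (1 - c) * cx - s * cy],
                     [-s, c, s * cx + (1 - c) * cy]])
```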
the step S5 specifically includes:
the digital character frames are divided into two main types, namely complete digital character frames and half-word character frames; the complete digital character frames comprise 10 categories, namely 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9; the half-word character frames comprise 10 categories, namely 01, 12, 23, 34, 45, 56, 67, 78, 89 and 90. The former digit of a half-word character frame category name is called the upper word of the half-word character frame, and the latter digit is called the lower word; for example, the upper word of 01 is 0 and the lower word is 1.
The auxiliary word character frames are divided into 3 categories, namely u, d and h, where u indicates that the upper digital character of the half word occupies the larger proportion, d indicates that the lower digital character occupies the larger proportion, and h indicates that the upper and lower digital characters occupy nearly equal proportions; each half-word character frame theoretically matches one auxiliary word character frame.
As shown in fig. 4, the reading identification model takes the yolov5 network as a main body, sharing one backbone network and branching into 3 neck networks and 3 head networks, which respectively detect complete digital character frames, half-word character frames and auxiliary word character frames.
Each character frame corresponds to a six-dimensional vector (x, y, w, h, c, cls), where x represents the center point abscissa of the character frame, y the center point ordinate, w the horizontal-axis length, h the vertical-axis length, c the confidence of the character frame, and cls the category of the character frame.
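One convenient representation of this six-dimensional character frame vector is a small dataclass; the class and field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class CharBox:
    """Six-dimensional character-frame vector (x, y, w, h, c, cls)."""
    x: float    # center point abscissa
    y: float    # center point ordinate
    w: float    # horizontal-axis length
    h: float    # vertical-axis length
    c: float    # confidence
    cls: str    # category: '0'..'9', '01'..'90', or 'u' / 'd' / 'h'
```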
The carry rule in the step S6 specifically comprises the following rules:
the reading is read from right to left, and carry propagates from right to left;
when the water meter character wheel rotates forward, the character to the right of a half word is the half word 90;
when the water meter character wheel rotates forward, the rightmost bit of the water meter reading is regarded as always being in a carry state.
As shown in fig. 5, the step S6 specifically includes the following steps:
step S61: first, preliminarily processing the prediction results of the reading identification model (complete digital character frames, half-word character frames and auxiliary word character frames) with the non-maximum suppression method, removing redundant frames and low-confidence frames;
step S62: on the basis of the step S61, all character frames are arranged from large to small according to the abscissa of the central point, namely from right to left, so as to form an ordered character frame list;
step S63: based on the step S62, all the character frames are grouped according to the distance on the horizontal axis, and the grouping rule is that if the distance on the horizontal axis of two adjacent character frames in the ordered character frame list is smaller than 16, the two character frames are grouped into the same group, and the horizontal abscissa of the former group of character frames is larger than the latter group of character frames, namely, the character frame groups are orderly arranged; theoretically, each group of character frames only contains one complete digital character frame, one half character frame and one auxiliary character frame at most;
step S64: based on step S63, each character frame group is cleaned according to the cleaning rule, so that each character frame group contains either only one complete digital character frame or one half-word character frame and one auxiliary word character frame;
Step S65: on the basis of the step S64, traversing each character frame group from the beginning, and distributing two flag bits to each character frame group according to a flag bit construction rule, wherein one flag bit is called a lower word flag bit, and the other flag bit is called a 90 carry flag bit;
step S66: on the basis of step S65, the representative digits of each character frame group are orderly determined according to the rule of the selected word, so as to obtain a group of ordered digits, and then the group of ordered digits are arranged in reverse order, so that the final recognition result can be obtained.
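The ordering and grouping of steps S62 and S63 above can be sketched as follows; representing boxes as dicts with a center abscissa "x" is an assumption, while the right-to-left ordering and the 16-pixel gap threshold come from the description.

```python
def group_boxes(boxes, gap: float = 16.0):
    """Steps S62-S63: sort character frames right-to-left by center-point
    abscissa, then place two adjacent frames in the same group when their
    horizontal-axis distance is below `gap` (16 pixels in the description)."""
    ordered = sorted(boxes, key=lambda b: b["x"], reverse=True)   # step S62
    groups = []
    for b in ordered:
        if groups and groups[-1][-1]["x"] - b["x"] < gap:
            groups[-1].append(b)       # same digit position
        else:
            groups.append([b])         # next digit position, further left
    return groups
```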
The cleaning rules in step S64 specifically include the following rules:
if the character frame group only contains complete digital character frames, no processing is performed;
if the character frame group only contains half character frames, constructing a d auxiliary character frame, and adding the constructed auxiliary character frame into the character frame group;
if the character frame group only contains the auxiliary character frame, removing the group;
if the character frame group only contains complete digital character frames and auxiliary character frames, removing the auxiliary character frames in the character frame group;
if the character frame group only contains a complete digital character frame and a half-word character frame, and the confidence coefficient of the complete digital character frame in the character frame group is larger than that of the half-word character frame, removing the half-word character frame in the character frame group;
If the character frame group only contains a complete digital character frame and a half-word character frame, and the confidence coefficient of the complete digital character frame is smaller than that of the half-word character frame and the complete digital character frame class is contained in the half-word character frame class, constructing a u auxiliary character frame or a d auxiliary character frame according to whether the complete digital character class is an upper word or a lower word of the half-word character frame class, adding the constructed auxiliary character frame into the character frame group, and then removing the complete digital character frame;
if the character frame group only contains a complete digital character frame and a half-word character frame, and the confidence of the complete digital character frame is smaller than that of the half-word character frame and the complete digital character frame class is not contained in the half-word character frame class, removing the half-word character frame;
if the character frame group contains a complete digital character frame, a half-word character frame and an auxiliary word character frame, and the confidence coefficient of the complete digital character frame is larger than that of the half-word character frame, removing the half-word character frame and the auxiliary word character frame;
if the character frame group contains a complete digital character frame, a half-word character frame and an auxiliary character frame, and the complete digital character frame confidence is smaller than the half-word character frame confidence and the complete digital character frame class is contained in the half-word character frame class, resetting the auxiliary character frame to be a u auxiliary character frame or a d auxiliary character frame according to whether the complete digital character class is an upper word or a lower word of the half-word character frame class, and then removing the complete digital character frame;
If the character frame group contains complete digital character frames, half character frames and auxiliary character frames, and the confidence of the complete digital character frames is smaller than that of the half character frames and the complete digital character frame category is not contained in the half character frame category, the half character frames and the auxiliary character frames are removed.
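The cleaning rules above can be sketched as a single function. The dict-based group representation (optional "full", "half" and "aux" entries, each with category "cls" and confidence "c") is an assumption, the confidence 1.0 given to constructed auxiliary frames is arbitrary, and equal confidences are treated like the "less than" case, which the rules leave unspecified.

```python
def clean_group(group):
    """Cleaning rules of step S64 (a sketch).  `group` may hold a complete
    digital frame 'full', a half-word frame 'half' and an auxiliary frame
    'aux'.  Returns the cleaned group, or None when the group is removed."""
    full, half, aux = group.get("full"), group.get("half"), group.get("aux")
    if full and not half:                # full digit alone: drop any aux frame
        return {"full": full}
    if half and not full:                # lone half word: default aux is 'd'
        return {"half": half, "aux": aux or {"cls": "d", "c": 1.0}}
    if aux and not full and not half:    # aux frame alone carries no digit
        return None
    # Both a complete digital frame and a half-word frame are present.
    if full["c"] > half["c"]:            # full digit wins: drop half and aux
        return {"full": full}
    upper, lower = half["cls"][0], half["cls"][1]
    if full["cls"] == upper:             # digit is the upper word: aux 'u'
        return {"half": half, "aux": {"cls": "u", "c": 1.0}}
    if full["cls"] == lower:             # digit is the lower word: aux 'd'
        return {"half": half, "aux": {"cls": "d", "c": 1.0}}
    return {"full": full}                # digit not part of the half word
```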
The flag bit construction rule in step S65 includes:
A flag bit takes one of two values, true or false.
The value of the lower word flag bit of a character frame group being true means that the character frame group contains only one complete digital character frame, or that the category of the auxiliary word character frame contained in the character frame group is d.
The 90 carry flag bit of a character frame group being true means that all character frame groups in front of (to the right of) the character frame group contain a half-word character frame with the category 90; meanwhile, the 90 carry flag bit of the first character frame group is set to true.
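The flag-bit construction of step S65 can be sketched as follows, reading "in front of" as "earlier in the right-to-left ordered list"; groups are assumed to be dicts with optional "full", "half" and "aux" entries as produced by the cleaning of step S64.

```python
def build_flags(groups):
    """Flag-bit construction of step S65.  `groups` is the cleaned list,
    ordered right to left.  Returns one (lower word, 90 carry) flag pair
    per group as two parallel lists."""
    lower_word, carry90 = [], []
    all_90_so_far = True                 # "all groups in front contain a 90 half word"
    for i, g in enumerate(groups):
        lower_word.append("full" in g or g.get("aux", {}).get("cls") == "d")
        carry90.append(True if i == 0 else all_90_so_far)
        all_90_so_far = all_90_so_far and g.get("half", {}).get("cls") == "90"
    return lower_word, carry90
```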
The word selection rules in step S66 specifically include the following rules:
rule 1: if the character frame group only contains one complete digital character frame, the number represented by the character frame group is the complete digital character category of the complete digital character frame.
Rule 2: if the lower word flag bits of all the character frames of the character frame group are true, and the character frame group with the last 90 carry flag bit being true is the first character frame group, and the category of the first character frame group is 90, the number represented by the character frame group containing the half-word character frame is the lower word of the half-word character frame.
Rule 3: if the lower word flag bits of all the character frames of the character frame group are true, the character frame group with the last 90 carry flag bit being true is not the first character frame group, and the character frame group with the last 90 carry flag bit being true only contains one complete number character frame, the number represented by the character frame group containing the half-word character frame is the lower word of the half-word character frame.
Rule 4: if the 90 carry flag bit of the character frame group containing the half character frame is true, the number represented by the character frame group containing the half character frame is the upper word of the half character frame.
Rule 5: if the auxiliary character frame category of the character frame group containing the half character frame is not d, the number represented by the character frame group containing the half character frame is the upper word of the half character frame.
Rule 6: the number represented by the character frame group containing the half character frame is the lower word of the half character frame.
For each character frame group, rules 1 through 6 are checked in order, and exactly one rule is applied.
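Rules 1 to 6 can be sketched as follows. The machine-translated conditions of rules 2 and 3 are ambiguous; this sketch reads "the lower word flag bits ... are true" as referring to the group's own lower word flag, and "the last group whose 90 carry flag is true" as the largest such index. This is one interpretation for illustration, not the patent's authoritative logic.

```python
def pick_digit(i, groups, lower_word, carry90):
    """Word-selection rules 1-6 of step S66 for group i (a sketch).
    `groups` is the cleaned right-to-left list; `lower_word` and
    `carry90` are the per-group flag lists from step S65."""
    g = groups[i]
    if "full" in g:                                          # rule 1
        return g["full"]["cls"]
    upper, lower = g["half"]["cls"][0], g["half"]["cls"][1]
    j = max(k for k in range(len(groups)) if carry90[k])     # last carry-true group
    if lower_word[i] and j == 0 and groups[0].get("half", {}).get("cls") == "90":
        return lower                                         # rule 2
    if lower_word[i] and j > 0 and "full" in groups[j]:
        return lower                                         # rule 3
    if carry90[i]:
        return upper                                         # rule 4
    if g.get("aux", {}).get("cls") != "d":
        return upper                                         # rule 5
    return lower                                             # rule 6
```

Under this reading, a display of (90, 90, 2, 5) from right to left with u-type auxiliary frames yields 5299, while (90, 90, 3, 5) with d-type auxiliary frames yields 5300.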
In this embodiment, the horizontal correction of the reading area and the half-word selection are analyzed and studied: the four vertexes of the reading area are identified with a frame point identification model built on a modified yolov5 network, which in turn enables horizontal correction of the reading area; the complete digital character frames, half-word character frames and auxiliary word character frames are identified with a reading identification model built on a modified yolov5 network; and rules such as the carry rule and the word selection rules are summarized to determine and construct the final identification result.
A specific example is shown in fig. 6.
As shown in fig. 6 (a), the water meter picture received from the client has the following characteristics: the reading area is inclined by about 90 degrees, 4 digits need to be identified, the horizontal-axis length is 240, and the vertical-axis length is 320;
the water meter picture is subjected to picture preprocessing in the step S2 to obtain a standardized water meter input picture, the visualization of which is shown in the figure 6 (b), and gray is filled at two sides of the water meter picture in the transverse axis direction to expand the length of the water meter picture to 320;
after the standardized water meter input picture is processed by the frame point identification model in step S3, a plurality of frame point prediction vectors P are obtained, and step S4 is then performed;
the step S4 specifically includes the following steps:
step S41: setting the frame point consistency threshold T_iou to 0.8 and the object confidence threshold T_conf to 0.6;
step S42: filtering out, from all prediction vectors P output by the frame point recognition model, those whose object confidence c is less than the object confidence threshold T_conf;
step S43: calculating, for all prediction vectors P remaining after step S42, the consistency IOU between the horizontal target frame (x, y, w, h) and the horizontal envelope frame of the four vertexes, and filtering out the prediction vectors P whose consistency IOU is less than the consistency threshold T_iou;
step S44: on the basis of step S43, since one water meter picture has only one reading area, keeping only the prediction vector P with the maximum object confidence c; the visualization of this prediction vector is shown in fig. 6 (c), with its four vertexes as given there;
step S45: calculating the inclination angle θ of the reading area from the four vertexes of the prediction vector P retained in step S44;
step S46: calculating, from the inclination angle θ of step S45, the affine matrix M for picture rotation, horizontally correcting the picture, and calculating the four vertexes of the reading area in the horizontally corrected picture;
step S47: calculating the horizontal envelope frame of the four vertexes, with (x_min, y_min) the upper-left point and (x_max, y_max) the lower-right point, and then extracting this region from the horizontally corrected picture, shown in fig. 6 (d), as the reading area input picture shown in fig. 6 (e);
after the reading area input picture is identified by the reading identification model in the step S5, a plurality of complete digital character frames, half-character frames and auxiliary character frames are obtained, and then the step S6 is carried out;
in this embodiment, the step S6 specifically includes the following steps:
step S61: first, preliminarily processing the prediction results of the reading identification model (complete digital character frames, half-word character frames and auxiliary word character frames) with the non-maximum suppression method, removing redundant frames and low-confidence frames;
Step S62: on the basis of the step S61, all character frames are arranged from large to small according to the abscissa of the central point, namely from right to left, so as to form an ordered character frame list;
step S63: based on the step S62, all the character frames are grouped according to the distance on the horizontal axis, and the grouping rule is that if the distance on the horizontal axis of two adjacent character frames in the ordered character frame list is smaller than 16, the two character frames are grouped into the same group, and the horizontal abscissa of the former group of character frames is larger than the latter group of character frames, namely, the character frame groups are orderly arranged; each group of character frames only contains at most one complete digital character frame, half-word character frame and auxiliary word character frame;
step S64: based on step S63, each character frame group is cleaned according to the cleaning rules, so that each character frame group contains either only one complete digital character frame or one half-word character frame and one auxiliary word character frame; in this embodiment, the visualization of the cleaning result is shown in the upper picture of fig. 6 (f); 4 groups of character frames are obtained in total, ordered from right to left; the superscript of each frame is the category of the digital character frame, and the subscript is the category of the auxiliary word character frame;
Step S65: on the basis of the step S64, traversing each character frame group from the beginning, and distributing two flag bits to each character frame group according to a flag bit construction rule, wherein one flag bit is called a lower word flag bit, and the other flag bit is called a 90 carry flag bit; in this embodiment, the sequence of the character frame groups is from right to left, the corresponding lower word flag bits are (false, true) in turn, and the corresponding 90 carry flag bits are (true, false) in turn;
step S66: on the basis of step S65, the digit represented by each character frame group is determined in order according to the word selection rules, giving the ordered digits (9, 9, 2, 5); these are then arranged in reverse order to obtain the final recognition result 5299 of the water meter picture, as shown in the lower part of fig. 6 (f), after which step S7 is entered;
the step S7 specifically includes: and returning the identification result 5299 to the client.
The example shown in fig. 6 verifies that the key points of the reading area can be effectively identified and that the picture can be effectively corrected according to the identified key points, so that a picture taken by the client at any horizontal rotation angle can be identified by the method; it also verifies that the present invention can effectively perform complete digit recognition, half-word recognition and auxiliary word recognition, and that the proposed rules can be effectively combined to construct the correct reading.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (6)

1. A word wheel type water meter digital identification and processing method based on deep learning is characterized in that: the method comprises the following steps:
step S1: receiving a water meter picture sent by a client;
step S2: preprocessing a water meter picture to obtain a standardized water meter input picture;
step S3: processing the water meter input picture by adopting a frame point identification model to obtain a frame point identification result;
step S4: according to the principle of consistency of frame points and the principle that one water meter picture has only one reading area, processing a frame point identification result, and extracting a reading area input picture with a digital character in a horizontal forward direction;
step S5: processing the input picture of the reading area by adopting a reading identification model, and identifying to obtain a digital character frame and an auxiliary character frame;
Step S6: according to the non-maximum value suppression method, the auxiliary character frame category and the carry rule, processing the digital character frame and the auxiliary character frame which are obtained through recognition, and obtaining a final reading recognition result;
step S7: returning the reading identification result to the client;
in the step S3, the frame point identification model takes the yolov5 network as a main body and modifies its head network so that, in addition to the horizontal target frame of the reading area, the head network detects the four vertexes of the reading area; specifically, the channel number of the yolov5 output feature map is 13 = (4 + 1 + 8), wherein "13" represents the dimension of the prediction vector P; "4" represents the horizontal target frame (x, y, w, h) of the reading area, (x, y) representing the center point coordinates of the horizontal target frame and (w, h) its horizontal-direction and vertical-direction lengths; "1" represents the object confidence c of the horizontal target frame in the prediction vector P; "8" represents the four vertexes (x1, y1), (x2, y2), (x3, y3), (x4, y4) of the reading area, wherein, in the reading area horizontal state, (x1, y1) represents the upper-left point coordinates, (x2, y2) the upper-right point coordinates, (x3, y3) the lower-left point coordinates, and (x4, y4) the lower-right point coordinates;
the principle of frame point consistency in the step S4 means that the horizontal target frame of the reading area and the horizontal envelope frame of its four vertexes are the same box, the consistency being evaluated with the IOU; the step S4 specifically comprises the following steps:
step S41: setting a frame point consistency threshold T_iou and an object confidence threshold T_conf;
step S42: filtering out, from all prediction vectors P output by the frame point recognition model, those whose object confidence c is less than the object confidence threshold T_conf;
step S43: calculating, for all prediction vectors P remaining after step S42, the consistency IOU between the horizontal target frame (x, y, w, h) and the horizontal envelope frame of the four vertexes, and filtering out the prediction vectors P whose consistency IOU is less than the consistency threshold T_iou;
step S44: on the basis of step S43, since one water meter picture has only one reading area, keeping only the prediction vector P with the maximum object confidence c;
step S45: calculating the inclination angle θ of the reading area from the four vertexes of the prediction vector P retained in step S44;
step S46: calculating, from the inclination angle θ of step S45, the affine matrix M for picture rotation about the picture center (160, 160), 160 being half of the picture side length 320, horizontally correcting the picture, and calculating the four vertexes of the reading area in the horizontally corrected picture;
step S47: calculating the horizontal envelope frame of the four vertexes, with (x_min, y_min) the upper-left point and (x_max, y_max) the lower-right point of the envelope frame, and then extracting this region from the horizontally corrected picture as the reading area input picture;
the digital character frames in the step S5 are divided into two main types, namely complete digital character frames and half-word character frames; the complete digital character frames comprise 10 categories, namely 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9; the half-word character frames comprise 10 categories, namely 01, 12, 23, 34, 45, 56, 67, 78, 89 and 90; the former digit of a half-word character frame category name is called the upper word of the half-word character frame, and the latter digit is called the lower word of the half-word character frame;
the auxiliary word character frames are divided into 3 categories, namely u, d and h, wherein u indicates that the upper digital character of the half word occupies the larger proportion, d indicates that the lower digital character of the half word occupies the larger proportion, and h indicates that the upper and lower digital characters of the half word occupy the same proportion; each half-word character frame matches one auxiliary word character frame;
the step S6 specifically includes the following steps:
step S61: firstly, preliminarily processing a prediction result of a reading identification model by adopting a non-maximum suppression method, and removing a redundant frame and a low confidence frame;
step S62: on the basis of the step S61, all character frames are arranged from large to small according to the abscissa of the central point, namely from right to left, so as to form an ordered character frame list;
Step S63: based on the step S62, all the character frames are grouped according to the distance on the horizontal axis, and the grouping rule is that if the distance on the horizontal axis of two adjacent character frames in the ordered character frame list is smaller than 16, the two character frames are grouped into the same group, and the horizontal abscissa of the former group of character frames is larger than the latter group of character frames, namely, the character frame groups are orderly arranged; each group of character frames only contains at most one complete digital character frame, half-word character frame and auxiliary word character frame;
step S64: based on step S63, each character frame group is cleaned according to the cleaning rule, so that each character frame group contains either only one complete digital character frame or one half-word character frame and one auxiliary word character frame;
step S65: on the basis of the step S64, traversing each character frame group from the beginning, and distributing two flag bits to each character frame group according to a flag bit construction rule, wherein one flag bit is called a lower word flag bit, and the other flag bit is called a 90 carry flag bit;
step S66: on the basis of step S65, the representative digits of each character frame group are orderly determined according to the rule of the selected word, so as to obtain a group of ordered digits, and then the group of ordered digits are arranged in reverse order, so that the final recognition result can be obtained.
2. The character wheel type water meter digital identification and processing method based on deep learning as set forth in claim 1, wherein: in the step S5, the reading identification model takes the yolov5 network as a main body, sharing one backbone network and branching into 3 neck networks and 3 head networks, which are respectively used for detecting complete digital character frames, half-word character frames and auxiliary word character frames;
each character frame corresponds to a six-dimensional vector (x, y, w, h, c, cls), wherein x represents the center point abscissa of the character frame, y the center point ordinate, w the horizontal-axis length, h the vertical-axis length, c the confidence of the character frame, and cls the category of the character frame.
3. The character wheel type water meter digital identification and processing method based on deep learning as set forth in claim 1, wherein: the carry rule in the step S6 specifically comprises the following rules:
the reading is read from right to left, and carry propagates from right to left;
when the water meter character wheel rotates forward, the character to the right of a half word is a half word;
when the water meter character wheel rotates forward, the rightmost bit of the water meter reading is regarded as always being in a carry state.
4. The character wheel type water meter digital identification and processing method based on deep learning as set forth in claim 1, wherein: the cleaning rules in step S64 specifically include the following rules:
If the character frame group only contains complete digital character frames, no processing is performed;
if the character frame group only contains half character frames, constructing a d auxiliary character frame, and adding the constructed auxiliary character frame into the character frame group;
if the character frame group only contains the auxiliary character frame, removing the group only containing the auxiliary character frame;
if the character frame group only contains complete digital character frames and auxiliary character frames, removing the auxiliary character frames in the character frame group;
if the character frame group only contains a complete digital character frame and a half-word character frame, and the confidence coefficient of the complete digital character frame in the character frame group is larger than that of the half-word character frame, removing the half-word character frame in the character frame group;
if the character frame group only contains a complete digital character frame and a half-word character frame, and the confidence coefficient of the complete digital character frame is smaller than that of the half-word character frame and the complete digital character frame class is contained in the half-word character frame class, constructing a u auxiliary character frame or a d auxiliary character frame according to whether the complete digital character class is an upper word or a lower word of the half-word character frame class, adding the constructed auxiliary character frame into the character frame group, and then removing the complete digital character frame;
If the character frame group only contains a complete digital character frame and a half-word character frame, and the confidence of the complete digital character frame is smaller than that of the half-word character frame and the complete digital character frame class is not contained in the half-word character frame class, removing the half-word character frame;
if the character frame group contains a complete digital character frame, a half-word character frame and an auxiliary word character frame, and the confidence coefficient of the complete digital character frame is larger than that of the half-word character frame, removing the half-word character frame and the auxiliary word character frame;
if the character frame group contains a complete digital character frame, a half-word character frame and an auxiliary character frame, and the complete digital character frame confidence is smaller than the half-word character frame confidence and the complete digital character frame class is contained in the half-word character frame class, resetting the auxiliary character frame to be a u auxiliary character frame or a d auxiliary character frame according to whether the complete digital character class is an upper word or a lower word of the half-word character frame class, and then removing the complete digital character frame;
if the character frame group contains complete digital character frames, half character frames and auxiliary character frames, and the confidence of the complete digital character frames is smaller than that of the half character frames and the complete digital character frame category is not contained in the half character frame category, the half character frames and the auxiliary character frames are removed.
5. The character wheel type water meter digital identification and processing method based on deep learning as set forth in claim 1, wherein: the flag bit construction rule in step S65 includes:
each flag bit takes exactly one of two values, true or false;
the lower word flag bit of a character frame group is true when the group contains only one complete digital character frame, or when the category of the auxiliary character frame it contains is d;
the 90 carry flag bit of a character frame group is true when every character frame group preceding it contains a half-word character frame of category 90; in addition, the 90 carry flag bit of the first character frame group is always set to true.
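One possible computation of the two flag bits is sketched below. The group layout and field names are the same illustrative assumptions as before, not the patented data format.

```python
def lower_word_flag(group: dict) -> bool:
    """True when the group holds only one complete digit frame, or its
    auxiliary frame has category "d" (sketch of the rule above)."""
    only_full = set(group) == {"full"}
    aux_is_d = group.get("aux", {}).get("cls") == "d"
    return only_full or aux_is_d

def carry90_flags(groups: list) -> list:
    """The 90 carry flag of group i is true when every preceding group
    contains a half-word frame of category "90"; the first group's flag
    is always true."""
    flags = []
    all_prev_90 = True
    for i, g in enumerate(groups):
        flags.append(True if i == 0 else all_prev_90)
        half = g.get("half")
        all_prev_90 = all_prev_90 and half is not None and half["cls"] == "90"
    return flags
```

The 90 carry flag mirrors ordinary decimal carry on a character wheel: a digit position can only be mid-rollover if every lower-order wheel is between 9 and 0.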
6. The character wheel type water meter digital identification and processing method based on deep learning as set forth in claim 1, wherein: the word selection rules in step S66 specifically include the following rules:
rule 1: if the character frame group contains only one complete digital character frame, the number represented by the group is the complete digital character category of that frame;
rule 2: if the lower word flag bits of all the character frame groups are true, the last character frame group whose 90 carry flag bit is true is the first character frame group, and the category of the first character frame group is 90, then the number represented by the character frame group containing the half-word character frame is the lower word of the half-word character frame;
rule 3: if the lower word flag bits of all the character frame groups are true, the last character frame group whose 90 carry flag bit is true is not the first character frame group, and that group contains only one complete digital character frame, then the number represented by the character frame group containing the half-word character frame is the lower word of the half-word character frame;
rule 4: if the 90 carry flag bit of the character frame group containing the half-word character frame is true, the number represented by that group is the upper word of the half-word character frame;
rule 5: if the category of the auxiliary character frame in the character frame group containing the half-word character frame is not d, the number represented by that group is the upper word of the half-word character frame;
rule 6: otherwise, the number represented by the character frame group containing the half-word character frame is the lower word of the half-word character frame;
rules 1 to 6 are evaluated in order, and exactly one rule applies to each character frame group being processed.
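One reading of rules 1 to 6, applied first-match-wins, can be sketched as follows. The group/flag layout, the `upper_word`/`lower_word` helpers (first and second digit of a half-word class such as "45"), and the handling of a missing auxiliary frame (fall through to rule 6) are assumptions for illustration, not the patented implementation.

```python
def upper_word(half_cls: str) -> str:
    return half_cls[0]   # e.g. "4" for half-word class "45"

def lower_word(half_cls: str) -> str:
    return half_cls[1]   # e.g. "5" for half-word class "45"

def select_digit(i, groups, lower_flags, carry90_flags) -> str:
    """Decide the digit read from group i; the first matching rule wins."""
    g = groups[i]
    # Rule 1: a lone complete digit frame decides the reading directly.
    if set(g) == {"full"}:
        return g["full"]["cls"]
    half_cls = g["half"]["cls"]
    if all(lower_flags):
        last = max(j for j, f in enumerate(carry90_flags) if f)
        # Rule 2: last 90-carry group is the first group, of category 90.
        if last == 0 and groups[0].get("half", {}).get("cls") == "90":
            return lower_word(half_cls)
        # Rule 3: last 90-carry group is not the first and holds only a
        # complete digit frame.
        if last != 0 and set(groups[last]) == {"full"}:
            return lower_word(half_cls)
    # Rule 4: this group is itself mid-carry, so the upper word applies.
    if carry90_flags[i]:
        return upper_word(half_cls)
    # Rule 5: an auxiliary frame with category other than "d".
    aux = g.get("aux")
    if aux is not None and aux["cls"] != "d":
        return upper_word(half_cls)
    # Rule 6: default to the lower word.
    return lower_word(half_cls)
```

Evaluating the rules strictly in this order makes them mutually exclusive, matching the claim that exactly one rule applies per group.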
CN202311385464.4A 2023-10-25 2023-10-25 Character wheel type water meter digital identification and processing method based on deep learning Active CN117115802B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311385464.4A CN117115802B (en) 2023-10-25 2023-10-25 Character wheel type water meter digital identification and processing method based on deep learning


Publications (2)

Publication Number Publication Date
CN117115802A CN117115802A (en) 2023-11-24
CN117115802B true CN117115802B (en) 2024-03-26

Family

ID=88807800

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311385464.4A Active CN117115802B (en) 2023-10-25 2023-10-25 Character wheel type water meter digital identification and processing method based on deep learning

Country Status (1)

Country Link
CN (1) CN117115802B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109635806A (en) * 2018-12-12 2019-04-16 国网重庆市电力公司信息通信分公司 Ammeter technique for partitioning based on residual error network
CN112200160A (en) * 2020-12-02 2021-01-08 成都信息工程大学 Deep learning-based direct-reading water meter reading identification method
CN115984862A (en) * 2023-01-06 2023-04-18 江苏科技大学 Deep learning-based remote water meter digital identification method
CN116343228A (en) * 2023-03-27 2023-06-27 上海第二工业大学 Intelligent reading method and system for water meter
CN116844172A (en) * 2023-07-08 2023-10-03 武汉轻工大学 Digital old water meter identification method based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11663837B2 (en) * 2020-12-31 2023-05-30 Itron, Inc. Meter text detection and recognition




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant