CN111008628B - Illumination-robust pointer instrument automatic reading method and device - Google Patents

Illumination-robust pointer instrument automatic reading method and device

Info

Publication number
CN111008628B
CN111008628B
Authority
CN
China
Prior art keywords
pointer
instrument
image
candidate
scale mark
Prior art date
Legal status
Active
Application number
CN201911239888.3A
Other languages
Chinese (zh)
Other versions
CN111008628A (en)
Inventor
王磊
罗晟
章洁
刘熙尧
蔡汪洋
Current Assignee
Central South University
Original Assignee
Central South University
Priority date
Filing date
Publication date
Application filed by Central South University filed Critical Central South University
Priority to CN201911239888.3A priority Critical patent/CN111008628B/en
Publication of CN111008628A publication Critical patent/CN111008628A/en
Application granted granted Critical
Publication of CN111008628B publication Critical patent/CN111008628B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/02 Recognising information on displays, dials, clocks

Abstract

The invention discloses an illumination-robust automatic reading method and device for pointer instruments, wherein the method comprises the following steps: step A: preprocessing an image; step B: detecting the pointer outline of the instrument and determining the direction of each pointer in the image; step C: detecting the direction of the 0 scale mark; step D: completing the final reading of the meter. Even when the illumination and the instrument pose are uncontrollable, the invention can still achieve high-precision reading of the instrument image, and can be applied to the automatic reading of various instruments.

Description

Illumination-robust pointer instrument automatic reading method and device
Technical Field
The invention belongs to the field of computer vision and image processing, and particularly relates to an illumination-robust pointer instrument automatic reading method and device.
Background
Pointer-type meters (such as pointer-type water meters) are widely used across industries owing to their simple structure, ease of use, low cost, and resistance to electromagnetic interference. Traditionally, meters are read manually, which is not only inefficient but also error-prone, since readers' visual fatigue affects reading accuracy. With the development of the information age, automatic meter reading has become a research field in its own right, aiming to solve the problems brought by manual reading. Automatic reading of pointer instruments is of great significance for improving the productivity of industrial production and civil facilities and for reducing labor costs. Examples include reading calibration of instruments during production, to ensure that instrument precision meets industry standards, and mobile measurement and monitoring of instrument equipment by patrol robots.
Currently, most computer-vision-based automatic reading methods for pointer instruments complete the reading by measuring the relative angle between the pointer direction and the direction of the 0 scale mark. In the pointer-direction detection stage, existing research mostly applies the Hough transform, the least-squares method, or template matching to detect linear pointers with a simple structure. For meters with a more complex structure and pointer shape that also contain multiple sub-dials (such as most pointer water meters), many works instead use mathematical morphology and pointer structure information to extract the pointer profile and determine the pointer direction. However, the pointer detection results of these algorithms are affected by illumination conditions, and in complex situations (such as uneven lighting, a poor camera angle, or poor imaging quality) it is often difficult to detect the pointer profile accurately.
In the stage of identifying the 0 scale mark direction on the gauge, existing research can be divided into two strategies. The first strategy estimates the 0 scale mark direction by aligning the meter image with a template image: since the angle between the horizontal line and the zero scale mark is constant once the images are aligned, the camera is positioned so that the meter image matches the template. However, the meter image and the template image cannot always be easily aligned, for example when the camera has an out-of-plane rotation, so many studies propose methods that directly detect the 0 scale mark, which constitutes the second strategy. Among these, Ma et al. use a modified least-squares method to detect the 0 scale mark after removing interfering image content, and Yi et al. use K-means clustering to identify the 0 scale mark. Such methods are, however, affected by image quality: in blurred images with poor imaging quality, the zero scale mark is difficult to detect directly. Furthermore, some methods extract the 0 scale mark direction indirectly by exploiting the inherent relationship between pointer rotation centers, which applies to meters with multiple pointers; they obtain the meter center from the structural feature that the rotation centers of the pointers of the sub-dials lie on the same circle.
These methods then detect the 0 scale mark from the angular relation between the 0 scale mark and the line connecting the meter center and the pointer rotation center. Their reading results therefore depend heavily on the detection of the pointer rotation centers: the detection of the 0 scale mark direction relies on the inherent relation between the pointers, so the pointer detection results in turn affect the 0 scale mark detection, and the approach lacks practicality and extensibility.
From the above, it can be seen that existing automatic meter reading methods are limited in practical application, mainly due to two disadvantages: 1) under the influence of illumination conditions and instrument pose, when interference information is present on the meter, the pointer and the 0 scale mark cannot be detected accurately; 2) existing meter reading methods rely on the specific structure of the meter, and no general method applicable to reading various meters currently exists.
Under the background, it is important to study a method which can well resist illumination conditions and accurately complete automatic reading of the instrument without depending on the specific structure of the instrument.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an illumination-robust pointer instrument automatic reading method that can accurately complete automatic meter reading.
The technical scheme adopted by the invention is as follows:
an illumination-robust automatic reading method for a pointer instrument, comprising the following steps:
step A, preprocessing an instrument image;
b, detecting the outline of a pointer in the instrument image preprocessed in the step A, and determining the direction of the pointer;
step C, detecting the 0 scale mark direction of the instrument image preprocessed in the step A;
and D, finishing reading.
The execution order of steps B and C is not limited: step B may be executed first and then step C, step C may be executed first and then step B, or steps B and C may be executed simultaneously. The division into steps B and C is for convenience of description only and does not limit the execution order of the steps in the technical solution of the invention.
Further, the specific processing procedure of the step A is as follows:
step A1, extracting the ROI (region of interest) of the instrument image using the Hough circle detection algorithm;
step A2, first cropping away all areas outside the minimum bounding rectangle of the ROI on the instrument image; then, using the properties of a circle, judging whether each point q(i, j) inside the minimum bounding rectangle of the ROI lies within the circular ROI, and if q(i, j) is outside the ROI, setting the pixel value p(i, j) of that point to 255, thereby obtaining the ROI image;
step A3, enhancing the pointer information in the ROI based on a color-difference transformation; specifically: first graying the ROI image to obtain a grayscale map, then performing the R-Y operation to obtain an R_Y grayscale map; the pixel value Y of each pixel point in the grayscale map is:
Y=0.299*R+0.587*G+0.114*B
the pixel value R-Y of each pixel point in the R_Y gray scale map is as follows:
R-Y=0.701*R-0.587*G-0.114*B
where R, G and B are three components of the pixel values of the corresponding pixel points in the ROI image.
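The preprocessing of steps A2 and A3 can be sketched as follows (a minimal NumPy sketch under our own assumptions: step A1's Hough circle detection, e.g. OpenCV's HoughCircles, is taken as already having supplied the ROI circle center (a, b) and radius r, and the function name and RGB channel order are ours):

```python
import numpy as np

def preprocess_roi(img, a, b, r):
    """Steps A2-A3 (sketch): blank out pixels outside the circular ROI
    (center (a, b), radius r) to 255, then build the Y grayscale map and
    the R-Y color-difference map that enhances the (red) pointer."""
    img = img.astype(np.float64)
    h, w = img.shape[:2]
    # (i, j) = (row, column) indices for every pixel
    jj, ii = np.meshgrid(np.arange(w), np.arange(h))
    outside = (ii - a) ** 2 + (jj - b) ** 2 > r ** 2
    roi = img.copy()
    roi[outside] = 255.0                             # p(i, j) = 255 outside the ROI
    R, G, B = roi[..., 0], roi[..., 1], roi[..., 2]  # assumes RGB channel order
    Y = 0.299 * R + 0.587 * G + 0.114 * B            # grayscale map
    R_Y = 0.701 * R - 0.587 * G - 0.114 * B          # color-difference map
    return Y, R_Y
```

A pure-red pixel scores 0.701*255 on the R-Y map while a gray pixel scores about 0, which is why this transform separates a red pointer from the dial background.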
Further, the specific processing procedure of the step B is as follows:
step B1, detecting all maximally stable extremal regions (MSERs) on the R_Y grayscale map using the MSER algorithm, and taking each MSER as a pointer candidate contour;
step B2, matching each pointer candidate contour to its corresponding pointer on the instrument based on an improved NMS algorithm (the ANMS algorithm), and determining the final contour of each pointer on the instrument;
step B3, based on the final contour of each pointer on the instrument, determining the starting point s(x_s, y_s) and end point t(x_t, y_t) of each pointer, and fitting the straight line on which each pointer lies to obtain the direction of each pointer.
Further, the step B2 specifically includes the following steps:
step B21, calculating the minimum bounding rectangle of every pointer candidate contour;
step B22, calculating the region overlap IOU between the minimum bounding rectangles of any two pointer candidate contours:

IOU = S / (S_1 + S_2 - S)

where S_1 and S_2 are the areas of the minimum bounding rectangles of the two pointer candidate contours, and S is the area of their overlapping region, computed as:

S = (x_right_min - x_left_max) * (y_right_min - y_left_max)

where x_right_min and y_right_min take the smaller of the two rectangles' lower-right corner coordinate values, and x_left_max and y_left_max take the larger of their upper-left corner coordinate values (S is taken as 0 when either factor is negative);
step B23, setting a region overlap threshold IOU_th; comparing the IOU of any two pointer candidate contours against IOU_th, and if the IOU is larger than IOU_th, considering the two contours to correspond to the same pointer on the meter;
step B24, calculating the mean of the areas of all pointer candidate contours corresponding to the same pointer on the instrument (the area of a pointer candidate contour can be taken as the number of pixels in the region it encloses), and selecting the pointer candidate contour whose area is closest to the mean as the final contour of the corresponding pointer on the instrument.
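Steps B21 to B24 can be sketched as follows (a hedged pure-Python/NumPy sketch: the candidate contours are assumed given as point lists, e.g. from an MSER detector, and the helper names and the greedy grouping against each group's first member are our own simplification of the ANMS idea):

```python
import numpy as np

def bounding_rect(contour):
    """Step B21: minimum axis-aligned bounding rectangle of a contour,
    returned as (x_left, y_top, x_right, y_bottom)."""
    pts = np.asarray(contour, dtype=float)
    return pts[:, 0].min(), pts[:, 1].min(), pts[:, 0].max(), pts[:, 1].max()

def rect_iou(r1, r2):
    """Step B22: overlap S over the union S1 + S2 - S of two rectangles."""
    x_left_max = max(r1[0], r2[0]); y_left_max = max(r1[1], r2[1])
    x_right_min = min(r1[2], r2[2]); y_right_min = min(r1[3], r2[3])
    s = max(0.0, x_right_min - x_left_max) * max(0.0, y_right_min - y_left_max)
    s1 = (r1[2] - r1[0]) * (r1[3] - r1[1])
    s2 = (r2[2] - r2[0]) * (r2[3] - r2[1])
    return s / (s1 + s2 - s) if s > 0 else 0.0

def select_final_contours(contours, iou_th=0.1):
    """Steps B23-B24: group candidates whose rectangle IOU exceeds iou_th,
    then keep, per group, the contour whose area (here: point count) is
    closest to the group's mean area."""
    rects = [bounding_rect(c) for c in contours]
    groups = []
    for i, r in enumerate(rects):
        for g in groups:                       # greedy: compare to group seed
            if rect_iou(rects[g[0]], r) > iou_th:
                g.append(i)
                break
        else:
            groups.append([i])
    finals = []
    for g in groups:
        areas = [len(contours[i]) for i in g]
        mean = sum(areas) / len(areas)
        finals.append(g[min(range(len(g)), key=lambda k: abs(areas[k] - mean))])
    return [contours[i] for i in finals]
```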
Further, the step B3 specifically includes the following steps:
step B31, taking the center of gravity s(x_s, y_s) of the region enclosed by the final contour of the pointer on the ROI image as the starting point of the pointer; the gray-scale centroid method computes s(x_s, y_s) as:

x_s = Σ_(u,v)∈Ω u*f(u,v) / Σ_(u,v)∈Ω f(u,v)

y_s = Σ_(u,v)∈Ω v*f(u,v) / Σ_(u,v)∈Ω f(u,v)

where f(u, v) is the pixel value of the pixel point at coordinates (u, v) in the pointer region Ω on the ROI image;
step B32, determining the end point of the pointer based on the minimum-angle method;
let the final contour of a pointer consist of n points; for every three adjacent points among the n points, calculate the angle value at the middle point, thereby obtaining a series of angle values; comparing all the calculated angle values, the middle point corresponding to the minimum angle value is the end point t(x_t, y_t);
let the three adjacent points be A, B, C with B the middle point; the magnitude of angle B in △ABC, i.e. the angle value at the middle point B, is calculated by the law of cosines:

∠B = arccos( (|AB|^2 + |BC|^2 - |AC|^2) / (2*|AB|*|BC|) )
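Steps B31 and B32 can be sketched as follows (a minimal NumPy sketch; the pointer region is assumed given as a boolean mask, the contour as an ordered list of distinct points, and the function names are ours):

```python
import numpy as np

def pointer_start(gray, region_mask):
    """Step B31: gray-scale centroid of the pointer region Omega."""
    ii, jj = np.nonzero(region_mask)      # pixel coordinates (u, v) of Omega
    f = gray[ii, jj].astype(float)
    return (ii * f).sum() / f.sum(), (jj * f).sum() / f.sum()

def pointer_end(contour):
    """Step B32: for each point B with neighbours A and C on the closed
    contour, compute the angle at B by the law of cosines; the point with
    the smallest angle is taken as the pointer tip."""
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    best, best_angle = None, np.inf
    for k in range(n):
        A, B, C = pts[k - 1], pts[k], pts[(k + 1) % n]
        ab = np.linalg.norm(A - B)
        bc = np.linalg.norm(C - B)
        ac = np.linalg.norm(A - C)
        cos_b = (ab ** 2 + bc ** 2 - ac ** 2) / (2 * ab * bc)
        angle = np.arccos(np.clip(cos_b, -1.0, 1.0))
        if angle < best_angle:
            best, best_angle = tuple(B), angle
    return best
```

On a thin triangular contour the sharp vertex wins, which is exactly why the minimum-angle criterion locates the tip of a tapered pointer.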
further, the specific processing procedure of the step C is as follows:
step C1, selecting a frontal image of the instrument as the template image, and manually marking the direction of the 0 scale mark on the template image;
step C2, detecting feature points on the template image and the instrument image respectively, using the speeded-up robust features (SURF) technique;
step C3, matching the feature points on the template image and the instrument image, and rejecting mismatched points with the RANSAC algorithm to further ensure matching accuracy;
step C4, acquiring the 0 scale mark direction on the instrument image based on the 0 scale mark direction on the template image and the matched feature points between the template image and the instrument image.
Further, the step C4 specifically includes the following steps:
step C41, letting there be k feature points on the template image matched with the instrument image;
step C42, randomly selecting two feature points p(x_p, y_p) and w(x_w, y_w), with x_p < x_w, from the k matched feature points on the template image, and constructing the vector pw;
step C43, calculating the included angle θ between the vector pw on the template image and the 0 scale mark:

θ = arccos( (pw · NM) / (|pw| * |NM|) )

where N(x_N, y_N) and M(x_M, y_M) are respectively the starting point and end point of the 0 scale mark on the template image;
step C44, letting p'(x_p', y_p') and w'(x_w', y_w') be the feature points on the instrument image matched with p(x_p, y_p) and w(x_w, y_w) on the template image;
finding the straight line p'f' that makes the included angle θ with p'w' in the instrument image, and taking the direction of the straight line p'f' as a candidate 0 scale mark direction; letting the coordinates of f' be (x_f', y_f'), they are obtained by rotating w' about p' through the angle θ:

x_f' = x_p' + (x_w' - x_p')*cos θ - (y_w' - y_p')*sin θ
y_f' = y_p' + (x_w' - x_p')*sin θ + (y_w' - y_p')*cos θ
step C45, repeating the operations of steps C42 to C44 T times, ensuring that the pair of feature points randomly selected from the k matched feature points on the template image differs each time; through these operations, T candidate 0 scale mark directions are found in the instrument image;
c46, determining a final 0 scale mark direction from the T candidate 0 scale mark directions by adopting a voting strategy;
in the voting strategy, an angle threshold γ is set to evaluate the precision of each candidate 0 scale mark direction; the score of each candidate 0 scale mark direction equals the number of other candidate directions whose included angle with it is smaller than the angle threshold γ; the candidate 0 scale mark direction with the highest score is selected as the 0 scale mark direction in the instrument image.
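Steps C44 to C46 can be sketched as follows (a hedged sketch assuming the SURF matching of steps C41 to C43 has already produced matched point pairs; the rotation form of the C44 formula and the function names are our reading of the text):

```python
import math

def candidate_zero_direction(p, w, theta):
    """Step C44: direction at angle theta from the matched segment p'w',
    obtained by rotating the vector p'w' about p' through theta."""
    dx, dy = w[0] - p[0], w[1] - p[1]
    return (dx * math.cos(theta) - dy * math.sin(theta),
            dx * math.sin(theta) + dy * math.cos(theta))

def vote_zero_direction(candidates, gamma_deg=1.0):
    """Step C46: each candidate direction scores the number of other
    candidates within gamma degrees of it; the highest score wins."""
    def angle_between(u, v):
        dot = u[0] * v[0] + u[1] * v[1]
        cosang = dot / (math.hypot(*u) * math.hypot(*v))
        return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
    scores = [sum(1 for j, v in enumerate(candidates)
                  if j != i and angle_between(u, v) < gamma_deg)
              for i, u in enumerate(candidates)]
    return candidates[scores.index(max(scores))]
```

The voting step is what makes the estimate robust: a few pairs corrupted by residual mismatches produce stray directions, but the consistent majority cluster still wins.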
Further, the specific processing procedure of the step D is as follows:
step D1, calculating the included angle δ between the pointer direction and the 0 scale mark direction, with the calculation formula:

δ = arccos( (v1 · v2) / (|v1| * |v2|) )

where v1 is the direction vector of the pointer, and v2 is the direction vector of the 0 scale mark;
and D2, determining the reading of the instrument according to the included angle delta between the pointer direction and the 0 scale mark direction.
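Step D1 can be sketched as follows (a minimal sketch; the cross-product disambiguation beyond arccos's [0, 180] degree range is our addition, since a dial sweep spans the full circle):

```python
import math

def included_angle(v1, v2):
    """Step D1: delta = arccos(v1 . v2 / (|v1||v2|)). arccos alone yields
    angles in [0, 180] degrees; the sign of the cross product (our
    addition) extends this to the full 0-360 sweep, here as the
    counterclockwise angle from v1 to v2 in standard axes (image
    coordinates, with y pointing down, may flip this convention)."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    cosang = dot / (math.hypot(*v1) * math.hypot(*v2))
    delta = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
    return delta if cross >= 0 else 360.0 - delta
```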
The invention also provides an illumination-robust pointer instrument automatic reading device, comprising a processor, wherein the processor adopts the above automatic reading method to realize automatic reading of the pointer instrument.
Further, the automatic pointer instrument reading device also comprises an image acquisition module for acquiring an instrument image; the processor completes automatic reading of the pointer type instrument based on the instrument image acquired by the image acquisition module.
Under the condition that the illumination and the instrument posture are uncontrollable, the invention can still ensure the high-precision reading of the instrument image and can be applied to the automatic reading of various instruments.
Advantageous effects
The invention discloses an illumination-robust automatic reading method and device for pointer instruments, with the following advantages: 1) the directions of the pointer and the 0 scale mark are determined based on locally invariant features and structural information, improving reading accuracy; the method detects the zero scale directly and does not depend on the inherent structure between the pointers, so it can be applied to the automatic reading of various other meters; 2) in different environments, the accuracy of its pointer detection and reading results is clearly superior to the most advanced existing methods; 3) the method is simultaneously robust and accurate without requiring control over a large amount of imaging side information such as illumination conditions and imaging angles, and is thus highly practical.
Drawings
FIG. 1 is a flow chart of a method for automatically reading a light-robust pointer meter in an example of the invention.
FIG. 2 is a test set, wherein FIG. 2 (a) is an image of a meter with uniform illumination in an example of the invention; FIG. 2 (b) is an image of a meter with uneven illumination in an example of the invention; fig. 2 (c) is an image of a meter with blurred and dimmed imaging.
FIG. 3 is a flow chart of processing a sample (test image) at the image preprocessing stage in an example of the present invention.
FIG. 4 is a flow chart of a pointer detection stage in an example of the invention.
FIG. 5 is a diagram of candidate contours of a pointer detected by an MSER in an example of the present invention.
FIG. 6 is a final profile of the pointer detected in an example of the present invention, wherein the marked profile in FIG. 6 (a) is the only profile determined in an example of the present invention after the redundant profile is eliminated using the ANMS algorithm; the center of gravity of the profile in fig. 6 (b) is the start point of the meter pointer detected in the example of the present invention, the black point is the end point of the meter pointer detected in the example of the present invention, and the white straight line is the finally fitted pointer straight line.
FIG. 7 is a flow chart of determining the 0 tick mark orientation in an embodiment of the present invention.
FIG. 8 is a template image used in an example of the present invention and the 0 tick mark direction therein, wherein FIG. 8 (a) is a template image used in an example of the present invention; fig. 8 (b) shows the 0 tick mark direction marked in the template image.
Fig. 9 is a feature point matching result of a template image and a meter image in this example.
Detailed Description
The invention is further described below with reference to the drawings:
the following examples are directed to automatic readings for a pointer water meter, and the complete flow is shown in fig. 1. The test set in the example consisted of 145 samples, the 145 samples being divided into three categories, 45 meter images imaged under uniform illumination [ as shown in fig. 2 (a) ], 45 meter images imaged under non-uniform illumination [ as shown in fig. 2 (b) ], and a blurred, dimmed meter image imaged at 45 [ as shown in fig. 2 (c) ].
Example 1:
the embodiment provides an automatic reading method of a pointer instrument with illumination robustness, which comprises the following steps:
step A, preprocessing an instrument image, and removing useless information in the instrument image;
step B, detecting the outline of the pointer in the instrument based on the image preprocessed in the step A, and determining the direction of the pointer;
step C, detecting the 0 scale mark direction in the instrument based on the image preprocessed in the step A;
and D, finishing reading.
Example 2:
in this embodiment, on the basis of embodiment 1, the implementation flow of step a is shown in fig. 3, and the specific processing procedure is as follows:
step A1, extracting the ROI (region of interest) of the instrument image using Hough circle detection;
step A2, operating on the instrument image based on the properties of a circle to eliminate useless information outside the ROI; the specific method is as follows: first, cropping away all areas outside the minimum bounding rectangle of the ROI on the instrument image; then, using the properties of the circle (with center (a, b) and radius r obtained from the Hough circle detection), judging whether each point q(i, j) inside the minimum bounding rectangle of the ROI lies within the ROI, and if q(i, j) is outside the ROI, setting the pixel value p(i, j) of that point to 255, obtaining the ROI image:

p(i, j) = 255, if (i - a)^2 + (j - b)^2 > r^2
step A3, enhancing the pointer information in the ROI based on a color-difference transformation; specifically: first graying the ROI image to obtain a grayscale map, then performing the R-Y operation to obtain an R_Y grayscale map; the pixel value Y of each pixel point in the grayscale map is:
Y=0.299*R+0.587*G+0.114*B
the pixel value R-Y of each pixel point in the R_Y gray scale map is as follows:
R-Y=0.701*R-0.587*G-0.114*B
where R, G and B are three components of the pixel values of the corresponding pixel points in the ROI image.
Example 3:
in this embodiment, on the basis of embodiment 2, the specific processing procedure of step B is as follows:
step B1, detecting all maximally stable extremal regions (MSERs) on the R_Y grayscale map using the MSER algorithm, and taking each MSER as a pointer candidate contour;
step B2, matching each pointer candidate contour to its corresponding pointer on the instrument based on an improved NMS algorithm (the ANMS algorithm), and determining the final contour of each pointer on the instrument;
step B3, based on the final contour of each pointer on the instrument, determining the starting point s(x_s, y_s) and end point t(x_t, y_t) of each pointer, and fitting the straight line on which each pointer lies to obtain the direction of each pointer;
example 4:
in this embodiment, on the basis of embodiment 3, the step B2 specifically includes the following steps:
step B21, calculating the minimum bounding rectangle of every pointer candidate contour;
step B22, calculating the region overlap IOU between the minimum bounding rectangles of any two pointer candidate contours:

IOU = S / (S_1 + S_2 - S)

where S_1 and S_2 are the areas of the minimum bounding rectangles of the two pointer candidate contours, and S is the area of their overlapping region, computed as:

S = (x_right_min - x_left_max) * (y_right_min - y_left_max)

where x_right_min and y_right_min take the smaller of the two rectangles' lower-right corner coordinate values, and x_left_max and y_left_max take the larger of their upper-left corner coordinate values (S is taken as 0 when either factor is negative);
step B23, setting a region overlap threshold IOU_th (an empirical parameter; IOU_th = 0.1 is set in this embodiment); comparing the IOU of any two pointer candidate contours against IOU_th, and if the IOU is larger than IOU_th, considering the two contours to correspond to the same pointer on the meter;
step B24, calculating the mean of the areas of all pointer candidate contours corresponding to the same pointer on the meter (the area of a pointer candidate contour can be taken as the number of pixels in the region it encloses), and selecting the pointer candidate contour whose area is closest to the mean as the final contour of the corresponding pointer on the meter [as shown in fig. 6(a)].
Example 5:
in this embodiment, on the basis of embodiment 3, the step B3 specifically includes the following steps:
step B31, taking the center of gravity s(x_s, y_s) of the region enclosed by the final contour of the pointer on the ROI image as the starting point of the pointer [as shown in fig. 6(b), where the white point is the starting point s(x_s, y_s)]; the gray-scale centroid method computes s(x_s, y_s) as:

x_s = Σ_(u,v)∈Ω u*f(u,v) / Σ_(u,v)∈Ω f(u,v)

y_s = Σ_(u,v)∈Ω v*f(u,v) / Σ_(u,v)∈Ω f(u,v)

where f(u, v) is the pixel value of the pixel point at coordinates (u, v) in the pointer region Ω on the ROI image.
step B32, determining the end point of the pointer based on the minimum-angle method;
let the final contour of a pointer consist of n points; for every three adjacent points among the n points, calculate the angle value at the middle point, thereby obtaining a series of angle values; comparing all the calculated angle values, the middle point corresponding to the minimum angle value is the end point t(x_t, y_t) [as shown in fig. 6(b), where the black dot is the end point t(x_t, y_t)];
let the three adjacent points be A, B, C with B the middle point; the magnitude of angle B in △ABC, i.e. the angle value at the middle point B, is calculated by the law of cosines:

∠B = arccos( (|AB|^2 + |BC|^2 - |AC|^2) / (2*|AB|*|BC|) )
example 6:
in this embodiment, on the basis of embodiment 1, the implementation flow of step C is shown in fig. 7, and the specific processing procedure is as follows:
step C1, selecting a frontal image of the instrument as the template image [as shown in fig. 8(a)], and manually marking the direction NM of the 0 scale mark on the template image [as shown in fig. 8(b)];
step C2, detecting feature points on the template image and the instrument image (the preprocessed instrument image, i.e. the ROI image) respectively, using the speeded-up robust features (SURF) technique;
step C3, matching the feature points on the template image and the instrument image [as shown in fig. 9], and rejecting mismatched points with the RANSAC algorithm to further ensure matching accuracy.
step C4, acquiring the 0 scale mark direction on the instrument image based on the 0 scale mark direction on the template image and the matched feature points between the template image and the instrument image.
Example 7:
in this embodiment, on the basis of embodiment 6, the step C4 specifically includes the following steps:
step C41, letting there be k feature points on the template image matched with the instrument image;
step C42, randomly selecting two feature points p(x_p, y_p) and w(x_w, y_w), with x_p < x_w, from the k matched feature points on the template image, and constructing the vector pw;
step C43, calculating the included angle θ between the vector pw on the template image and the 0 scale mark:

θ = arccos( (pw · NM) / (|pw| * |NM|) )

where N(x_N, y_N) and M(x_M, y_M) are respectively the starting point and end point of the 0 scale mark on the template image;
step C44, letting p'(x_p', y_p') and w'(x_w', y_w') be the feature points on the instrument image matched with p(x_p, y_p) and w(x_w, y_w) on the template image;
finding the straight line p'f' that makes the included angle θ with p'w' in the instrument image, and taking the direction of the straight line p'f' as a candidate 0 scale mark direction; letting the coordinates of f' be (x_f', y_f'), they are obtained by rotating w' about p' through the angle θ:

x_f' = x_p' + (x_w' - x_p')*cos θ - (y_w' - y_p')*sin θ
y_f' = y_p' + (x_w' - x_p')*sin θ + (y_w' - y_p')*cos θ
step C45, repeating the operations of steps C42 to C44 T times (T is the best parameter obtained experimentally; T = 6 is set in this embodiment), ensuring that the pair of feature points randomly selected from the k matched feature points on the template image differs each time; through these operations, T candidate 0 scale mark directions are found in the instrument image;
step C46, determining a final 0 scale mark direction from the T candidate 0 scale mark directions by adopting a voting strategy;
in the voting strategy, an angle threshold γ (an empirical parameter; γ = 1 is set in this embodiment) is set to evaluate the precision of each candidate 0 scale mark direction; the score of each candidate 0 scale mark direction equals the number of other candidate directions whose included angle with it is smaller than the angle threshold γ; the candidate 0 scale mark direction with the highest score is selected as the 0 scale mark direction in the instrument image.
Example 8:
in this embodiment, on the basis of embodiment 1, the specific processing procedure of step D is as follows:
step D1, calculating the included angle δ between the pointer direction and the 0 scale mark direction, with the calculation formula:

δ = arccos( (v1 · v2) / (|v1| * |v2|) )

where v1 is the direction vector of the pointer, and v2 is the direction vector of the 0 scale mark;
and D2, determining the reading of the instrument according to the included angle delta between the pointer direction and the 0 scale mark direction.
In this embodiment, the pointer-type water meter has four sub-dials, one pointer in each sub-dial. The scale values in each sub-dial are of 10 levels, the distribution among the values is uniform, the angle among the scale values is 36 degrees, and the scale value units in the four sub-dials are respectively 0.1, 0.01, 0.001 and 0.0001. The final reading of the pointer can be determined by the scheme
reading = 0.1·⌊δ1/36°⌋ + 0.01·⌊δ2/36°⌋ + 0.001·⌊δ3/36°⌋ + 0.0001·⌊δ4/36°⌋
wherein δ1, δ2, δ3 and δ4 are respectively the included angles between the pointer direction and the 0 scale mark direction in the four sub-dials.
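Since the combination formula survives only as an image placeholder, here is a minimal sketch of step D2 for the four-sub-dial water meter, assuming each dial contributes the integer number of full 36° steps its pointer has swept (the flooring is my assumption, not stated in the text):

```python
import math

def water_meter_reading(d1, d2, d3, d4):
    """Combine the four sub-dial angles (degrees from the 0 mark) into one
    reading; each full 36-degree step is one unit of that dial's digit.
    Units per dial follow the embodiment: 0.1, 0.01, 0.001, 0.0001.
    """
    units = (0.1, 0.01, 0.001, 0.0001)
    return sum(u * math.floor(delta / 36.0) for u, delta in zip(units, (d1, d2, d3, d4)))
```

For example, pointers at 108°, 72°, 36° and 0° on the four dials give 3, 2, 1 and 0 divisions, i.e. a reading of 0.321.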
Example 9:
this embodiment provides an illumination-robust pointer instrument automatic reading device, comprising a processor, wherein the processor adopts the above pointer instrument automatic reading method to realize automatic reading of the pointer instrument.
Example 10:
in this embodiment, on the basis of embodiment 9, the automatic pointer meter reading device further includes an image acquisition module, configured to acquire a meter image; the processor completes automatic reading of the pointer type instrument based on the instrument image acquired by the image acquisition module.
The effect of the invention was verified on samples from a test set. The results show that, for instrument images captured under uniform illumination, instrument images captured under non-uniform illumination, and 45 blurred and dimly imaged instrument images, the invention completes pointer detection and reading accurately, with accuracy clearly superior to the existing state-of-the-art methods. The method is simultaneously robust and accurate, requires no control over illumination conditions, imaging angles or other auxiliary imaging information, and therefore has strong practicability.

Claims (7)

1. An illumination-robust automatic reading method for a pointer instrument, which is characterized by comprising the following steps:
step A, preprocessing an instrument image;
b, detecting the outline of a pointer in the instrument image preprocessed in the step A, and determining the direction of the pointer;
step C, detecting the 0 scale mark direction of the instrument image preprocessed in the step A;
step D, finishing reading;
the specific treatment process of the step A is as follows:
a1, extracting the ROI (region of interest) of the instrument image by using the Hough circle detection algorithm;
a2, firstly cutting out all areas outside the minimum bounding rectangle of the ROI region on the instrument image; then judging whether each pixel point q(i, j) in the minimum bounding rectangle of the ROI region lies within the ROI region, and if q(i, j) is outside the ROI region, setting its pixel value p(i, j) to 255, thereby obtaining the ROI image;
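A minimal sketch of the masking in step A2, assuming the dial circle from the Hough detection is given as a centre (cx, cy) and radius r in pixel coordinates:

```python
import numpy as np

def mask_roi(image, cx, cy, r):
    """Crop to the bounding square of the detected dial circle and
    white out (set to 255) every pixel outside the circular ROI."""
    h, w = image.shape[:2]
    # clip the bounding box of the circle to the image frame
    x0, x1 = max(cx - r, 0), min(cx + r + 1, w)
    y0, y1 = max(cy - r, 0), min(cy + r + 1, h)
    roi = image[y0:y1, x0:x1].copy()
    # pixel coordinate grids over the cropped window
    ys, xs = np.mgrid[y0:y1, x0:x1]
    outside = (xs - cx) ** 2 + (ys - cy) ** 2 > r ** 2
    roi[outside] = 255
    return roi
```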
a3, performing graying processing on the ROI image to obtain a gray-scale map, and performing an R-Y operation on the ROI image to obtain an R-Y gray-scale map; the pixel value Y of each pixel point in the gray-scale map is:
Y=0.299*R+0.587*G+0.114*B
the pixel value R-Y of each pixel point in the R-Y gray-scale map is:
R-Y=0.701*R-0.587*G-0.114*B
wherein R, G and B are three components of pixel values of the corresponding pixel points in the ROI image;
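The two maps of step A3 can be computed directly from the formulas above; a sketch with NumPy (channel order R, G, B is assumed):

```python
import numpy as np

def to_gray_and_ry(rgb):
    """Compute the gray-scale map Y and the R-Y map from an RGB ROI image.

    Y uses the luminance weights quoted in the text (the ITU-R BT.601
    coefficients); R-Y subtracts the luminance from the red channel, which
    drives achromatic (gray/white) pixels toward 0 while keeping strongly
    red pixels, such as a red pointer, at high values.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y = 0.299 * r + 0.587 * g + 0.114 * b       # gray-scale map
    ry = 0.701 * r - 0.587 * g - 0.114 * b      # R-Y gray-scale map
    return y, ry
```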
the specific processing procedure of the step B is as follows:
step B1, detecting all maximally stable extremal regions on the R-Y gray-scale map by adopting the MSER algorithm, and taking each maximally stable extremal region as a pointer candidate contour;
step B2, matching the corresponding relation between candidate outlines of the pointers and the pointers of the instrument based on an improved NMS algorithm, and determining the final outline of each pointer on the instrument;
step B3, determining a starting point and an ending point of each pointer respectively based on a final contour of each pointer on the instrument, fitting a straight line where each pointer is located, and obtaining a direction of each pointer;
the step B2 specifically comprises the following steps:
step B21, calculating the area of the minimum bounding rectangle of each pointer candidate contour;
step B22, calculating the region overlap degree IOU between the minimum bounding rectangles of any two pointer candidate contours:
IOU = S / (S1 + S2 − S)
wherein S1 and S2 are the areas of the minimum bounding rectangles of the two pointer candidate contours, and S is the area of the overlap region of the two rectangles, calculated as:
S = (x_right_min − x_left_max) * (y_right_min − y_left_max)
wherein x_right_min and y_right_min take the smaller of the bottom-right corner coordinates of the two minimum bounding rectangles, and x_left_max and y_left_max take the larger of the top-left corner coordinates;
step B23, setting a region overlap threshold IOU_th; comparing the IOU of any two pointer candidate contours with IOU_th, and if the IOU is greater than IOU_th, considering the two contours to correspond to the same pointer on the instrument;
step B24, calculating the average area of all pointer candidate contours corresponding to the same pointer on the instrument, and selecting the pointer candidate contour whose area is closest to that average as the final contour of the corresponding pointer.
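Steps B21–B24 can be sketched as follows; the rectangle representation (x_left, y_top, x_right, y_bottom) with y growing downward, and the greedy grouping order, are assumptions of this sketch:

```python
def iou(box_a, box_b):
    """Overlap degree of two axis-aligned boxes (x_left, y_top, x_right, y_bottom)."""
    x_left_max = max(box_a[0], box_b[0])
    y_top_max = max(box_a[1], box_b[1])
    x_right_min = min(box_a[2], box_b[2])
    y_bottom_min = min(box_a[3], box_b[3])
    # overlap area S; clamped to zero when the boxes do not intersect
    s = max(0, x_right_min - x_left_max) * max(0, y_bottom_min - y_top_max)
    s1 = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    s2 = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return s / (s1 + s2 - s)

def pick_final_contours(boxes, areas, iou_th=0.5):
    """Group candidates whose boxes overlap above iou_th (same pointer), then
    keep, per group, the index whose area is closest to the group mean."""
    groups = []
    for i, box in enumerate(boxes):
        for group in groups:
            if any(iou(box, boxes[j]) > iou_th for j in group):
                group.append(i)
                break
        else:
            groups.append([i])
    finals = []
    for group in groups:
        mean = sum(areas[j] for j in group) / len(group)
        finals.append(min(group, key=lambda j: abs(areas[j] - mean)))
    return finals
```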
2. The method for automatically reading a pointer instrument robust to illumination according to claim 1, wherein said step B3 comprises the steps of:
step B31, for each pointer on the instrument, taking the center of gravity of the region enclosed by the final contour of that pointer on the ROI image as the starting point of the pointer;
step B32, for each pointer on the instrument, determining the end point of the pointer based on a minimum angle method;
suppose the final contour of a pointer consists of n pixel points; for any three adjacent pixel points among the n pixel points, the angle value corresponding to the middle pixel point is calculated, yielding a series of angle values; all the angle values are compared, and the middle pixel point corresponding to the minimum angle value is the end point of the pointer; the angle value corresponding to the middle pixel point is calculated as follows: denoting the three adjacent pixel points as A, B and C with B in the middle, the magnitude of angle B in △ABC is the angle value corresponding to the middle pixel point B.
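A sketch of the minimum-angle method of step B32; in practice some pixel spacing between A, B and C may be used to reduce quantization noise, but the immediate-neighbour step is kept here for simplicity:

```python
import math

def pointer_endpoint(contour):
    """Minimum-angle method: for each contour point B with neighbours A and C,
    compute angle ABC; the point with the smallest angle is the pointer tip.
    `contour` is an ordered, closed list of (x, y) points."""
    n = len(contour)
    best_angle, tip = math.pi, contour[0]
    for i in range(n):
        ax, ay = contour[i - 1]          # previous point A (wraps around)
        bx, by = contour[i]              # middle point B
        cx, cy = contour[(i + 1) % n]    # next point C (wraps around)
        bax, bay = ax - bx, ay - by
        bcx, bcy = cx - bx, cy - by
        dot = bax * bcx + bay * bcy
        norm = math.hypot(bax, bay) * math.hypot(bcx, bcy)
        if norm == 0:
            continue  # degenerate duplicate points
        angle = math.acos(max(-1.0, min(1.0, dot / norm)))
        if angle < best_angle:
            best_angle, tip = angle, (bx, by)
    return tip
```

On an elongated contour the sharpest interior angle occurs at the tip, which is why the minimum angle singles out the pointer end.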
3. The method for automatically reading a pointer instrument robust to illumination according to claim 1, wherein the specific processing procedure in the step C is as follows:
step C1, selecting a front image of an instrument as a template image, and manually marking the direction of a 0 scale mark on the template image;
step C2, detecting feature points on the template image and the instrument image respectively by using the Speeded-Up Robust Features (SURF) technique;
step C3, matching the feature points on the template image and the instrument image, and rejecting mismatched points by adopting the RANSAC algorithm;
and C4, acquiring the 0 scale mark direction on the instrument image based on the 0 scale mark direction on the template image and the matching characteristic points of the template image and the instrument image.
4. The method for automatically reading a pointer instrument robust to illumination according to claim 3, wherein said step C4 comprises the steps of:
step C41, setting k characteristic points matched with the instrument image on the template image;
step C42, randomly selecting two feature points p(x_p, y_p) and w(x_w, y_w), with x_p < x_w, from the k matched feature points on the template image, and constructing the vector pw;
step C43, calculating an included angle theta between a vector pw on the template image and a 0 scale mark:
θ = arccos( [ (x_w − x_p)(x_M − x_N) + (y_w − y_p)(y_M − y_N) ] / ( |pw| · |NM| ) )
wherein N(x_N, y_N) and M(x_M, y_M) are the starting point and the end point of the 0 scale mark on the template image, respectively;
step C44, letting the feature points on the instrument image matched with p(x_p, y_p) and w(x_w, y_w) on the template image be p′(x_p′, y_p′) and w′(x_w′, y_w′);
finding the straight line p′f′ whose included angle with p′w′ is θ in the instrument image, and taking the direction of the straight line p′f′ as a candidate 0 scale mark direction; letting the coordinates of f′ be (x_f′, y_f′), the calculation formula is as follows:
x_f′ = x_p′ + (x_w′ − x_p′)·cos θ − (y_w′ − y_p′)·sin θ
y_f′ = y_p′ + (x_w′ − x_p′)·sin θ + (y_w′ − y_p′)·cos θ
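The computation of f′ amounts to rotating the vector p′w′ about p′ by θ; a sketch (the rotation sense depends on the image y-axis convention, so the signs of the sin terms are an assumption):

```python
import math

def candidate_zero_direction(p_prime, w_prime, theta):
    """Rotate the matched vector p'w' about p' by theta (radians) to get f',
    so that the line p'f' makes the included angle theta with p'w'."""
    px, py = p_prime
    dx, dy = w_prime[0] - px, w_prime[1] - py
    xf = px + dx * math.cos(theta) - dy * math.sin(theta)
    yf = py + dx * math.sin(theta) + dy * math.cos(theta)
    return xf, yf
```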
step C45, repeating the operations from step C42 to step C44 for T times, and ensuring that two feature points randomly selected from k matched feature points on the template image are different each time; through the operation, T candidate 0 scale mark directions are found in the instrument image;
step C46, determining a final 0 scale mark direction from the T candidate 0 scale mark directions by adopting a voting strategy;
in the voting strategy, an angle threshold γ is set to evaluate the accuracy of each candidate 0 scale mark direction; for each candidate 0 scale mark direction, the score value equals the number of other candidate 0 scale mark directions whose included angle with it is smaller than the angle threshold γ; and the candidate 0 scale mark direction with the highest score value is selected as the 0 scale mark direction in the instrument image.
5. The method for automatically reading a pointer instrument robust to illumination according to claim 1, wherein the specific processing procedure of the step D is as follows:
step D1, calculating an included angle delta between the pointer direction and the 0 scale mark direction on the instrument image, wherein a calculation formula is as follows:
δ = arccos( (v1 · v2) / (|v1| · |v2|) )
wherein v1 is the direction vector of the pointer on the instrument image, and v2 is the direction vector of the 0 scale mark on the instrument image;
and D2, determining the reading of the instrument according to an included angle delta between the pointer direction and the 0 scale mark direction on the instrument image.
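A sketch of the included-angle computation of step D1, clamping the cosine to [−1, 1] to guard against floating-point drift:

```python
import math

def angle_between(v1, v2):
    """Included angle delta (degrees) between the pointer direction v1 and the
    0 scale mark direction v2, via the arccosine of the normalized dot product."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```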
6. An illumination-robust pointer instrument automatic reading device, comprising a processor, wherein the processor adopts the method of any one of claims 1 to 5 to realize pointer instrument automatic reading.
7. The illumination-robust pointer instrument automatic reading device of claim 6, further comprising an image acquisition module for acquiring an instrument image; the processor completes automatic reading of the pointer type instrument based on the instrument image acquired by the image acquisition module.
CN201911239888.3A 2019-12-06 2019-12-06 Illumination-robust pointer instrument automatic reading method and device Active CN111008628B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911239888.3A CN111008628B (en) 2019-12-06 2019-12-06 Illumination-robust pointer instrument automatic reading method and device

Publications (2)

Publication Number Publication Date
CN111008628A CN111008628A (en) 2020-04-14
CN111008628B true CN111008628B (en) 2023-04-21

Family

ID=70115048

Country Status (1)

Country Link
CN (1) CN111008628B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104392206A (en) * 2014-10-24 2015-03-04 南京航空航天大学 Image processing method for automatic pointer-type instrument reading recognition
CN106529519A (en) * 2016-09-19 2017-03-22 国家电网公司 Automatic number identification method and system of power pointer type instrument
CN106960207A (en) * 2017-04-26 2017-07-18 佛山市南海区广工大数控装备协同创新研究院 A kind of car steering position gauge field multipointer instrument automatic recognition system and method based on template matches

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9508151B2 (en) * 2014-07-10 2016-11-29 Ditto Labs, Inc. Systems, methods, and devices for image matching and object recognition in images using image regions

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xing Yanchao et al. Deformed document character detection based on MSER and NMS. Science and Technology Innovation. 2018, No. 32, pp. 101-102. *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant