CN111191646A - Intelligent identification method for pointer instrument - Google Patents

Intelligent identification method for pointer instrument

Info

Publication number
CN111191646A
CN111191646A
Authority
CN
China
Prior art keywords: image, value, straight line, point, pointer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911336196.0A
Other languages
Chinese (zh)
Other versions
CN111191646B (en)
Inventor
庄莉
梁懿
郑清风
潘进土
陈锴
王从
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Information and Telecommunication Co Ltd
Fujian Yirong Information Technology Co Ltd
Great Power Science and Technology Co of State Grid Information and Telecommunication Co Ltd
Original Assignee
State Grid Information and Telecommunication Co Ltd
Fujian Yirong Information Technology Co Ltd
Great Power Science and Technology Co of State Grid Information and Telecommunication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Information and Telecommunication Co Ltd, Fujian Yirong Information Technology Co Ltd, Great Power Science and Technology Co of State Grid Information and Telecommunication Co Ltd filed Critical State Grid Information and Telecommunication Co Ltd
Priority to CN201911336196.0A priority Critical patent/CN111191646B/en
Publication of CN111191646A publication Critical patent/CN111191646A/en
Application granted granted Critical
Publication of CN111191646B publication Critical patent/CN111191646B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V10/243: Aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations
    • G06V10/267: Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques, by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V10/30: Noise filtering
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V2201/02: Recognising information on displays, dials, clocks
    • Y04S10/50: Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

An intelligent identification method for a pointer instrument comprises the following steps: cropping the input image according to the recognition range set by the instrument parameter model, and carrying out distortion correction on the deflected image according to the deflection angle set by the instrument parameter model; carrying out image preprocessing on the corrected image; carrying out Hough line detection on the preprocessed image to obtain the position of each straight line in the image; judging and filtering each straight line according to the parameters of the instrument parameter model to find the pointer line; determining the direction of the pointer line from the found line and the instrument parameter model; and calculating the reading of the instrument from the range parameters of the instrument parameter model and the direction of the pointer line, and outputting the result. The invention provides an intelligent identification method for pointer instruments with a high algorithm execution speed and a good identification effect.

Description

Intelligent identification method for pointer instrument
[Technical Field]
The invention belongs to the technical field of artificial intelligence, and particularly relates to an intelligent identification method for a pointer instrument.
[Background of the Invention]
A substation intelligent auxiliary system is a requirement of the modern development of substations. Through front-end acquisition, combined with platform intelligent video, artificial-intelligence recognition algorithms and platform big-data analysis models, it realizes intelligent video monitoring and inspection of the substation, analyzes meter readings, and analyzes and raises alarms on equipment states, thereby reducing the number of manual site visits for routine inspection and anomaly detection, improving the efficiency and quality of operation and maintenance, replacing manual patrol inspection, and guaranteeing the safe operation of the power supply system.
Traditional intelligent identification algorithms adopt methods such as picture comparison and color statistics; they run slowly and their identification effect is poor.
[Summary of the Invention]
The technical problem to be solved by the invention is to provide an intelligent identification method for pointer instruments that has a high algorithm execution speed and a good identification effect.
The invention is realized by the following steps:
an intelligent identification method for a pointer instrument comprises the following steps:
step 101: cropping the input image according to the recognition range set by the instrument parameter model, and carrying out distortion correction on the deflected image according to the deflection angle set by the instrument parameter model;
step 102: carrying out image preprocessing on the corrected image:
the preprocessing comprises image enhancement, image smoothing, image binarization, morphological processing to remove noise points, connected-domain retention, and locating the position of the attention region;
step 103: carrying out Hough line detection on the preprocessed image to obtain the position of each straight line in the image;
step 104: judging and filtering each straight line obtained in the step 103 according to the parameters of the instrument parameter model, and finding out a pointer straight line;
step 105: determining the direction of the pointer straight line according to the found pointer straight line and the instrument parameter model; the method specifically comprises the following steps:
after the pointer line is found, the direction of the pointer is determined from the distances between the two end points of the line and the pointer center point, combined with the meter center-point coordinates of the meter parameter model: the end point close to the center point is the tail of the pointer, and the end point far from the center point is the tip of the pointer;
step 106: and calculating the reading of the instrument according to the instrument range parameters of the instrument parameter model and the direction of the straight line of the pointer, and outputting a result.
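Before each step is elaborated, the following sketch shows how steps 101-106 might chain together in practice. It assumes Python with OpenCV and NumPy; the meter_model dictionary and all of its keys (crop, camera_matrix, dist_coeffs, center, theta_min, theta_max, v_min, v_max) are hypothetical stand-ins for the instrument parameter model, and picking the longest detected line stands in for the parameter-based line filtering of step 104.

```python
import cv2
import numpy as np

def read_pointer_meter(image, meter_model):
    # Step 101: crop to the recognition range, then correct lens distortion.
    x, y, w, h = meter_model["crop"]
    roi = image[y:y + h, x:x + w]
    roi = cv2.undistort(roi, meter_model["camera_matrix"], meter_model["dist_coeffs"])

    # Step 102: preprocess (enhance, smooth, binarize, open).
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                         # image enhancement
    gray = cv2.GaussianBlur(gray, (5, 5), 0)              # image smoothing
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

    # Step 103: Hough line detection.
    lines = cv2.HoughLinesP(binary, 1, np.pi / 180, threshold=60,
                            minLineLength=40, maxLineGap=5)
    if lines is None:
        return None

    # Step 104 (simplified): take the longest line as the pointer candidate.
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))

    # Step 105: the end point farther from the meter center is the tip.
    cx, cy = meter_model["center"]
    if np.hypot(x1 - cx, y1 - cy) > np.hypot(x2 - cx, y2 - cy):
        x1, y1, x2, y2 = x2, y2, x1, y1
    theta = np.degrees(np.arctan2(y2 - cy, x2 - cx))      # angle convention is model-defined

    # Step 106: linear mapping of the angle onto the range, per formula (11).
    m = meter_model
    frac = (theta - m["theta_min"]) / (m["theta_max"] - m["theta_min"])
    return m["v_min"] + frac * (m["v_max"] - m["v_min"])
```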
Further, the distortion correction in step 101 specifically includes:
assuming that, before distortion, the pixel coordinates of each point in the image can be obtained from equations (1) and (2):
x' = Xc/Zc, y' = Yc/Zc (1)
u = fx*x' + cx, v = fy*y' + cy (2)
wherein Xc, Yc, Zc are the X, Y and Z values of the camera coordinate system; x', y' are the intermediate products of normalizing Xc, Yc, Zc (dimensionless multiples); fx, fy are scale factors; cx, cy are internal parameters; u is the X value and v the Y value of the image coordinate system;
if no distortion exists, then ideally the coordinate transformation of the camera image can be calculated according to equations (1) and (2);
in the case of distortion, the distorted coordinates are:
x'' = x'*(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p1*x'*y' + p2*(r^2 + 2*x'^2)
y'' = y'*(1 + k1*r^2 + k2*r^4 + k3*r^6) + p1*(r^2 + 2*y'^2) + 2*p2*x'*y'
wherein x'', y'' are the X and Y values of the distorted camera coordinates; k1, k2, k3 are the radial distortion parameters; p1, p2 are the tangential distortion parameters; x', y' are the normalized intermediate products of the camera coordinates Xc, Yc, Zc; and r^2 = x'^2 + y'^2 (3);
at the same time, the new coordinates of each pixel of the distorted image are obtained as:
ud = fx*x'' + cx, vd = fy*y'' + cy (4)
wherein ud, vd are the X and Y values of the distorted image coordinates; fx, fy are scale factors; cx, cy are internal parameters; x'', y'' are the X and Y values of the distorted camera coordinates;
in this way the mapping of each point through the whole coordinate transformation is obtained: from the camera coordinate system of the image, through the distortion, to the distorted image;
the purpose of distortion correction is to find the pixel value relationship of the corresponding point pair, and assign the pixel value of the distorted position to the original position, that is:
f(u,v)=f(h(u,v))=f(ud,vd) (5),
the mapping relation from the undistorted image to the distorted image is as follows:
f(ud,vd)=f(u,v) (6),
because of the distortion, the coordinates before distortion are integers while the coordinates after distortion are not necessarily integers; since all coordinates in the image pixel coordinate system are integers, a rounding or interpolation operation is usually needed during the assignment.
Further, the image preprocessing in step 102 specifically includes:
step 10201: carrying out image enhancement processing on the corrected image to increase the definition of the identification area;
the method for using the spatial domain-based algorithm for the gray level image specifically comprises the following steps: directly operating the gray level of the image, and dividing the operation into a point operation algorithm and a neighborhood enhancement algorithm; (1) the point operation, namely contrast enhancement, contrast stretching or gray scale transformation, is to process each point in the image individually in order to make the image imaged uniformly or expand the dynamic range of the image and expand the contrast. The gray value of each pixel point of the new image is only calculated by corresponding input image points, only the gray value of each point is changed, and the spatial relationship of the gray values is not changed; (2) the neighborhood enhancement algorithm is divided into two types of image smoothing and sharpening, wherein smoothing is generally used for eliminating image noise, but edge blurring is easily caused, and common algorithms comprise mean filtering and median filtering; the sharpening aims at highlighting the edge contour of an object and is convenient for target identification, and common algorithms comprise a gradient method, an operator, high-pass filtering, a mask matching method and a statistical difference method;
the method for using the frequency domain-based algorithm for the color image specifically comprises the following steps: the image is regarded as a two-dimensional signal, the enhancement processing of the image is realized by adopting image Fourier transform, the basis is convolution theorem, and the algorithm is an indirect enhancement algorithm; the low-pass filtering method is adopted, only the low-frequency signal passes through, and the noise in the image can be removed; by adopting a high-pass filtering method, high-frequency signals such as edges and the like can be enhanced, so that a blurred picture becomes clear;
step 10202: then, carrying out image smoothing treatment, wherein the image smoothing adopts a Gaussian filter;
the Gaussian filtering is to convolute each pixel point of the input array with a Gaussian kernel and take the convolution sum as an output pixel value; the two-dimensional gaussian function is as follows:
G0(x, y) = A*exp(-((x - μx)^2/(2*σx^2) + (y - μy)^2/(2*σy^2))) (7)
wherein G0(x, y) is the output pixel value, A is the amplitude, μ is the mean (i.e. the position of the peak), and σ is the standard deviation; the variables x and y each have their own mean (μx, μy) and standard deviation (σx, σy);
step 10203: then, carrying out image binarization processing, and setting the gray value of a pixel point on the image to be 0 or 255, namely, the process of enabling the whole image to show an obvious black-and-white effect;
the image binarization processing adopts an OTSU algorithm;
the OTSU algorithm is also called maximum inter-class variance method, and the central idea is that the threshold value t should make the inter-class variance between the target and the background maximum;
for an image, when the segmentation threshold between foreground and background is t, let the fraction of foreground points in the image be w0 with mean u0, and the fraction of background points be w1 with mean u1; the mean u of the whole image is then
u = w0*u0 + w1*u1 (8);
the objective function is established as
g(t) = w0*(u0 - u)^2 + w1*(u1 - u)^2 (9);
wherein g (t) is the inter-class variance expression when the segmentation threshold is t;
the OTSU algorithm enables g (t) to obtain a global maximum value, and when g (t) is maximum, the corresponding t is called an optimal threshold value;
step 10204: morphological processing is then carried out to remove noise points, using the opening operation; the process of erosion followed by dilation is called opening, and it is used to eliminate small objects, separate objects at thin connections and smooth the boundary of larger objects without significantly changing their area;
step 10205: the connected-domain retention operation is then performed, searching for the largest connected domain as the attention region; the connected-domain algorithm uses the Two-Pass (two-scan) method;
the two-pass method finds and marks all connected regions in an image by scanning it twice: during the first scan each pixel position is given a label, and since a pixel set belonging to one connected region may receive one or more different labels, labels that belong to the same connected region but have different values must be merged, i.e. the equality relationships between them are recorded; the second scan assigns the pixels marked by labels with an equality relationship (equal_labels) to one connected region and gives them the same label, usually the minimum value in equal_labels;
the simplified steps of the Two-Pass algorithm are given below:
(1) first scan:
visit the current pixel B(x, y); if B(x, y) == 1:
a. if the pixel values in the neighborhood of B(x, y) are all 0, give B(x, y) a new label:
label += 1, B(x, y) = label;
b. if there are pixels (Neighbors) with pixel values > 1 in the neighborhood of B(x, y):
1) assign the minimum of Neighbors to B(x, y):
B(x, y) = min{Neighbors}
2) record the equality relation among the labels in Neighbors, i.e. that these labels belong to the same connected region:
labelSet[i] = {label_m, ..., label_n}; all labels in labelSet[i] belong to the same connected region;
(2) second scan:
visit the current pixel B(x, y); if B(x, y) > 1:
find the minimum label value that is in the same equality relation as label = B(x, y) and assign it to B(x, y); after the scan is completed, pixels with the same label value in the image form one connected region.
Further, the step 103 specifically includes:
after the image preprocessing is finished, carrying out straight line detection to find out the position information of a straight line in the image, wherein the straight line detection adopts a Hough straight line detection algorithm;
the Hough transform is a method in image processing for identifying geometric shapes in an image; using the duality of points and lines, a given curve of the original image space is mapped, through its parametric expression, to a point of the Hough parameter space, so that the problem of detecting the given curve in the original image becomes the problem of finding a peak in the Hough parameter space, i.e. detection of a global characteristic is converted into detection of a local characteristic; a point on a straight line in Euclidean space is a sinusoid in the Hough parameter space; several points on the same straight line in Euclidean space form a cluster of sinusoids in the Hough parameter space which intersect at one point, called the peak point; a peak point in the Hough parameter space corresponds to a straight line in Euclidean space;
the equation of the straight line is:
xcosθ+ysinθ=ρ (10)
wherein x, y are the x and y values of a data point in Euclidean space, ρ is the distance from the line to the origin in polar coordinates, and θ is the angle of the line in polar coordinates;
the data points are represented as (x, y) in Euclidean space and (ρ, θ) in Hough parameter space;
the specific flow of the transformation algorithm is as follows:
(1) quantize the Hough parameter space into a grid and initialize the accumulator H(ρ, θ);
(2) for each (x, y) in Euclidean space, perform step (3);
(3) for θ = 90° to 180°, compute ρ = x*cosθ + y*sinθ and set H(ρ, θ) = H(ρ, θ) + 1;
(4) set a threshold and search for the peak points of the parameter space; each peak point of the parameter space corresponds to a straight line in Euclidean space;
further, step 106 specifically includes:
after the direction of the pointer is determined, the angle θ of the pointer can be calculated, and the reading of the meter is calculated by combining the range parameters of the meter parameter model; the calculation formula is
V = (θ - θmin)/(θmax - θmin)*(Vmax - Vmin) + Vmin (11)
wherein V is the meter reading, θmin is the minimum-range angle, θmax is the maximum-range angle, Vmin is the range minimum, and Vmax is the range maximum.
The invention has the advantages that: the method breaks away from traditional image-comparison identification algorithms and adopts object modeling, image preprocessing techniques and image analysis algorithms; it overcomes the problems of a slow identification process and inaccurate identification results, achieves fast identification of the identified object with accurate results, and attains real-time online identification.
[Description of the Drawings]
The invention will be further described with reference to the following examples with reference to the accompanying drawings.
FIG. 1 is a schematic flow diagram of the present invention.
FIG. 2 is a schematic diagram comparing the pointer instrument image before and after enhancement in the present invention.
FIG. 3 is a schematic diagram of the conversion from Euclidean space into the Hough parameter space in the present invention.
[Detailed Description]
An intelligent identification method for a pointer instrument is shown in fig. 1, and comprises the following steps:
step 101: and carrying out image cutting on the input image according to the recognition range set by the instrument parameter model, and carrying out distortion correction on the deflected image according to the deflection angle set by the instrument parameter model.
The method specifically comprises the following steps:
step 10101: and (4) image clipping is carried out on the input image according to the recognition range set by the instrument parameter model.
Step 10102: and the deflected image is subjected to distortion correction according to the deflection angle set by the instrument parameter model.
In the image preprocessing process, distortion correction processing is carried out on an image:
distortion is the distortion that tends to generate during the acquisition or display of an image: geometric distortion, grayscale distortion, color distortion. The reasons for image distortion are: aberrations, distortion, bandwidth limitations, camera pose, scanning non-linearity, relative motion, etc. of the imaging system; the non-uniformity of the sensing device causes the response inconsistency, the working state of the sensing device, the non-uniform illumination condition or the point light source illumination and the like; the photoelectric characteristics of the display devices are not uniform; the existence of image distortion affects the visual effect and is one of the important factors affecting the shape detection and the geometric dimension measurement accuracy of the image detection system.
Imaging distortion from a camera arises "naturally" and is unavoidable, mainly because of the lens imaging principle; the principle of the distortion can be referred to the camera model. The distortion can in principle be decomposed into tangential distortion and radial distortion, as given below:
x' = x*(1 + k1*r^2 + k2*r^4) + 2*p1*x*y + p2*(r^2 + 2*x^2)
y' = y*(1 + k1*r^2 + k2*r^4) + p1*(r^2 + 2*y^2) + 2*p2*x*y
wherein [x', y'] is the position after distortion, [x, y] is the position before distortion, and ki, pi are the distortion coefficients. There are of course more distortion coefficients than these, but in general only these four (k1, k2, p1, p2) need to be considered.
The key point of the distortion correction is to find the corresponding relation of the point positions before and after the distortion.
Assuming that, before distortion, the pixel coordinates of each point in the image can be obtained from equations (1) and (2):
x' = Xc/Zc, y' = Yc/Zc (1)
u = fx*x' + cx, v = fy*y' + cy (2)
wherein:
Xc - X value of the camera coordinate system,
Yc - Y value of the camera coordinate system,
Zc - Z value of the camera coordinate system,
x', y' - intermediate products of normalizing Xc, Yc, Zc (dimensionless multiples),
fx, fy - scale factors,
cx, cy - internal parameters,
u - X value of the image coordinate system,
v - Y value of the image coordinate system;
if no distortion exists, then ideally the coordinate transformation of the camera image can be calculated according to equations (1) and (2);
in the case of distortion, the distorted coordinates are:
x'' = x'*(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p1*x'*y' + p2*(r^2 + 2*x'^2)
y'' = y'*(1 + k1*r^2 + k2*r^4 + k3*r^6) + p1*(r^2 + 2*y'^2) + 2*p2*x'*y'
wherein:
x'', y'' - X and Y values of the distorted camera coordinates,
k1, k2, k3 - radial distortion parameters,
p1, p2 - tangential distortion parameters,
x', y' - intermediate products of normalizing the camera coordinates Xc, Yc, Zc (dimensionless multiples),
and r^2 = x'^2 + y'^2 (3);
at the same time, the new coordinates of each pixel of the distorted image are obtained as:
ud = fx*x'' + cx, vd = fy*y'' + cy (4)
wherein:
ud, vd - X and Y values of the distorted image coordinates,
fx, fy - scale factors,
cx, cy - internal parameters,
x'', y'' - X and Y values of the distorted camera coordinates;
In this way the mapping of each point through the whole coordinate transformation is obtained: from the camera coordinate system of the image, through the distortion, to the distorted image;
the purpose of distortion correction is to find the pixel value relationship of the corresponding point pair, and assign the pixel value of the distorted position to the original position, that is:
f(u,v)=f(h(u,v))=f(ud,vd) (5),
the mapping relation from the undistorted image to the distorted image is as follows:
f(ud,vd)=f(u,v) (6),
Because of the distortion, the coordinates before distortion are integers while the coordinates after distortion are not necessarily integers; since all coordinates in the image pixel coordinate system are integers, a rounding or interpolation operation is usually needed during the assignment.
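As a concrete illustration of equations (1) to (6), the following sketch (a NumPy-based assumption, not code from the patent) applies the forward distortion model to every target pixel and copies the source pixel over, i.e. f(u, v) = f(ud, vd), using the rounding mentioned above:

```python
import numpy as np

def undistort_image(img, fx, fy, cx, cy, k1, k2, k3, p1, p2):
    h, w = img.shape[:2]
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Invert equation (2): back-project pixels to normalized coordinates.
    xp = (u - cx) / fx
    yp = (v - cy) / fy

    # Apply radial (k) and tangential (p) distortion, with r^2 as in (3).
    r2 = xp**2 + yp**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xpp = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp**2)
    ypp = yp * radial + p1 * (r2 + 2 * yp**2) + 2 * p2 * xp * yp

    # Equation (4): project the distorted coordinates back to pixels.
    ud = np.clip(np.rint(fx * xpp + cx).astype(int), 0, w - 1)
    vd = np.clip(np.rint(fy * ypp + cy).astype(int), 0, h - 1)

    # Equation (6): f(u, v) = f(ud, vd), copy source pixels to new positions.
    return img[vd, ud]
```

In practice an interpolating resampler (e.g. bilinear) would replace the nearest-neighbour rounding shown here.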
Step 102: image preprocessing is carried out on the corrected image, comprising image enhancement, image smoothing, image binarization, morphological processing to remove noise points, connected-domain retention, and locating the position of the attention region;
the image preprocessing is carried out on the corrected image, and the method mainly comprises the following steps:
step 10201: and carrying out image enhancement processing on the corrected image to increase the definition of the identification area.
A spatial domain based algorithm is used for grayscale images and a frequency domain based algorithm is used for color images.
1. Spatial-domain-based algorithm
This method operates directly on the grey levels of the image and comprises point-operation algorithms and neighborhood enhancement algorithms.
(1) Point operation, i.e. contrast enhancement, contrast stretching or grey-scale transformation, processes each point of the image individually in order to make the imaging uniform or to expand the dynamic range and the contrast of the image. The grey value of each pixel of the new image is computed only from the corresponding input pixel; only the grey value of each point changes, while the spatial relationship between points does not.
(2) Neighborhood enhancement algorithms divide into image smoothing and sharpening. Smoothing is generally used to eliminate image noise but also easily blurs edges; common algorithms include mean filtering and median filtering. Sharpening aims to highlight the edge contours of objects to facilitate target identification; common algorithms include the gradient method, operators, high-pass filtering, mask matching, the statistical difference method and the like.
2. Frequency domain-based algorithm
The image is treated as a two-dimensional signal and enhanced by means of the image Fourier transform; the basis is the convolution theorem, so this is an indirect enhancement algorithm. With low-pass filtering only the low-frequency signals pass, which removes the noise in the image; with high-pass filtering, high-frequency signals such as edges can be enhanced, so that a blurred picture becomes clear. The specific enhancement effect is shown in FIG. 2.
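The frequency-domain idea can be made concrete with a short sketch; it assumes NumPy, a single-channel image, and an illustrative circular cutoff radius that is not specified in the patent:

```python
import numpy as np

def fft_filter(gray, cutoff=30, mode="low"):
    f = np.fft.fftshift(np.fft.fft2(gray))            # center the spectrum
    h, w = gray.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2)**2 + (xx - w / 2)**2)  # distance from the DC term
    mask = dist <= cutoff if mode == "low" else dist > cutoff
    out = np.fft.ifft2(np.fft.ifftshift(f * mask))     # back to the spatial domain
    return np.abs(out)
```

Calling fft_filter(gray, mode="low") suppresses noise, while mode="high" keeps only edge-like high-frequency content.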
Step 10202: image smoothing is then performed; the smoothing uses a Gaussian filter.
Gaussian filtering is the convolution of each pixel point of the input array with a gaussian kernel to convolve the sum as the output pixel value. The two-dimensional gaussian function is as follows:
G0(x, y) = A*exp(-((x - μx)^2/(2*σx^2) + (y - μy)^2/(2*σy^2))) (7)
wherein G0(x, y) is the output pixel value, A is the amplitude, μ is the mean (i.e. the position of the peak), and σ is the standard deviation; the variables x and y each have their own mean (μx, μy) and standard deviation (σx, σy);
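For illustration, the kernel of equation (7) can be built explicitly and applied by convolution; this sketch assumes Python with NumPy/OpenCV, zero means and equal standard deviations for x and y:

```python
import numpy as np
import cv2

def gaussian_smooth(gray, ksize=5, sigma=1.0):
    ax = np.arange(ksize) - ksize // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))  # equation (7) with mu = 0
    kernel /= kernel.sum()                              # normalize to preserve brightness
    return cv2.filter2D(gray, -1, kernel)               # convolve each pixel with the kernel
```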
step 10203: then, carrying out image binarization processing, and setting the gray value of a pixel point on the image to be 0 or 255, namely, the process of enabling the whole image to show an obvious black-and-white effect;
the image binarization processing adopts an OTSU algorithm;
the OTSU algorithm is also called maximum inter-class variance method, and the central idea is that the threshold value t should make the inter-class variance between the target and the background maximum;
for an image, when the segmentation threshold between foreground and background is t, let the fraction of foreground points in the image be w0 with mean u0, and the fraction of background points be w1 with mean u1; the mean u of the whole image is then
u = w0*u0 + w1*u1 (8);
the objective function is established as
g(t) = w0*(u0 - u)^2 + w1*(u1 - u)^2 (9);
wherein g (t) is the inter-class variance expression when the segmentation threshold is t;
the OTSU algorithm enables g (t) to obtain a global maximum value, and when g (t) is maximum, the corresponding t is called an optimal threshold value;
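The criterion of equations (8) and (9) can be implemented directly; the sketch below (a NumPy-based assumption, equivalent in spirit to OpenCV's built-in Otsu mode) scans all candidate thresholds t and returns the one that maximizes g(t):

```python
import numpy as np

def otsu_threshold(gray):
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()   # class fractions
        if w0 == 0 or w1 == 0:
            continue
        u0 = (levels[:t] * prob[:t]).sum() / w0   # foreground mean
        u1 = (levels[t:] * prob[t:]).sum() / w1   # background mean
        u = w0 * u0 + w1 * u1                     # equation (8)
        g = w0 * (u0 - u)**2 + w1 * (u1 - u)**2   # equation (9)
        if g > best_g:
            best_t, best_g = t, g
    return best_t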
Step 10204: morphological processing is then carried out to remove noise points, using the opening operation; the process of erosion followed by dilation is called opening, and it is used to eliminate small objects, separate objects at thin connections and smooth the boundary of larger objects without significantly changing their area;
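A sketch of the opening operation follows, assuming OpenCV; the 3x3 kernel size is an illustrative choice:

```python
import cv2
import numpy as np

def open_image(binary, ksize=3):
    kernel = np.ones((ksize, ksize), np.uint8)
    eroded = cv2.erode(binary, kernel)   # erosion first: small speckles vanish
    return cv2.dilate(eroded, kernel)    # then dilation: larger objects are restored
    # equivalent one-liner: cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```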
Step 10205: the connected-domain retention operation is then performed, searching for the largest connected domain as the attention region; the connected-domain algorithm uses the Two-Pass (two-scan) method;
the two-pass method finds and marks all connected regions in an image by scanning it twice: during the first scan each pixel position is given a label, and since a pixel set belonging to one connected region may receive one or more different labels, labels that belong to the same connected region but have different values must be merged, i.e. the equality relationships between them are recorded; the second scan assigns the pixels marked by labels with an equality relationship (equal_labels) to one connected region and gives them the same label, usually the minimum value in equal_labels;
the simplified steps of the Two-Pass algorithm are given below:
(1) first scan:
visit the current pixel B(x, y); if B(x, y) == 1:
a. if the pixel values in the neighborhood of B(x, y) are all 0, give B(x, y) a new label:
label += 1, B(x, y) = label;
b. if there are pixels (Neighbors) with pixel values > 1 in the neighborhood of B(x, y):
1) assign the minimum of Neighbors to B(x, y):
B(x, y) = min{Neighbors}
2) record the equality relation among the labels in Neighbors, i.e. that these labels belong to the same connected region:
labelSet[i] = {label_m, ..., label_n}; all labels in labelSet[i] belong to the same connected region;
(2) second scan:
visit the current pixel B(x, y); if B(x, y) > 1:
find the minimum label value that is in the same equality relation as label = B(x, y) and assign it to B(x, y); after the scan is completed, pixels with the same label value in the image form one connected region.
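A runnable rendering of these two scans is sketched below, assuming Python/NumPy, a binary image whose foreground pixels equal 1, and 4-connectivity; a small union-find structure plays the role of the labelSet equality records:

```python
import numpy as np

def two_pass_label(binary):
    labels = np.zeros_like(binary, dtype=int)
    parent = {}                                   # union-find over labels

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    h, w = binary.shape
    for y in range(h):                            # first pass
        for x in range(w):
            if binary[y, x] == 0:
                continue
            neighbors = [labels[y, x - 1] if x else 0,
                         labels[y - 1, x] if y else 0]
            neighbors = [n for n in neighbors if n > 0]
            if not neighbors:
                parent[next_label] = next_label   # brand-new label
                labels[y, x] = next_label
                next_label += 1
            else:
                labels[y, x] = min(neighbors)     # minimum of Neighbors
                roots = {find(n) for n in neighbors}
                target = min(roots)               # record the equality relation
                for r in roots:
                    parent[r] = target
    for y in range(h):                            # second pass: resolve equivalences
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels
    # the attention region is then the largest component, e.g.:
    # sizes = np.bincount(labels.ravel()); sizes[0] = 0; region = labels == sizes.argmax()
```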
Step 103: after the image preprocessing is finished, carrying out straight line detection to find out the position information of a straight line in the image, wherein the straight line detection adopts a Hough straight line detection algorithm;
The Hough transform is a method in image processing for identifying geometric shapes in an image; using the duality of points and lines, a given curve of the original image space is mapped, through its parametric expression, to a point of the Hough parameter space, so that the problem of detecting the given curve in the original image becomes the problem of finding a peak in the Hough parameter space, i.e. detection of a global characteristic is converted into detection of a local characteristic. A point on a straight line in Euclidean space is a sinusoid in the Hough parameter space; several points on the same straight line in Euclidean space form a cluster of sinusoids in the Hough parameter space which intersect at one point, called the peak point; a peak point in the Hough parameter space corresponds to a straight line in Euclidean space, as shown in FIG. 3.
the equation for the line l is:
xcosθ+ysinθ=ρ (10)
wherein x, y are the x and y values of a data point in Euclidean space, ρ is the distance from the line to the origin in polar coordinates, and θ is the angle of the line in polar coordinates.
A data point is expressed as (x, y) in Euclidean space and as (ρ, θ) in the Hough parameter space; point P is the peak point in the Hough parameter space, and it represents the straight line l of Euclidean space.
The specific flow of the transformation algorithm is as follows:
(1) quantize the Hough parameter space into a grid and initialize the accumulator H(ρ, θ);
(2) for each (x, y) in Euclidean space, perform step (3);
(3) for θ = 90° to 180°, compute ρ = x*cosθ + y*sinθ and set H(ρ, θ) = H(ρ, θ) + 1;
(4) set a threshold and search for the peak points of the parameter space; each peak point of the parameter space corresponds to a straight line in Euclidean space.
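The accumulator flow (1)-(4) can be sketched as follows, assuming NumPy and a binary edge image; the vote threshold is an illustrative parameter, and θ is swept over a full 180° range, which covers all line orientations:

```python
import numpy as np

def hough_lines(binary, n_theta=180, threshold=50):
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))                # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))            # quantized angles
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int) # (1) initialize H(rho, theta)
    ys, xs = np.nonzero(binary)
    for x, y in zip(xs, ys):                           # (2) each point in Euclidean space
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1      # (3) vote: H(rho, theta) += 1
    peaks = np.argwhere(acc >= threshold)              # (4) threshold for peak points
    return [(rho - diag, np.rad2deg(thetas[t])) for rho, t in peaks]
```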
Step 105: after the pointer line is found, the direction of the pointer is determined from the distances between the two end points of the line and the pointer center point, combined with the meter center-point coordinates of the meter parameter model: the end point close to the center point is the tail of the pointer, and the end point far from the center point is the tip of the pointer;
Step 106: after the direction of the pointer is determined, the angle θ of the pointer can be calculated, and the reading of the meter is calculated by combining the range parameters of the meter parameter model; the calculation formula is
V = (θ - θmin)/(θmax - θmin)*(Vmax - Vmin) + Vmin (11)
wherein V is the meter reading, θmin is the minimum-range angle, θmax is the maximum-range angle, Vmin is the range minimum, and Vmax is the range maximum.
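Formula (11) reduces to a few lines of code; the numbers in the example are hypothetical (a gauge spanning 0 to 1.6 between the -45° and 225° marks, with the pointer at 90°):

```python
def meter_reading(theta, theta_min, theta_max, v_min, v_max):
    # Formula (11): linear interpolation of the pointer angle onto the range.
    frac = (theta - theta_min) / (theta_max - theta_min)
    return v_min + frac * (v_max - v_min)

print(meter_reading(90, -45, 225, 0.0, 1.6))   # 135/270 of the span -> 0.8
```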
The invention breaks away from traditional image-comparison identification algorithms and adopts object modeling, image preprocessing techniques and image analysis algorithms; it overcomes the problems of a slow identification process and inaccurate identification results, achieves fast identification of the identified object with accurate results, and attains real-time online identification.
The above description is only an example of the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. An intelligent identification method of a pointer instrument is characterized in that: the method comprises the following steps:
step 101: cropping the input image according to the recognition range set by the instrument parameter model, and carrying out distortion correction on the deflected image according to the deflection angle set by the instrument parameter model;
step 102: carrying out image preprocessing on the corrected image:
the preprocessing comprises image enhancement, image smoothing, image binarization, morphological processing to remove noise points, connected-domain retention, and locating the position of the attention region;
step 103: carrying out Hough line detection on the preprocessed image to obtain the position of each straight line in the image;
step 104: judging and filtering each straight line obtained in the step 103 according to the parameters of the instrument parameter model, and finding out a pointer straight line;
step 105: determining the direction of the pointer straight line according to the found pointer straight line and the instrument parameter model; the method specifically comprises the following steps:
after the pointer line is found, the direction of the pointer is determined from the distances between the two end points of the line and the pointer center point, combined with the meter center-point coordinates of the meter parameter model: the end point close to the center point is the tail of the pointer, and the end point far from the center point is the tip of the pointer;
step 106: and calculating the reading of the instrument according to the instrument range parameters of the instrument parameter model and the direction of the straight line of the pointer, and outputting a result.
2. The intelligent identification method of the pointer instrument as claimed in claim 1, characterized in that: the distortion correction in step 101 specifically includes:
assuming that, before distortion, the pixel coordinates of each point in the image can be obtained from equations (1) and (2):
x' = Xc/Zc, y' = Yc/Zc (1)
u = fx*x' + cx, v = fy*y' + cy (2)
wherein Xc, Yc, Zc are the X, Y and Z values of the camera coordinate system; x', y' are the intermediate products of normalizing Xc, Yc, Zc (dimensionless multiples); fx, fy are scale factors; cx, cy are internal parameters; u is the X value and v the Y value of the image coordinate system;
if no distortion exists, then ideally the coordinate transformation of the camera image can be calculated according to equations (1) and (2);
in the case of distortion, the distorted coordinates are:
x'' = x'*(1 + k1*r^2 + k2*r^4 + k3*r^6) + 2*p1*x'*y' + p2*(r^2 + 2*x'^2)
y'' = y'*(1 + k1*r^2 + k2*r^4 + k3*r^6) + p1*(r^2 + 2*y'^2) + 2*p2*x'*y'
wherein x'', y'' are the X and Y values of the distorted camera coordinates; k1, k2, k3 are the radial distortion parameters; p1, p2 are the tangential distortion parameters; x', y' are the normalized intermediate products of the camera coordinates Xc, Yc, Zc; and r^2 = x'^2 + y'^2 (3);
at the same time, the new coordinates of each pixel of the distorted image are obtained as:
ud = fx*x'' + cx, vd = fy*y'' + cy (4)
wherein ud, vd are the X and Y values of the distorted image coordinates; fx, fy are scale factors; cx, cy are internal parameters; x'', y'' are the X and Y values of the distorted camera coordinates;
in this way the mapping of each point through the whole coordinate transformation is obtained: from the camera coordinate system of the image, through the distortion, to the distorted image;
the purpose of distortion correction is to find the pixel value relationship of the corresponding point pair, and assign the pixel value of the distorted position to the original position, that is:
f(u,v)=f(h(u,v))=f(ud,vd) (5),
the mapping relation from the undistorted image to the distorted image is as follows:
f(ud,vd)=f(u,v) (6),
because of the distortion, the coordinates before distortion are integers while the coordinates after distortion are not necessarily integers; since all coordinates in the image pixel coordinate system are integers, a rounding or interpolation operation is usually needed during the assignment.
3. The intelligent identification method of the pointer instrument as claimed in claim 1, characterized in that: the image preprocessing in step 102 specifically includes:
step 10201: carrying out image enhancement processing on the corrected image to increase the definition of the identification area;
the spatial-domain-based algorithm is used for grey-level images and operates directly on the grey levels of the image; it divides into point-operation algorithms and neighborhood enhancement algorithms; (1) point operation, i.e. contrast enhancement, contrast stretching or grey-scale transformation, processes each point of the image individually in order to make the imaging uniform or to expand the dynamic range and the contrast of the image; the grey value of each pixel of the new image is computed only from the corresponding input pixel, so only the grey value of each point changes while the spatial relationship between points does not; (2) neighborhood enhancement algorithms divide into image smoothing and sharpening; smoothing is generally used to eliminate image noise but easily blurs edges, and common algorithms include mean filtering and median filtering; sharpening aims to highlight the edge contours of objects to facilitate target identification, and common algorithms include the gradient method, operators, high-pass filtering, mask matching and the statistical difference method;
the frequency-domain-based algorithm is used for color images: the image is treated as a two-dimensional signal and enhanced by means of the image Fourier transform; the basis is the convolution theorem, so this is an indirect enhancement algorithm; with low-pass filtering only the low-frequency signals pass, which removes the noise in the image; with high-pass filtering, high-frequency signals such as edges can be enhanced, so that a blurred picture becomes clear;
step 10202: then, carrying out image smoothing treatment, wherein the image smoothing adopts a Gaussian filter;
the Gaussian filtering is to convolute each pixel point of the input array with a Gaussian kernel and take the convolution sum as an output pixel value; the two-dimensional gaussian function is as follows:
G0(x, y) = A*exp(-((x - μx)^2/(2*σx^2) + (y - μy)^2/(2*σy^2))) (7)
wherein G0(x, y) is the output pixel value, A is the amplitude, μ is the mean (i.e. the position of the peak), and σ is the standard deviation; the variables x and y each have their own mean (μx, μy) and standard deviation (σx, σy);
step 10203: then, carrying out image binarization processing, and setting the gray value of a pixel point on the image to be 0 or 255, namely, the process of enabling the whole image to show an obvious black-and-white effect;
the image binarization processing adopts an OTSU algorithm;
the OTSU algorithm is also called maximum inter-class variance method, and the central idea is that the threshold value t should make the inter-class variance between the target and the background maximum;
for an image, when the segmentation threshold between foreground and background is t, let the fraction of foreground points in the image be w0 with mean u0, and the fraction of background points be w1 with mean u1; the mean u of the whole image is then
u = w0*u0 + w1*u1 (8);
the objective function is established as
g(t) = w0*(u0 - u)^2 + w1*(u1 - u)^2 (9);
wherein g (t) is the inter-class variance expression when the segmentation threshold is t;
the OTSU algorithm enables g (t) to obtain a global maximum value, and when g (t) is maximum, the corresponding t is called an optimal threshold value;
step 10204: morphological processing is then carried out to remove noise points, using the opening operation; the process of erosion followed by dilation is called opening, and it is used to eliminate small objects, separate objects at thin connections and smooth the boundary of larger objects without significantly changing their area;
step 10205: the connected-domain retention operation is then performed, searching for the largest connected domain as the attention region; the connected-domain algorithm uses the Two-Pass (two-scan) method;
the two-pass method finds and marks all connected regions in an image by scanning it twice: during the first scan each pixel position is given a label, and since a pixel set belonging to one connected region may receive one or more different labels, labels that belong to the same connected region but have different values must be merged, i.e. the equality relationships between them are recorded; the second scan assigns the pixels marked by labels with an equality relationship (equal_labels) to one connected region and gives them the same label, usually the minimum value in equal_labels;
the simplified steps of the Two-Pass algorithm are given below:
(1) first scan:
visit the current pixel B(x, y); if B(x, y) == 1:
a. if the pixel values in the neighborhood of B(x, y) are all 0, give B(x, y) a new label:
label += 1, B(x, y) = label;
b. if there are pixels (Neighbors) with pixel values > 1 in the neighborhood of B(x, y):
1) assign the minimum of Neighbors to B(x, y):
B(x, y) = min{Neighbors}
2) record the equality relation among the labels in Neighbors, i.e. that these labels belong to the same connected region:
labelSet[i] = {label_m, ..., label_n}; all labels in labelSet[i] belong to the same connected region;
(2) second scan:
visit the current pixel B(x, y); if B(x, y) > 1:
find the minimum label value that is in the same equality relation as label = B(x, y) and assign it to B(x, y); after the scan is completed, pixels with the same label value in the image form one connected region.
4. The intelligent identification method of the pointer instrument as claimed in claim 1, characterized in that: the step 103 specifically includes:
after the image preprocessing is finished, carrying out straight line detection to find out the position information of a straight line in the image, wherein the straight line detection adopts a Hough straight line detection algorithm;
the Hough transform is a method in image processing for identifying geometric shapes in an image; using the duality of points and lines, a given curve of the original image space is mapped, through its parametric expression, to a point of the Hough parameter space, so that the problem of detecting the given curve in the original image becomes the problem of finding a peak in the Hough parameter space, i.e. detection of a global characteristic is converted into detection of a local characteristic; a point on a straight line in Euclidean space is a sinusoid in the Hough parameter space; several points on the same straight line in Euclidean space form a cluster of sinusoids in the Hough parameter space which intersect at one point, called the peak point; a peak point in the Hough parameter space corresponds to a straight line in Euclidean space;
the equation of the straight line is:
xcosθ+ysinθ=ρ (10)
wherein x, y are the x and y values of a data point in Euclidean space, ρ is the distance from the line to the origin in polar coordinates, and θ is the angle of the line in polar coordinates;
the data points are represented as (x, y) in Euclidean space and (ρ, θ) in Hough parameter space;
the specific flow of the transformation algorithm is as follows:
(1) quantize the Hough parameter space into a grid and initialize the accumulator H(ρ, θ);
(2) for each (x, y) in Euclidean space, perform step (3);
(3) for θ = 90° to 180°, compute ρ = x*cosθ + y*sinθ and set H(ρ, θ) = H(ρ, θ) + 1;
(4) set a threshold and search for the peak points of the parameter space; each peak point of the parameter space corresponds to a straight line in Euclidean space.
5. The intelligent identification method of the pointer instrument as claimed in claim 1, characterized in that: the step 106 specifically includes:
after the direction of the pointer is determined, the angle θ of the pointer can be calculated, and the reading of the meter is calculated by combining the range parameters of the meter parameter model; the calculation formula is
V = (θ - θmin)/(θmax - θmin)*(Vmax - Vmin) + Vmin (11)
wherein V is the meter reading, θmin is the minimum-range angle, θmax is the maximum-range angle, Vmin is the range minimum, and Vmax is the range maximum.
CN201911336196.0A 2019-12-23 2019-12-23 Intelligent identification method for pointer instrument Active CN111191646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911336196.0A CN111191646B (en) 2019-12-23 2019-12-23 Intelligent identification method for pointer instrument

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911336196.0A CN111191646B (en) 2019-12-23 2019-12-23 Intelligent identification method for pointer instrument

Publications (2)

Publication Number Publication Date
CN111191646A true CN111191646A (en) 2020-05-22
CN111191646B CN111191646B (en) 2023-04-18

Family

ID=70707508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911336196.0A Active CN111191646B (en) 2019-12-23 2019-12-23 Intelligent identification method for pointer instrument

Country Status (1)

Country Link
CN (1) CN111191646B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120134597A1 (en) * 2010-11-26 2012-05-31 Microsoft Corporation Reconstruction of sparse data
CN103994786A (en) * 2014-06-04 2014-08-20 湖南大学 Image detecting method for arc ruler lines of pointer instrument scale
CN109308447A (en) * 2018-07-29 2019-02-05 国网上海市电力公司 The method of equipment operating parameter and operating status is automatically extracted in remote monitoriong of electric power
CN109035320A (en) * 2018-08-12 2018-12-18 浙江农林大学 Depth extraction method based on monocular vision
CN109993166A (en) * 2019-04-03 2019-07-09 同济大学 The readings of pointer type meters automatic identifying method searched based on scale

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111931620A (en) * 2020-07-31 2020-11-13 北京奈伦机器人科技有限公司 Instrument panel positioning and identifying method and device, electronic equipment and storage medium
CN113450373A (en) * 2020-08-18 2021-09-28 中国人民解放军63729部队 Optical live image-based real-time discrimination method for characteristic events in carrier rocket flight process
CN112183369A (en) * 2020-09-29 2021-01-05 国网上海市电力公司 Pointer instrument reading identification method for transformer substation unmanned inspection
CN112488030A (en) * 2020-12-11 2021-03-12 华能华家岭风力发电有限公司 Pointer instrument meter reading method based on machine vision
CN112836726A (en) * 2021-01-12 2021-05-25 云南电网有限责任公司电力科学研究院 Pointer instrument indication reading method and device based on video information
CN112836726B (en) * 2021-01-12 2022-06-07 云南电网有限责任公司电力科学研究院 Pointer instrument indication reading method and device based on video information
CN117351557A (en) * 2023-08-17 2024-01-05 中国矿业大学 Vehicle-mounted gesture recognition method for deep learning

Also Published As

Publication number Publication date
CN111191646B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN111191646B (en) Intelligent identification method for pointer instrument
CN108921176B (en) Pointer instrument positioning and identifying method based on machine vision
CN108898610B (en) Object contour extraction method based on mask-RCNN
CN102426649B (en) Simple steel seal digital automatic identification method with high accuracy rate
CN107657606B (en) Method and device for detecting brightness defect of display device
CN105067638A (en) Tire fetal-membrane surface character defect detection method based on machine vision
CN108186051B (en) Image processing method and system for automatically measuring double-apical-diameter length of fetus from ultrasonic image
CN108537099A (en) A kind of licence plate recognition method of complex background
CN114926839B (en) Image identification method based on RPA and AI and electronic equipment
WO2021109697A1 (en) Character segmentation method and apparatus, and computer-readable storage medium
Krishnan et al. A survey on different edge detection techniques for image segmentation
CN110070545B (en) Method for automatically extracting urban built-up area by urban texture feature density
CN112330561B (en) Medical image segmentation method based on interactive foreground extraction and information entropy watershed
CN110298344A (en) A kind of positioning of instrument knob and detection method based on machine vision
CN104077775A (en) Shape matching method and device combined with framework feature points and shape contexts
CN111476758A Defect detection method and device for AMOLED display screen, computer equipment and storage medium
CN116188468B (en) HDMI cable transmission letter sorting intelligent control system
CN113837037A (en) Plant species identification method and system, electronic equipment and storage medium
CN110414521A (en) Oil level gauge for transformer registration recognition methods in a kind of substation
CN113408519A (en) Method and system for reading pointer instrument based on template rotation matching
CN115994870B (en) Image processing method for enhancing denoising
CN111488811A (en) Face recognition method and device, terminal equipment and computer readable medium
CN116228780A (en) Silicon wafer defect detection method and system based on computer vision
CN113940704A (en) Thyroid-based muscle and fascia detection device
CN113674197B (en) Method for dividing back electrode of solar cell

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant