CN117315635A - Automatic reading method for inclined pointer type instrument - Google Patents


Info

Publication number
CN117315635A
CN117315635A (application CN202311181600.8A)
Authority
CN
China
Prior art keywords
instrument
view
rotm
pointer
net
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311181600.8A
Other languages
Chinese (zh)
Inventor
曾国奇
李杰
贾惠雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University
Priority to CN202311181600.8A
Publication of CN117315635A
Legal status: Pending

Classifications

    • G06V 20/60 — Scenes; Scene-specific elements; Type of objects
    • G06N 3/0464 — Computing arrangements based on biological models; Neural networks; Convolutional networks [CNN, ConvNet]
    • G06N 3/08 — Computing arrangements based on biological models; Neural networks; Learning methods
    • G06V 10/243 — Image preprocessing; Aligning, centring, orientation detection or correction of the image by compensating for image skew or non-uniform image deformations
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/82 — Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V 20/40 — Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an automatic reading method for an inclined pointer instrument, comprising the following steps: identify the instrument tilt view to be read with a pre-trained deep convolutional neural network, obtain the angle value ang_Net output by the network, and obtain the three-dimensional coordinate to which that angle value belongs; based on a preset instrument front view and the instrument tilt view to be read, select at least three feature points and perform rotation identification, obtaining the rotation angles and rotation matrices by which the front view is rotated into the state of the tilt view; from the three-dimensional coordinate of the angle value together with these rotation angles and rotation matrices, obtain the angle value the instrument pointer in the tilt view would have when the instrument is viewed head-on; and, from the basic parameters of the instrument in the front view and this head-on pointer angle, obtain the meter reading in the instrument tilt view to be read. The method solves the problem of reading an instrument photographed in an inclined state.

Description

Automatic reading method for inclined pointer type instrument
Technical Field
The invention relates to automatic identification technology for intelligent instruments, and in particular to an automatic reading method for an inclined pointer instrument.
Background
With the development of deep learning and pattern recognition, deep learning algorithms are increasingly applied to industrial instrument image recognition. When instrument images are acquired in an industrial scene, the preset positions of the cameras are adjusted so that one camera captures as many instruments as possible in a single picture; the pointer is then extracted by a deep convolutional neural network, the pointer angle is calculated, and the reading is obtained from the recognised pointer image. Because of how the cameras are mounted and the need to photograph several instruments at once, the captured instruments are often somewhat inclined. Among existing methods for correcting an inclined instrument image, perspective transformation is the most common. It is applied after pointer extraction and before the pointer angle is calculated; its principle is to perform a coordinate transformation of the image in two-dimensional space, constructing the perspective matrix from matched feature points and their coordinate correspondences. When a large perspective transformation is applied, the image may be stretched, deformed or cropped, so the computed pointer angle deviates, errors are introduced, and the meter reading becomes inaccurate.
When an oblique image is corrected by perspective transformation, the image may need to be stretched, compressed or cropped so that the oblique object becomes vertical or horizontal. This can cause certain areas of the image to be cropped or distorted, losing detail or reducing image quality. In particular, for large-angle inclination correction, because perspective transformation is a transformation based on two-dimensional coordinates, a certain degree of image distortion is inevitably introduced, especially at the edges of the image. Moreover, at least 4 pairs of matched feature points are required to estimate the perspective matrix; in some cases, especially in complex scenes or images, it may be difficult to find matched points of sufficient number and quality, so the perspective transformation cannot be accurately estimated and the correction effect suffers.
Therefore, how to solve the problem of recognizing the meter in the inclined state is a technical problem to be solved.
Disclosure of Invention
(I) technical problem to be solved
Aiming at the defects of the prior art, the embodiment of the invention provides an automatic reading method of an inclined pointer type instrument.
(II) technical scheme
In order to achieve the above purpose, the main technical scheme adopted by the invention comprises the following steps:
in a first aspect, an embodiment of the present invention provides an automatic reading method for an inclined pointer instrument, including:
step 101, identifying the instrument tilt view to be recognised based on a pre-trained deep convolutional neural network, obtaining the angle value ang_Net identified and output by the deep convolutional neural network, and obtaining the three-dimensional coordinate [x_Net, y_Net, z_Net] to which the angle value ang_Net belongs;
step 102, selecting at least three feature points and performing rotation identification based on a preset instrument front view and the instrument tilt view to be recognised, obtaining the rotation angles (θ_x, θ_y, θ_z) and the rotation matrices (rotM_X, rotM_Y, rotM_Z) by which the instrument front view is rotated into the state of the instrument tilt view;
step 103, according to the three-dimensional coordinate [x_Net, y_Net, z_Net] to which the angle value ang_Net belongs, the rotation angles (θ_x, θ_y, θ_z) and the rotation matrices (rotM_X, rotM_Y, rotM_Z), acquiring the angle value θ_point of the instrument pointer in the instrument tilt view when the instrument is viewed head-on;
step 104, according to the basic parameters of the instrument in the instrument front view and the angle value θ_point of the pointer when viewed head-on, obtaining the meter reading R in the instrument tilt view to be recognised.
Optionally, the step 104 includes:
obtaining the meter reading R according to the following formula;
wherein R is the meter reading, V_min and V_max are respectively the minimum reading and the maximum reading of the meter, θ_min and θ_max are respectively the angle between the minimum-reading scale mark and the horizontal line and the angle between the maximum-reading scale mark and the horizontal line, and θ_point is the angle value of the instrument pointer when viewed head-on, i.e. the angle between the instrument pointer and the horizontal line in the head-on view.
Optionally, the step 103 includes:
obtaining the angle value θ_point of the instrument pointer when the instrument is viewed head-on according to the following formulas;
front_x = x_n * rotM_X * rotM_Y * rotM_Z
front_y = y_n * rotM_X * rotM_Y * rotM_Z
front_z = 0
wherein front_x, front_y, front_z are the coordinates, restored to the head-on view, of the pointer tip of the inclined instrument in the instrument tilt view;
[x_Net, y_Net, z_Net] is the three-dimensional coordinate of the pointer tip of the inclined instrument in the instrument tilt view, and [x_n, y_n, z_n] is the coordinate of the pointer tip on the rotation plane;
the rotation plane is the plane formed by the rotation angles (θ_x, θ_y, θ_z);
nl is the normal of the rotation plane, obtained with [x_Net, y_Net, z_Net].
Optionally, the step 103 includes:
obtaining the normal nl of the rotation plane according to the following formulas;
up = [0 1 0]
r = [1 0 0]
up_R = rotM_X × rotM_Y × rotM_Z × up
r_R = rotM_X × rotM_Y × rotM_Z × r
nl = -(r_R × up_R)
wherein up and r are respectively the unit point on the positive y-axis and the unit point on the positive x-axis of the instrument front view, up_R and r_R are respectively the corresponding unit points on the y-axis and x-axis after the rotation into the instrument tilt view, and nl is the normal of the rotation plane.
Optionally, before the step 101, the method further includes:
acquiring a training data set, and training a deep neural network model by adopting the training data set;
the training data set comprises a plurality of instrument front views annotated with feature points and, for each front view, instrument tilt views annotated with the corresponding feature points;
the loss function of the training process is:
Loss=λ1*Angle Loss+λ2*lbox Loss+λ3*lobj Loss+λ4*lcls Loss;
the Angle Loss is used for measuring the difference between the pointer angle predicted by the deep neural network model and the actual pointer angle; the lbox Loss is used for measuring the position error of the deep neural network model; the lobj Loss is used for measuring the confidence error of the deep neural network model; the lcls Loss is used for measuring the class error of the deep neural network model; λ1, λ2, λ3, λ4 are four weight parameters;
and/or,
the training process for training the deep neural network model by adopting the training data set comprises the following steps:
each image is preprocessed; the preprocessed 640×640×3 instrument image is convolved n times and processed with depthwise convolution, point-wise convolution and depthwise separable convolution to obtain a feature map of size 40×40×64; a Softmax operation is performed on the 40×40×64 feature map to obtain the probability that each pixel in the feature map belongs to each class, and an argmax operation converts these probabilities into the class to which each pixel belongs; the 40×40×64 feature map is finally restored to a feature map of size 640×640×3, the pixel coordinates of several feature points are predicted, the slope of the pointer with respect to the bottom edge of the image is calculated from the predicted feature-point pixel coordinates, and the pointer angle is obtained from the slope.
Optionally, the step 101 includes:
obtaining the three-dimensional coordinate [x_Net, y_Net, z_Net] to which the angle value ang_Net belongs according to the following formula;
wherein z_Net = -1; z_Net = -1 expresses that the instrument tilt view is generated by projection of the instrument front view after rotation, and z = 0 corresponds to the instrument front view.
Optionally, the step 102 includes:
step 102-1, marking three corresponding feature points on the instrument front view and on the instrument tilt view respectively; for the two-dimensional coordinate values of the feature points in the instrument front view and the coordinates of the feature points in the instrument tilt view, acquiring their respective first angle values based on the following formula;
wherein ang is the angle value of a feature point, and x, y are the coordinate values of the feature point;
step 102-2, according to the respective first angle values, acquiring the three-dimensional coordinates of each feature point in the instrument front view and the instrument tilt view based on the following formula;
z = 0 (formula 3);
step 102-3, based on the three-dimensional coordinates of each feature point in the instrument front view and the three-dimensional coordinates of each feature point in the instrument tilt view, acquiring the rotation matrices (rotM_X, rotM_Y, rotM_Z) from the instrument front view to the instrument tilt view;
wherein θ_x, θ_y, θ_z are each traversed over values from 0° to 180°.
Optionally, the step 102 further includes:
acquiring θ_x, θ_y, θ_z by traversal, specifically:
the rotated coordinates (x_R, y_R, z_R) of the three feature points of the instrument front view are obtained according to formula 5 below, the three rotated coordinate points are projected onto the plane z = 0, and the angles rotAng1, rotAng2 and rotAng3 of the three feature points on the plane z = 0 are calculated according to formula 6;
wherein rotAng1 corresponds to (x_R1, y_R1), rotAng2 corresponds to (x_R2, y_R2), rotAng3 corresponds to (x_R3, y_R3);
if the angles of the three rotated feature points are all within the error range of the first angle values of the three feature points of the instrument tilt view, the three rotation angles are obtained;
namely: (x1, y1, z1) → (x_R1, y_R1, z_R1),
(x2, y2, z2) → (x_R2, y_R2, z_R2),
(x3, y3, z3) → (x_R3, y_R3, z_R3),
wherein R1, R2 and R3 denote the three rotated feature points, and x_R1, y_R1, z_R1 denote the x-axis, y-axis and z-axis coordinates of the rotated feature points of the instrument front view,
x_R = x * rotM_Z * rotM_Y * rotM_X
y_R = y * rotM_Z * rotM_Y * rotM_X (formula 5)
z_R = z * rotM_Z * rotM_Y * rotM_X
in formula 5, x_R, y_R, z_R respectively denote the x-, y- and z-coordinates of a feature point of the instrument front view after rotation, x, y, z denote the coordinates of that feature point in the instrument front view, and rotM_Z, rotM_Y, rotM_X respectively denote the rotation matrix about the z-axis, the rotation matrix about the y-axis and the rotation matrix about the x-axis;
θ - thr < rotAng < θ + thr (formula 7)
if ang4 - thr < rotAng1 < ang4 + thr, ang5 - thr < rotAng2 < ang5 + thr and ang6 - thr < rotAng3 < ang6 + thr, then the current θ_x, θ_y, θ_z are the three rotation angles sought.
In a second aspect, an embodiment of the present invention further provides an electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor executes the computer program stored in the memory to perform the steps of any one of the above automatic reading methods for an inclined pointer instrument.
(III) beneficial effects
To address the problem of instrument inclination, the invention improves an existing deep convolutional neural network: identification accuracy is raised by stacking feature-extraction layers and by embedding the rotation angle into the network's loss function. The improved deep convolutional neural network is used to extract the pointer; the inclination-correction algorithm of the invention then restores the three-dimensional image and calculates the rotation angles in three dimensions, realising the restoration of the three-dimensional image, solving the problem of instrument identification in an inclined state and improving identification accuracy.
Drawings
FIG. 1 is a diagram of an example of any one of the labeled inclination meters in the training dataset according to an embodiment of the present invention;
figure 2a is a schematic diagram of marked feature points in a front view of the meter,
FIG. 2b is a schematic illustration of labeled feature points in an oblique view of the meter;
FIG. 3a is a schematic view of three feature point angles and a horizontal axis in a front view of the instrument;
FIG. 3b is a schematic view of three feature point angles and a horizontal axis in an instrument tilt diagram;
fig. 4 is a flow chart of an automatic reading method of an inclined pointer type meter according to an embodiment of the invention.
Detailed Description
The invention will be better explained by the following detailed description of the embodiments with reference to the drawings.
Example 1
Referring to fig. 4, fig. 4 is a schematic flow chart of an automatic reading method for an inclined pointer instrument according to an embodiment of the present invention. The execution subject of the method is a control device, and the method of this embodiment may include the following steps:
step 101, identifying the instrument tilt view to be recognised based on a pre-trained deep convolutional neural network, obtaining the angle value ang_Net identified and output by the deep convolutional neural network, and obtaining the three-dimensional coordinate [x_Net, y_Net, z_Net] to which the angle value ang_Net belongs;
step 102, selecting at least three feature points and performing rotation identification based on a preset instrument front view and the instrument tilt view to be recognised, obtaining the rotation angles (θ_x, θ_y, θ_z) and the rotation matrices (rotM_X, rotM_Y, rotM_Z) by which the instrument front view is rotated into the state of the instrument tilt view;
step 103, according to the three-dimensional coordinate [x_Net, y_Net, z_Net] to which the angle value ang_Net belongs, the rotation angles (θ_x, θ_y, θ_z) and the rotation matrices (rotM_X, rotM_Y, rotM_Z), acquiring the angle value θ_point of the instrument pointer in the instrument tilt view when the instrument is viewed head-on;
step 104, according to the basic parameters of the instrument in the instrument front view and the angle value θ_point of the pointer when viewed head-on, obtaining the meter reading R in the instrument tilt view to be recognised.
To address the problem of instrument inclination, this embodiment improves an existing deep convolutional neural network and uses it to extract the pointer; it then selects at least three feature points and uses rotation identification to restore the three-dimensional image, calculating the rotation angles in three dimensions, realising the restoration of the three-dimensional image and solving the problem of instrument identification in an inclined state.
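By way of illustration only, the following sketch strings steps 101-104 together. The callable arguments (cnn_angle_fn, rotation_fn, restore_fn) are hypothetical stand-ins for the operations described in this embodiment, and the closing reading formula is the usual angle-method interpolation assumed here rather than a reproduction of the patent's formula 13.

    import numpy as np

    def read_tilted_meter(tilt_img, cnn_angle_fn, rotation_fn, restore_fn, meter_params):
        # Step 101: the CNN returns the pointer angle ang_Net in the tilt view.
        ang_net = cnn_angle_fn(tilt_img)
        # Assumed 3-D coordinate of ang_Net on the z = -1 plane.
        p_net = np.array([np.cos(np.radians(ang_net)),
                          np.sin(np.radians(ang_net)), -1.0])
        # Step 102: rotation angles and matrices taking the front view to the tilt view.
        thetas, rot_mats = rotation_fn()
        # Step 103: angle the pointer would have when the dial is viewed head-on.
        theta_point = restore_fn(p_net, thetas, rot_mats)
        # Step 104: angle-method reading from the dial's basic parameters.
        v_min, v_max, th_min, th_max = meter_params
        return v_min + (theta_point - th_min) / (th_max - th_min) * (v_max - v_min)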
Example two
An automatic reading method for an inclined pointer meter according to an embodiment of the present invention will be described in detail with reference to fig. 2 to 4.
201. Mark three corresponding feature points on the instrument front image (abbreviated below as the instrument front view) and on the instrument tilt image to be recognised (abbreviated below as the instrument tilt view), respectively.
Specifically, when the substation camera photographs the instrument in advance, the camera's preset position is adjusted so that a video is taken with the instrument facing the camera, and an image is extracted from the video to obtain the instrument front view. Three feature points are marked on the instrument front view and on the instrument tilt view respectively; the three feature points correspond one-to-one, and the feature points may be chosen arbitrarily. As shown in fig. 2, fig. 2a is the instrument front view and fig. 2b is the instrument tilt view.
For example, the feature points in this embodiment may be labeled with the labelme tool, which produces a json file after labeling containing the two-dimensional coordinates (x, y), i.e. the pixel coordinates, of the three points; each feature point corresponds to one (x, y).
202. Obtain the respective first angle values from the two-dimensional coordinate values of the feature points in the instrument front view and the coordinates of the feature points in the instrument tilt view.
In the present embodiment, the first angle values ang1, ang2, ang3, ang4, ang5, ang6 are obtained for the feature points by formula 2. As shown in fig. 3a and 3b, the three feature points of the instrument front view are denoted a, b and c, with corresponding first angle values ang1, ang2 and ang3, and the three feature points of the instrument tilt view are denoted d, e and f, with corresponding first angle values ang4, ang5 and ang6;
the first angle value corresponding to the first characteristic point a of the instrument front view is ang1;
the first angle value corresponding to the second characteristic point b of the instrument front view is ang2;
the first angle value corresponding to the third characteristic point c of the instrument front view is ang3;
the first angle value corresponding to the first characteristic point d of the instrument inclined chart is ang4;
the first angle value corresponding to the second characteristic point e of the instrument inclined chart is ang5;
the first angle value corresponding to the third characteristic point f of the instrument inclination chart is ang6.
thr is a preset error.
ang1 to ang6 are all calculated by the method of formula 2:
where ang is the angle value of the feature point and x, y are the coordinate values of the feature point. At this point, the first angle values of the three feature points a, b, c of the instrument front view and of the three feature points d, e, f of the instrument tilt view have been obtained.
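Formula 2 itself is not reproduced in the extracted text. A minimal sketch of one common choice is shown below — the angle of a feature point with respect to the horizontal axis, measured from a reference centre with atan2; the centre (cx, cy) and the sign conventions are assumptions, not necessarily the patent's exact formula.

    import math

    def feature_angle(x, y, cx=0.0, cy=0.0):
        # Assumed form of "formula 2": angle of the point (x, y) relative to a
        # reference centre (cx, cy), measured against the horizontal axis, in degrees.
        return math.degrees(math.atan2(y - cy, x - cx))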
203. Obtain the three-dimensional coordinates of each feature point in the instrument front view and the instrument tilt view from the respective first angle values.
In practical application, the three-dimensional coordinates of the three feature points of the instrument tilt view do not actually need to be solved: the subsequent comparison is made in terms of angles, so only the angles of the three feature points of the tilt view are required.
For example, the three-dimensional coordinates (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) corresponding to the three feature points a, b, c of the instrument front view are obtained by the following formula 3, where (x1, y1, z1) corresponds to ang1, (x2, y2, z2) corresponds to ang2 and (x3, y3, z3) corresponds to ang3; formula 3 is a general formula, with
z = 0 (formula 3).
204. Obtain the rotation matrices from the instrument front view to the instrument tilt view based on the three-dimensional coordinates of each feature point in the front view and in the tilt view.
Specifically, based on the values (x1, y1, z1), (x2, y2, z2), (x3, y3, z3), the three feature points are rotated in the order z-axis, y-axis, x-axis (it is the three feature points of the instrument front view that are rotated; by this rotation they are brought close to the feature points of the instrument tilt view). The rotation matrices are as follows:
where θ_x, θ_y, θ_z are each traversed over values from 0° to 180°.
Specifically, θ_x, θ_y, θ_z are obtained in the following way.
According to the general formula 5, the rotated coordinates (x_R, y_R, z_R) of the three feature points of the instrument front view are obtained; all three points are projected onto the plane z = 0, and the angles rotAng1, rotAng2 and rotAng3 of the three feature points on this plane are calculated by formula 6, where rotAng1 corresponds to (x_R1, y_R1), rotAng2 to (x_R2, y_R2) and rotAng3 to (x_R3, y_R3). If the angles of the three rotated feature points are all close to the angles of the three feature points on the instrument tilt view (formula 2 gives both the angles of the three feature points of the front view and those of the tilt view), then the three rotation angles sought have been found.
Namely:
(x1, y1, z1) → (x_R1, y_R1, z_R1),
(x2, y2, z2) → (x_R2, y_R2, z_R2),
(x3, y3, z3) → (x_R3, y_R3, z_R3),
in the above, R1, R2, R3 indicate rotation and 1, 2, 3 number the feature points; x_R1, y_R1, z_R1 denote the x-axis, y-axis and z-axis coordinates of the first feature point of the instrument front view after rotation, and likewise for the second and third feature points,
x_R = x * rotM_Z * rotM_Y * rotM_X
y_R = y * rotM_Z * rotM_Y * rotM_X (formula 5)
z_R = z * rotM_Z * rotM_Y * rotM_X
where x_R, y_R, z_R denote the x-, y- and z-coordinates of a feature point of the instrument front view after rotation, x, y, z denote its coordinates before rotation, and rotM_Z, rotM_Y, rotM_X denote the rotation matrix about the z-axis, the rotation matrix about the y-axis and the rotation matrix about the x-axis respectively.
All three points are projected onto the plane z = 0, and the angles rotAng1, rotAng2, rotAng3 of the three feature points on this plane are calculated by formula 6, where rotAng1 corresponds to (x_R1, y_R1), rotAng2 to (x_R2, y_R2) and rotAng3 to (x_R3, y_R3). If the angles of all three rotated feature points are close to the angles of the three feature points on the instrument tilt view, the three rotation angles sought have been found:
θ - thr < rotAng < θ + thr (formula 7)
if ang4 - thr < rotAng1 < ang4 + thr, ang5 - thr < rotAng2 < ang5 + thr and ang6 - thr < rotAng3 < ang6 + thr, then the current θ_x, θ_y, θ_z are the three rotation angles sought.
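The traversal of formulas 5-7 can be sketched as follows, reusing the illustrative rot_x/rot_y/rot_z and feature_angle helpers from the earlier sketches; the 5° step size, degree units and tolerance value are assumptions.

    import itertools
    import numpy as np

    def find_rotation(front_pts3d, tilt_angles, thr=2.0, step=5):
        # front_pts3d: the three 3-D feature points of the front view (z = 0).
        # tilt_angles: the first angle values ang4, ang5, ang6 of the tilt view.
        for tx, ty, tz in itertools.product(np.radians(range(0, 181, step)), repeat=3):
            Rz, Ry, Rx = rot_z(tz), rot_y(ty), rot_x(tx)
            ok = True
            for p, ang in zip(front_pts3d, tilt_angles):
                p_rot = p @ Rz @ Ry @ Rx                    # formula 5: rotate in z, y, x order
                rot_ang = feature_angle(p_rot[0], p_rot[1])  # project to z = 0, formula 6
                if not (ang - thr < rot_ang < ang + thr):    # formula 7 tolerance check
                    ok = False
                    break
            if ok:
                return (tx, ty, tz), (Rx, Ry, Rz)
        return None, None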
205. After the angle value of the pointer of the tilted instrument has been identified by the improved deep convolutional neural network, the angle identified by the network is denoted ang_Net, and the corresponding three-dimensional coordinates x_Net and y_Net are calculated by formula 8, with z_Net = -1.
That is, the instrument tilt view is fed into the neural network for identification; the identification result is the angle of the pointer in the instrument tilt view, and the three-dimensional coordinate of the pointer of the tilt view can be calculated from this angle.
z_Net=-1
The instrument tilt view of this embodiment arises because the camera does not face the instrument when shooting, so the captured instrument appears inclined; the tilt view essentially corresponds to a front view of the instrument, and the rotated projection referred to here means that the tilt view is obtained from its corresponding front view after rotation. When the instrument front view is obtained, its z-axis coordinate corresponds to z = 0, because no inclination (rotation) has occurred; the tilt view is obtained after rotation of the front view, so its z-axis coordinate no longer corresponds to z = 0, and it is here taken as -1, which is essentially a normalisation.
The instrument front view lies on the z = 0 plane; the tilt view is assumed to be generated, by the rotated projection, on the z = -1 plane, so its z_Net coordinate is set to -1. From the three rotation angles θ_x, θ_y, θ_z derived above, the normal vector of the rotation plane can be deduced in reverse; according to the properties of the rotation matrices, the normal vector of the rotation plane can be calculated by the following formula 9:
up = [0 1 0]
r = [1 0 0]
up_R = rotM_X × rotM_Y × rotM_Z × up
r_R = rotM_X × rotM_Y × rotM_Z × r
nl = -(r_R × up_R) (formula 9)
wherein up and r denote the point one unit along the positive y-axis and the point one unit along the positive x-axis respectively, up_R and r_R are their coordinates after rotation, and nl is the normal of the rotation plane.
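A direct transcription of formula 9, assuming the rotation matrices of the earlier sketch and numpy row-vector conventions:

    import numpy as np

    def rotation_plane_normal(Rx, Ry, Rz):
        up = np.array([0.0, 1.0, 0.0])   # unit point on the positive y-axis
        r = np.array([1.0, 0.0, 0.0])    # unit point on the positive x-axis
        up_r = Rx @ Ry @ Rz @ up         # rotated y-axis direction
        r_r = Rx @ Ry @ Rz @ r           # rotated x-axis direction
        return -np.cross(r_r, up_r)      # formula 9: normal nl of the rotation plane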
206. From the three-dimensional coordinate of the pointer of the tilted instrument and the normal of the rotation plane, the coordinate of the pointer on the rotation plane is calculated by formula 10:
wherein [x_Net, y_Net, z_Net] is the three-dimensional coordinate of the pointer of the tilted instrument and [x_n, y_n, z_n] is the coordinate of the pointer on the rotation plane.
The coordinate [x_Net, y_Net, z_Net] is calculated from the pointer angle value identified by the neural network; the rotation plane is the plane formed by the three rotation angles; the coordinate [x_Net, y_Net, z_Net] must be mapped onto the rotation plane, and the inverse operation is then performed to obtain the coordinate of the pointer when viewed head-on.
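Formula 10 is missing from the extracted text. Purely as one plausible interpretation, the sketch below maps [x_Net, y_Net, z_Net] onto the rotation plane by orthogonal projection onto the plane through the origin whose normal is nl; the patent's actual mapping may differ.

    import numpy as np

    def map_to_rotation_plane(p_net, nl):
        # Assumed stand-in for formula 10: orthogonally project the pointer-tip
        # coordinate [x_Net, y_Net, z_Net] onto the plane through the origin with
        # normal nl.  The patent's exact mapping is not reproduced in the text.
        n_hat = nl / np.linalg.norm(nl)
        p = np.asarray(p_net, dtype=float)
        return p - np.dot(p, n_hat) * n_hat   # [x_n, y_n, z_n] on the rotation plane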
207. Based on the coordinate of the pointer on the rotation plane, the rotation matrices and rotation angles are applied in reverse to obtain the coordinates front_x, front_y, front_z of the pointer of the tilted instrument when viewed head-on,
as shown in formula 11, in which the angles in the rotation matrices rotM_X, rotM_Y, rotM_Z are the θ_x, θ_y, θ_z obtained by formula 7 above (rotM_X, rotM_Y, rotM_Z are the rotation matrices and θ_x, θ_y, θ_z the rotation angles by which they are parameterised):
front_x = x_n * rotM_X * rotM_Y * rotM_Z
front_y = y_n * rotM_X * rotM_Y * rotM_Z
front_z = 0 (formula 11)
Since these are the coordinates in the head-on view, front_z = 0, and the angle value of the pointer in the head-on view can be obtained from formula 12.
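A sketch of step 207 under the assumptions already stated (row-vector point multiplied by the rotation matrices); formula 12 is not reproduced above, so the final head-on angle is computed here with an assumed atan2 over front_x and front_y.

    import math
    import numpy as np

    def pointer_front_angle(p_plane, Rx, Ry, Rz):
        # Formula 11: carry the on-plane pointer-tip coordinate back to the head-on view.
        front = p_plane @ Rx @ Ry @ Rz
        front_x, front_y = front[0], front[1]        # front_z is taken as 0
        # Assumed form of formula 12: pointer angle against the horizontal axis.
        return math.degrees(math.atan2(front_y, front_x))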
208. Through the above calculation, the coordinate and the corresponding angle value of the pointer of the tilted instrument when viewed head-on are obtained. Finally, the meter reading is calculated by the angle method using formula 13.
wherein R is the reading result, V_min and V_max denote the minimum and maximum readings of the meter respectively, and θ_min and θ_max denote the angle between the minimum-reading mark and the horizontal line and the angle between the maximum-reading mark and the horizontal line respectively.
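Formula 13 is not reproduced in the extracted text; the sketch below uses the usual angle-method reading, i.e. linear interpolation of the pointer angle between the minimum- and maximum-reading scale angles, which is assumed rather than quoted.

    def meter_reading(theta_point, v_min, v_max, theta_min, theta_max):
        # Assumed form of formula 13 (angle method): linear interpolation of the
        # pointer angle between the minimum- and maximum-reading scale angles.
        return v_min + (theta_point - theta_min) / (theta_max - theta_min) * (v_max - v_min)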
Here, the rotation plane means the plane formed by the three rotation angles θ_x, θ_y, θ_z. The three feature points of the front view are rotated so as to approach the feature points of the tilt view, which yields the three rotation angles θ_x, θ_y, θ_z, and these three angles form a plane called the rotation plane. The three-dimensional coordinate of the pointer of the tilted instrument must be converted into a coordinate on this rotation plane, after which the inverse calculation with the rotation matrices and rotation angles gives the coordinate of the pointer of the tilted instrument when viewed head-on. The coordinates here refer to the pointer tip; in this embodiment all pointer coordinates refer to the tip.
The method restores the three-dimensional image with the inclination-correction algorithm, calculates the rotation angles in three dimensions, realises the restoration of the three-dimensional image, solves the problem of instrument identification in an inclined state and improves identification accuracy.
Example III
The present embodiment describes the training process of the deep convolutional neural network mentioned in the first embodiment and the second embodiment, and the deep convolutional neural network of the present embodiment may be a modified deep convolutional neural network.
This embodiment mainly comprises two parts: the improved training of the instrument-recognition deep convolutional neural network, and the inclination correction and pointer-reading calculation in three-dimensional space using the model trained by that network.
Training phase:
1. A training dataset is acquired. A substation camera may be used to capture video of the instrument; the captured video is converted into images, with one video frame giving one image, finally yielding N images. The N images are labeled with the labelme labeling software (an example of a labeled sample is shown in fig. 1), and the labeled N images are used as the dataset.
2. The training data set is partitioned. The N detected-object samples are divided: 0.8N samples form the training set for the training phase, and 0.2N samples form the test set for the prediction phase.
3. These 0.8N samples were fed into a modified deep convolutional neural network for training.
The specific process is: the preprocessed 640×640×3 instrument image is convolved once to obtain a feature map of size 320×320×16; the 320×320×16 feature map is convolved to a 160×160×32 feature map; the 160×160×32 feature map is convolved to an 80×80×64 feature map; the 80×80×64 feature map is convolved to a 40×40×64 feature map; the 40×40×64 feature map is downsampled to a 20×20×64 feature map, to which depthwise separable convolution is then applied.
Depthwise separable convolution is divided into two stages: depthwise convolution and point-wise convolution. In the depthwise convolution, the input 20×20×64 feature map is convolved with 64 kernels of size 3×3×1; each kernel convolves only one channel of the input and produces a feature map of depth 1, so each input channel has a corresponding depth-1 feature map.
After the depthwise convolution, the output feature map is 20×20×64. A point-wise convolution is then applied to the result of the depthwise convolution: a convolution with 1×1 kernels that converts each depth-1 feature map into a depth-64 feature map, so the feature map after the point-wise convolution is still 20×20×64. The 20×20×64 feature map is then downsampled to 10×10×64 and processed with a depthwise separable convolution of the same structure: in the depthwise stage, 64 kernels of size 3×3×1 are used, each kernel convolving only one input channel and producing a depth-1 feature map, so after the depthwise stage the output is 10×10×64; the point-wise 1×1 convolution then converts each depth-1 feature map into a depth-64 feature map, leaving the feature map at 10×10×64. The same downsampling followed by depthwise separable convolution is applied again to obtain a 5×5×64 feature map, then a 3×3×64 feature map, and finally a 2×2×64 feature map.
The 2×2×64 feature map is upsampled to a 4×4×64 feature map; because features of the shallow network need to be reused, this 4×4×64 map is route-connected with the 3×3×64 feature map obtained above. Upsampling then produces a 5×5 feature map that is route-connected with the shallow 5×5×64 feature map; upsampling again produces a 10×10 feature map route-connected with the shallow 10×10×64 feature map; upsampling again produces a 20×20 feature map route-connected with the shallow 20×20×64 feature map; and upsampling once more produces a 40×40 feature map route-connected with the shallow 40×40×64 feature map. The final result is a feature map of size 40×40×64.
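As a minimal sketch of the depthwise-separable block described above, assuming a PyTorch implementation (the patent does not name a framework):

    import torch
    import torch.nn as nn

    class DepthwiseSeparableConv(nn.Module):
        # Depthwise 3x3 convolution (one 3x3x1 kernel per channel) followed by a
        # 1x1 point-wise convolution, as described for the 64-channel feature maps.
        def __init__(self, channels=64):
            super().__init__()
            self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                       padding=1, groups=channels)     # per-channel 3x3
            self.pointwise = nn.Conv2d(channels, channels, kernel_size=1)  # 1x1 mixing

        def forward(self, x):
            return self.pointwise(self.depthwise(x))

    # e.g. a 20x20x64 feature map keeps its spatial size and channel count:
    # y = DepthwiseSeparableConv(64)(torch.randn(1, 64, 20, 20))  # -> (1, 64, 20, 20)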
5. A Softmax operation is performed on the 40×40×64 feature map to obtain, for each pixel, the probability of belonging to each class; an argmax operation converts these probabilities into the class to which each pixel belongs. Finally the 40×40×64 feature map is restored to the input-image size of 640×640×3: the 40×40×64 feature map is upsampled by bilinear interpolation to 640×640, giving a 640×640×64 feature map, and a 1×1 convolution then reduces the channel count, converting the 640×640×64 feature map into a 640×640×3 feature map.
Because each sample is annotated with three points, the final prediction output of the network is the pixel coordinates of those three points; the slope of the pointer with respect to the bottom edge of the image is calculated from the predicted pixel coordinates, and the angle obtained from this slope is the pointer angle.
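A small illustration of turning predicted pixel coordinates into a pointer angle via the slope against the image bottom edge; treating two of the three predicted points as the pointer tail and tip is an assumption made only for this sketch.

    import math

    def pointer_angle_from_points(tail, tip):
        # Slope of the pointer relative to the image bottom edge (x-axis); image
        # pixel y grows downward, hence the sign flip on dy.
        dx = tip[0] - tail[0]
        dy = -(tip[1] - tail[1])
        return math.degrees(math.atan2(dy, dx))   # pointer angle in degrees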
To evaluate the fitting ability of the algorithm, the angle of the meter pointer is added as a penalty term to the loss function so as to force the model to learn the angle information better. The angle information is combined with the lbox loss, and the resulting loss function is shown below.
Loss = λ1 × Angle Loss + λ2 × lbox Loss + λ3 × lobj Loss + λ4 × lcls Loss (formula 1)
Wherein Angle Loss measures the difference between the pointer angle predicted by the model and the actual pointer angle. This loss term is defined on the basis of cosine similarity, since cosine similarity can capture the similarity between angles while also penalising inaccurate predictions. lbox Loss measures the position error of the model, computed from the distance between the centre point of the predicted box and the centre point of the actual box. lobj Loss measures the confidence error of the model, computed from the error between the confidence value predicted by the model and the actual confidence value. Finally, lcls Loss measures the class error of the model, computed from the difference between the class label predicted by the model and the actual label. To balance the influence of these loss terms, four weight parameters λ1, λ2, λ3, λ4 are used; they are adjusted according to the actual data and task to obtain the best result. For example, in some cases the class error may be more important than the position error, so the value of λ4 can be increased to amplify its effect. During training, the model continually adjusts its own parameters to reduce the value of the loss function as far as possible.
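A hedged sketch of the combined loss of formula 1, assuming PyTorch tensors; the exact cosine-similarity angle term and the box/objectness/class terms are not given in the text, so standard stand-ins are used.

    import torch

    def combined_loss(pred_angle, true_angle, lbox, lobj, lcls,
                      lambdas=(1.0, 1.0, 1.0, 1.0)):
        # Angle loss based on cosine similarity between predicted and actual
        # pointer angles (1 - cos(delta) is small when the angles agree).
        delta = torch.deg2rad(pred_angle - true_angle)
        angle_loss = (1.0 - torch.cos(delta)).mean()
        l1, l2, l3, l4 = lambdas
        # Formula 1: weighted sum of angle, box-position, objectness and class losses.
        return l1 * angle_loss + l2 * lbox + l3 * lobj + l4 * lcls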
Use phase
The angle value ang_Net of the pointer of the tilted instrument in the instrument tilt view is identified by the improved deep convolutional neural network.
Example IV
The embodiment of the invention also provides an electronic device, which can be connected with the monitoring device of the instrument and comprises a memory and a processor, wherein the memory stores instructions, and the processor executes the instructions stored in the memory to perform the steps of the above automatic reading method for an inclined pointer instrument.
In order that the above-described aspects may be better understood, exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the terms first, second, third, etc. are for convenience of description only and do not denote any order. These terms may be understood as part of the component name.
Furthermore, it should be noted that in the description of the present specification, the terms "one embodiment," "some embodiments," "example," "specific example," or "some examples," etc., refer to a specific feature, structure, material, or characteristic described in connection with the embodiment or example being included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art upon learning the basic inventive concepts. Therefore, the appended claims should be construed to include preferred embodiments and all such variations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, the present invention should also include such modifications and variations provided that they come within the scope of the following claims and their equivalents.

Claims (9)

1. An automatic reading method of an inclined pointer instrument is characterized by comprising the following steps:
step 101, identifying the instrument tilt view to be recognised based on a pre-trained deep convolutional neural network, obtaining the angle value ang_Net identified and output by the deep convolutional neural network, and obtaining the three-dimensional coordinate [x_Net, y_Net, z_Net] to which the angle value ang_Net belongs;
step 102, selecting at least three feature points and performing rotation identification based on a preset instrument front view and the instrument tilt view to be recognised, obtaining the rotation angles (θ_x, θ_y, θ_z) and the rotation matrices (rotM_X, rotM_Y, rotM_Z) by which the instrument front view is rotated into the state of the instrument tilt view;
step 103, according to the three-dimensional coordinate [x_Net, y_Net, z_Net] to which the angle value ang_Net belongs, the rotation angles (θ_x, θ_y, θ_z) and the rotation matrices (rotM_X, rotM_Y, rotM_Z), acquiring the angle value θ_point of the instrument pointer in the instrument tilt view when the instrument is viewed head-on;
step 104, according to the basic parameters of the instrument in the instrument front view and the angle value θ_point of the pointer when viewed head-on, obtaining the meter reading R in the instrument tilt view to be recognised.
2. The method according to claim 1, wherein the step 104 comprises:
obtaining the meter reading R according to the following formula;
wherein R is the meter reading, V_min and V_max are respectively the minimum reading and the maximum reading of the meter, θ_min and θ_max are respectively the angle between the minimum-reading scale mark and the horizontal line and the angle between the maximum-reading scale mark and the horizontal line, and θ_point is the angle value of the instrument pointer when viewed head-on, i.e. the angle between the instrument pointer and the horizontal line in the head-on view.
3. The method according to claim 1, wherein said step 103 comprises:
obtaining the angle value θ_point of the instrument pointer when the instrument is viewed head-on according to the following formulas;
front_x = x_n * rotM_X * rotM_Y * rotM_Z
front_y = y_n * rotM_X * rotM_Y * rotM_Z
front_z = 0
wherein front_x, front_y, front_z are the coordinates, restored to the head-on view, of the pointer tip of the inclined instrument in the instrument tilt view;
[x_Net, y_Net, z_Net] is the three-dimensional coordinate of the pointer tip of the inclined instrument in the instrument tilt view, and [x_n, y_n, z_n] is the coordinate of the pointer tip on the rotation plane;
the rotation plane is the plane formed by the rotation angles (θ_x, θ_y, θ_z);
nl is the normal of the rotation plane, obtained with [x_Net, y_Net, z_Net].
4. A method according to claim 3, wherein said step 103 comprises:
acquiring the normal nl of the rotation plane according to the following formulas;
up = [0 1 0]
r = [1 0 0]
up_R = rotM_X × rotM_Y × rotM_Z × up
r_R = rotM_X × rotM_Y × rotM_Z × r
nl = -(r_R × up_R)
wherein up and r are respectively the unit point on the positive y-axis and the unit point on the positive x-axis of the instrument front view, up_R and r_R are respectively the corresponding unit points on the y-axis and x-axis after the rotation into the instrument tilt view, and nl is the normal of the rotation plane.
5. The method according to any one of claims 1 to 4, wherein prior to step 101, the method further comprises:
acquiring a training data set, and training a deep neural network model by adopting the training data set;
the training data set comprises a plurality of instrument front views annotated with feature points and, for each front view, instrument tilt views annotated with the corresponding feature points;
the loss function of the training process is:
Loss = λ1*Angle Loss + λ2*lbox Loss + λ3*lobj Loss + λ4*lcls Loss;
wherein the Angle Loss is used for measuring the difference between the pointer angle predicted by the deep neural network model and the actual pointer angle; the lbox Loss is used for measuring the position error of the deep neural network model; the lobj Loss is used for measuring the confidence error of the deep neural network model; the lcls Loss is used for measuring the class error of the deep neural network model; λ1, λ2, λ3, λ4 are four weight parameters;
and/or,
the training process for training the deep neural network model by adopting the training data set comprises the following steps:
each image is preprocessed; the preprocessed 640×640×3 instrument image is convolved n times and processed with depthwise convolution, point-wise convolution and depthwise separable convolution to obtain a feature map of size 40×40×64; a Softmax operation is performed on the 40×40×64 feature map to obtain the probability that each pixel in the feature map belongs to each class, and an argmax operation converts these probabilities into the class to which each pixel belongs; the 40×40×64 feature map is finally restored to a feature map of size 640×640×3, the pixel coordinates of several feature points are predicted, the slope of the pointer with respect to the bottom edge of the image is calculated from the predicted feature-point pixel coordinates, and the pointer angle is obtained from the slope.
6. The method according to any one of claims 1 to 4, wherein said step 101 comprises:
acquiring the three-dimensional coordinates [x_Net y_Net z_Net] corresponding to the angle value ang_Net according to the following formula;
wherein z_Net = -1; z_Net = -1 indicates that the instrument inclined view is generated by projection of the instrument front view after rotation, and z = 0 indicates the instrument front view.
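(Illustrative note, not part of the claims: the formula referred to in claim 6 is not reproduced in the text. A plausible sketch, assuming each feature point is placed on a unit circle at its angle and using the z conventions stated in claims 6 and 7, would be:)

```python
import numpy as np

def angle_to_coords(ang_net_deg, oblique=True):
    """Hypothetical placement of a feature point at angle ang_Net on a unit circle;
    z = -1 marks the inclined view, z = 0 the front view (claims 6 and 7)."""
    a = np.radians(ang_net_deg)
    return np.array([np.cos(a), np.sin(a), -1.0 if oblique else 0.0])
```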
7. The method according to any one of claims 1 to 4, wherein the step 102 comprises:
step 102-1, respectively marking three corresponding feature points on the instrument front view and the instrument inclined view; and, for the two-dimensional coordinates of the feature points in the instrument front view and the coordinates of the feature points in the instrument inclined view, acquiring the respective first angle values based on the following formula;
step 102-2, acquiring, according to the respective first angle values, the three-dimensional coordinates (x, y, z) of each feature point in the instrument front view and in the instrument inclined view based on the following formula, wherein ang is the pre-acquired first angle value of each feature point;
z=0;
step 102-3, obtaining the rotation matrices (rotM_X, rotM_Y, rotM_Z) from the instrument front view to the instrument inclined view based on the three-dimensional coordinates of each feature point in the instrument front view and the three-dimensional coordinates of each feature point in the instrument inclined view;
wherein θ_x, θ_y and θ_z are obtained by traversing values from 0° to 180°.
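(Illustrative note, not part of the claims: a small sketch of steps 102-1 and 102-2 under the same illustrative conventions as above. The dial-centre argument, the atan2-based angle and the unit-circle placement are assumptions; the traversal of step 102-3 is sketched after claim 8 below.)

```python
import numpy as np

def first_angle(pt_xy, center_xy):
    """First angle value (degrees) of a marked feature point, measured about the
    dial centre; pixel y grows downwards, hence the sign flip."""
    return np.degrees(np.arctan2(-(pt_xy[1] - center_xy[1]), pt_xy[0] - center_xy[0]))

def to_3d(ang_deg, z=0.0):
    """Unit-circle placement of a feature point; z = 0 for the front view."""
    a = np.radians(ang_deg)
    return np.array([np.cos(a), np.sin(a), z])
```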
8. The method of claim 7, wherein the step 102 further comprises:
acquiring θ_x, θ_y and θ_z by traversal; this specifically comprises the following steps:
acquiring, according to the following formula 5, the rotated coordinates (x_R, y_R, z_R) of the three feature points in the instrument front view, projecting the three rotated coordinate points onto the plane z = 0, and calculating, according to formula 6, the angles rotAng1, rotAng2 and rotAng3 of the three feature points on the plane z = 0;
wherein rotAng1 corresponds to (x_R1, y_R1) and is the angle value calculated after projecting the rotated coordinate point of the first feature point of the instrument front view, rotAng2 corresponds to (x_R2, y_R2) and is the angle value calculated after projecting the rotated coordinate point of the second feature point of the instrument front view, and rotAng3 corresponds to (x_R3, y_R3) and is the angle value calculated after projecting the rotated coordinate point of the third feature point of the instrument front view;
if the angles of the three rotated feature points are each within the error range of the first angle values of the corresponding feature points in the instrument inclined view, the three rotation angles are obtained;
namely: (x1, y1, z1) → (x_R1, y_R1, z_R1),
(x2, y2, z2) → (x_R2, y_R2, z_R2),
(x3, y3, z3) → (x_R3, y_R3, z_R3),
wherein R1, R2 and R3 respectively denote the three rotated feature points, and x_R1, y_R1 and z_R1 respectively denote the x-axis, y-axis and z-axis coordinates of the first rotated feature point of the instrument front view (and likewise for R2 and R3);
in formula 5, x_R, y_R and z_R respectively represent the x, y and z coordinates of a feature point of the instrument front view after rotation; x, y and z represent the feature point coordinates of the instrument front view; and rotMz, rotMy and rotMx respectively represent the rotation matrix about the z axis, the rotation matrix about the y axis and the rotation matrix about the x axis;
θ - thr < rotAng < θ + thr (formula 7)
if ang4 - thr < rotAng1 < ang4 + thr, ang5 - thr < rotAng2 < ang5 + thr and ang6 - thr < rotAng3 < ang6 + thr, then the current θ_x, θ_y and θ_z are the three rotation angles finally obtained;
the first angle value corresponding to the first feature point of the instrument front view is ang1;
the first angle value corresponding to the second feature point of the instrument front view is ang2;
the first angle value corresponding to the third feature point of the instrument front view is ang3;
the first angle value corresponding to the first feature point of the instrument inclined view is ang4;
the first angle value corresponding to the second feature point of the instrument inclined view is ang5;
and the first angle value corresponding to the third feature point of the instrument inclined view is ang6.
thr is a preset error.
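(Illustrative note, not part of the claims: an end-to-end sketch of the traversal of claim 8. The 0-180° bounds, the error threshold thr and the angle comparison of formula 7 follow the claim text; the step size, the rotation-matrix order, the row-vector convention for formula 5 and all helper names are assumptions.)

```python
import numpy as np

def rot_xyz(tx, ty, tz):
    """rotM_X @ rotM_Y @ rotM_Z for angles given in degrees."""
    tx, ty, tz = np.radians([tx, ty, tz])
    Rx = np.array([[1, 0, 0], [0, np.cos(tx), -np.sin(tx)], [0, np.sin(tx), np.cos(tx)]])
    Ry = np.array([[np.cos(ty), 0, np.sin(ty)], [0, 1, 0], [-np.sin(ty), 0, np.cos(ty)]])
    Rz = np.array([[np.cos(tz), -np.sin(tz), 0], [np.sin(tz), np.cos(tz), 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def find_rotation(front_pts, oblique_angles, thr=2.0, step=1):
    """Brute-force search over (theta_x, theta_y, theta_z) in 0..180 degrees.
    front_pts: 3x3 array of the three front-view feature points (z = 0).
    oblique_angles: (ang4, ang5, ang6) of the same points in the inclined view."""
    oblique_angles = np.asarray(oblique_angles, dtype=float)
    for tx in range(0, 181, step):
        for ty in range(0, 181, step):
            for tz in range(0, 181, step):
                R = rot_xyz(tx, ty, tz)
                rotated = front_pts @ R          # formula 5 (row-vector convention assumed)
                # project onto z = 0 and take the planar angles (formula 6)
                rot_ang = np.degrees(np.arctan2(rotated[:, 1], rotated[:, 0]))
                if np.all(np.abs(rot_ang - oblique_angles) < thr):   # formula 7
                    return tx, ty, tz
    return None
```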
9. An electronic device, comprising: a memory and a processor, said memory having stored therein a computer program, said processor executing the computer program stored in said memory and performing the steps of a method for automatic reading of a tilt pointer meter according to any one of claims 1 to 8.
CN202311181600.8A 2023-09-13 2023-09-13 Automatic reading method for inclined pointer type instrument Pending CN117315635A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311181600.8A CN117315635A (en) 2023-09-13 2023-09-13 Automatic reading method for inclined pointer type instrument

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311181600.8A CN117315635A (en) 2023-09-13 2023-09-13 Automatic reading method for inclined pointer type instrument

Publications (1)

Publication Number Publication Date
CN117315635A true CN117315635A (en) 2023-12-29

Family

ID=89236315

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311181600.8A Pending CN117315635A (en) 2023-09-13 2023-09-13 Automatic reading method for inclined pointer type instrument

Country Status (1)

Country Link
CN (1) CN117315635A (en)

Similar Documents

Publication Publication Date Title
CN110135455B (en) Image matching method, device and computer readable storage medium
CN110866953B (en) Map construction method and device, and positioning method and device
US10334168B2 (en) Threshold determination in a RANSAC algorithm
JP5830546B2 (en) Determination of model parameters based on model transformation of objects
KR101791590B1 (en) Object pose recognition apparatus and method using the same
CN108648194B (en) Three-dimensional target identification segmentation and pose measurement method and device based on CAD model
CN111429533B (en) Camera lens distortion parameter estimation device and method
US20130195351A1 (en) Image processor, image processing method, learning device, learning method and program
US20230169677A1 (en) Pose Estimation Method and Apparatus
JP5833507B2 (en) Image processing device
CN109919971B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112163588A (en) Intelligent evolution-based heterogeneous image target detection method, storage medium and equipment
EP3185212B1 (en) Dynamic particle filter parameterization
JP2961264B1 (en) Three-dimensional object model generation method and computer-readable recording medium recording three-dimensional object model generation program
CN114140623A (en) Image feature point extraction method and system
CN112053441A (en) Full-automatic layout recovery method for indoor fisheye image
CN110120013A (en) A kind of cloud method and device
CN113393524A (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN112733641A (en) Object size measuring method, device, equipment and storage medium
CN111127556A (en) Target object identification and pose estimation method and device based on 3D vision
CN108447092B (en) Method and device for visually positioning marker
CN114882106A (en) Pose determination method and device, equipment and medium
CN113436251A (en) Pose estimation system and method based on improved YOLO6D algorithm
CN116433822B (en) Neural radiation field training method, device, equipment and medium
CN117315635A (en) Automatic reading method for inclined pointer type instrument

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination