CN111191562B - Pointer instrument indication value reading method based on CNN neural network

Info

Publication number
CN111191562B
Authority
CN
China
Prior art keywords: instrument, pointer, image, net, line
Prior art date
Legal status
Active
Application number
CN201911362371.3A
Other languages
Chinese (zh)
Other versions
CN111191562A (en)
Inventor
吴武勋
宋建斌
徐晓东
张青
李瀚
Current Assignee
Guangdong Eshore Technology Co Ltd
Original Assignee
Guangdong Eshore Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Eshore Technology Co Ltd
Priority to CN201911362371.3A
Publication of CN111191562A
Application granted
Publication of CN111191562B
Legal status: Active
Anticipated expiration

Classifications

    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G06N3/045 Neural networks; Architecture, e.g. interconnection topology; Combinations of networks
    • G06N3/08 Neural networks; Learning methods
    • G06V10/462 Extraction of image or video features; Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V2201/02 Indexing scheme relating to image or video recognition or understanding; Recognising information on displays, dials, clocks

Abstract

The invention discloses a pointer instrument indication value reading method based on a CNN neural network. Through instrument detection and type judgment, instrument feature point detection, and instrument indication value calculation, the method detects the instruments in an image, determines their types, and calculates their indication values. The method identifies pointer instruments in an image more accurately and reads their indication values; it can be applied wherever automatic reading of pointer instruments is required, replacing manual work with intelligent instrument monitoring and issuing an early warning when a preset value is exceeded.

Description

Pointer instrument indication value reading method based on CNN neural network
Technical Field
The invention relates to the technical field of image recognition, in particular to a pointer instrument indicating value reading method based on a CNN neural network.
Background
Pointer instruments respond quickly, resist shock and interference well, display clearly, and are inexpensive; a mechanical pointer instrument exhibits essentially none of the response lag of a digital instrument, and it can also indicate the degree of overload. In some situations, such as measuring the polarity of voltage and current transformers, a digital meter cannot respond at all and only a pointer meter can be used.
Despite the development of electronic technology, pointer instruments are therefore still widely used in certain industrial applications because of advantages that digital instruments cannot replace. With the vigorous development of Industry 4.0, the traditional practice of reading meter indications manually will gradually be replaced by visual reading systems.
However, most existing automatic reading schemes for pointer instruments rely on traditional machine-vision image processing, which has drawbacks: a stable light source is usually required and the acquisition environment must not fluctuate greatly, which makes such schemes difficult to apply to old equipment or to equipment that is inconvenient to retrofit. In natural environments they are affected by adverse factors such as illumination changes, background interference, shadows, rotation and rapid movement; their adaptability is poor, the analysis results fluctuate greatly, the requirements on the installation of monitoring equipment are excessive, and their resistance to camera shake is weak.
In addition, pointer instruments come in many structural types. The invention therefore provides a CNN-based deep convolutional neural network and, combining it with the structural characteristics of pointer instruments, designs an identification scheme that is feasible, accurate, and suitable for reading the indication values of various pointer instruments in a variety of complex scenes. The scheme adapts to day-night changes and lighting changes, analyzes the indication value of a pointer instrument in real time, and can replace manual work for capabilities such as real-time instrument monitoring and alarming when the indication value exceeds a threshold.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a pointer instrument indication value reading method based on a CNN neural network, which identifies pointer instruments in an image more accurately and reads their indication values. It can be applied wherever automatic reading of pointer instruments is required, replacing manual work with intelligent instrument monitoring and issuing an early warning when a preset value is exceeded.
In order to achieve the above object, the present invention provides a pointer instrument indication value reading method based on CNN neural network, which includes the following steps:
Step 1: divide the instrument detection and key point extraction network based on the CNN neural network into 3 parts: a P-NET network, an R-NET network and an O-NET network;
Step 2: determine the key point coordinate information in the instrument according to the type of pointer instrument to be detected, including the key points on the 0-scale line or a parallel reference line thereof and the key points on the line where the pointer lies;
Step 3: automatically construct a training data set from various instrument images and train the network models to obtain the P-NET, R-NET and O-NET network models for the instruments;
Step 4: detect the original image of the instrument under inspection with the P-NET, R-NET and O-NET network models to obtain the region position information and key point coordinate information of each instrument in the image;
Step 5: from the key point coordinate information obtained in step 4, calculate the included angle between the two intersecting lines, namely the straight line on which the pointer actually lies and a parallel reference line of the straight line on which the pointer lies when it points to the 0 scale; this included angle is the deflection angle θ of the instrument pointer;
Step 6: according to the range distribution of each instrument, divide the range into gears, each covering a region where the scale is uniformly distributed, and compute the corresponding set of angle intervals; determine from the real-time pointer deflection angle θ which gear interval the pointer falls in, and obtain the real-time instrument indication value V from the instrument indication value calculation formula
V = V_min + (θ - θ_min) / (θ_max - θ_min)
where V_min is the minimum indication of this gear, θ_min is the minimum angle of this gear, and θ_max is the maximum angle of this gear.
Preferably, the original image of the instrument under inspection is processed to detect the pointer instruments in the image, including the region where each instrument is located and its type. The specific process is as follows:
a. scale the original image into a set of images by the image spatial pyramid method;
b. pass the image set through the neural network P-NET, filter the regions by image category and merge them by non-maximum suppression, obtaining the region coordinate information and category information output by this step;
c. in the original image, crop out an image set according to the region coordinate information output by the previous step, pass it through the neural network R-NET, filter the regions by image category and merge them by non-maximum suppression, obtaining the region coordinate information and category information output by this step;
d. in the original image, crop out an image set according to the region coordinates obtained in the previous step, pass it through the neural network O-NET and merge by non-maximum suppression, and then output the final image category, region and key point information.
Preferably, the instrument key point information includes two key points on the 0-scale line or a parallel reference line thereof, the line connecting these two key points being parallel to the 0-scale horizontal line, and the coordinates of two end points on the line where the pointer lies, the line connecting these two key points coinciding with the pointer; the extraction of the instrument key points comprises the following steps:
in the original image, crop out an image set according to the regions output by the R-NET network, and output the instrument type, the instrument region and the instrument key point coordinates through the neural network O-NET;
filter the regions by category and merge them by non-maximum suppression, removing non-instrument regions and merging the regions of the same instrument into one region; after this step the region coordinates, categories and key point coordinates of the instruments are obtained.
Preferably, the key points on the pointer are taken as the middle point of the pointer and the end point of the pointer.
Preferably, the calculation formula of the instrument pointer deflection angle θ is:
θ = arctan((y_D - y_C)/(x_D - x_C)) - arctan((y_B - y_A)/(x_B - x_A))
wherein A and B are the key points on the 0-scale parallel reference line, C and D are the key points on the line where the pointer lies, and (x, y) denote their image coordinates in the instrument.
Preferably, the precision of the pointer key point coordinates in the method is determined by the size of the feature vector output by the fully connected layer in the O-NET network; if the size of the feature output by that fully connected layer is modified, the corresponding precision changes accordingly.
The technical scheme of the invention has the following beneficial effects:
1. The invention adopts the CNN neural network as the basis for extracting the instrument key points, which overcomes the noise interference that troubles traditional methods in analysis, adapts to various illumination environments, day-night switching and changing environmental shadows, and therefore has good adaptability.
2. The models in the invention are highly trainable: a user can conveniently train models for different instrument types, different instrument ranges, different range distributions and so on, and quickly adapt to a variety of instruments.
3. The invention can analyze several instruments of the same type in an image simultaneously and read their indication values, which improves resource utilization and practicability.
4. The 0-scale line uses a parallel reference line, which avoids the problem that the key point at the 0 scale is hard to locate when it is occluded by the pointer, and improves accuracy.
5. The key point information of the invention does not need to reference the rotation center point of the pointer, which avoids the situation where some types of pointer instruments cannot be identified because the rotation center is occluded.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is an overall flow chart of the algorithm of the present invention;
FIG. 2 is a diagram of a P-NET network of the present invention;
FIG. 3 is a diagram of the R-NET network of the present invention;
FIG. 4 is a diagram of the O-NET network of the present invention;
FIG. 5 is a schematic diagram of the key point coordinates of the present invention;
FIG. 6 is a schematic diagram of the relationship between the indicating angle and the coordinates of the key points;
FIG. 7 is a pointer instrument of an embodiment of the present invention;
FIG. 8 is a set of spatial pyramid images of an embodiment of the present invention;
FIG. 9 is a schematic diagram of the P-NET output of an embodiment of the present invention;
FIG. 10 is a schematic representation of the R-NET output of an embodiment of the present invention;
fig. 11 is a schematic diagram of the O-NET output of an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The pointer instrument indication value reading method based on the CNN neural network can simultaneously identify several instruments whose appearance characteristics are fairly consistent, such as a square voltmeter and a square ammeter: the dial characteristics of the two instruments are similar, with the coordinates of the rotation center point located at the lower right corner of the instrument, as shown in Fig. 7, and a relevant reference line can be found in the instrument for both the pointer and the horizontal line of scale mark 0, as shown in Fig. 6. Once the reference line is determined, a training set can be produced, the network trained, and a final detection model generated for real-time analysis.
The overall flow of the pointer instrument indication value reading method based on the CNN neural network is shown in Fig. 1. The whole reading process is divided into 3 steps: instrument detection and type judgment, instrument feature point detection, and instrument indication value calculation. After these 3 steps, the instruments in the image and their types have been detected and the indication value of each instrument can be calculated. The details are as follows:
1. network definition
The whole network is divided into 3 parts:
part 1 is a full convolution network P-NET, as shown in FIG. 2, the image with size M N is input, the input with any size is supported, the input image size during training is 12X 12, and the P-NET network finally regresses to generate the category vector and the candidate frame vector of the image.
The part 2 is an R-NET network, as shown in fig. 3, an image with a size of M × N is input, an input with any size is supported, the size of the input image during training is 24 × 24, a candidate window generated by the P-NET network is subjected to non-maximum suppression to form an image set, and after passing through the R-NET network, the R-NET network finally regresses to generate a category vector and a candidate frame vector of the image.
Part 3 is an O-NET network, as shown in FIG. 4, in order to make the coordinate regression of the key point more accurate, the network modifies the output characteristic size of the full connect layer, changes the original 256 to 512, improves the accuracy of the key point, and makes the final indication value calculation more practical; while the value of the Meter landmark localization is modified, which is determined from the number of keypoints, i.e. keypoint number 2, in fig. 4 the value of Meter landmark localization is 8, i.e. representing 4 keypoints. In the application process, the number of the key points can be set according to needs, and 4 key points are selected as the optimal scheme in the invention. The network inputs images with the size of M x N, supports input with any size, the size of input images during training is 48 x 48, candidate windows generated by an R-NET network are subjected to image aggregation formed after non-maximum suppression, and after passing through the O-NET network, the O-NET network finally regresses to generate a category vector and a candidate frame vector of the images, and the number and position information of key points in the images.
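The description above fixes only a few elements of O-NET: a 48 × 48 training input, a fully connected layer widened from 256 to 512, and an 8-value landmark output for 4 key points; the convolutional trunk itself appears only in Fig. 4, which is not reproduced here. The following is therefore a minimal PyTorch sketch that assumes a standard MTCNN-style O-Net trunk and keeps only those stated elements; the layer counts, channel widths and the three-class example are assumptions.

```python
import torch
import torch.nn as nn

class ONet(nn.Module):
    """O-NET sketch: class scores, bounding-box regression and 4 key points (8 values)."""
    def __init__(self, num_classes: int = 3):    # e.g. background / voltmeter / ammeter (assumed)
        super().__init__()
        self.trunk = nn.Sequential(               # assumed MTCNN-style trunk for a 48x48 input
            nn.Conv2d(3, 32, 3), nn.PReLU(),      # 48 -> 46
            nn.MaxPool2d(3, 2, ceil_mode=True),   # 46 -> 23
            nn.Conv2d(32, 64, 3), nn.PReLU(),     # 23 -> 21
            nn.MaxPool2d(3, 2, ceil_mode=True),   # 21 -> 10
            nn.Conv2d(64, 64, 3), nn.PReLU(),     # 10 -> 8
            nn.MaxPool2d(2, 2),                   # 8  -> 4
            nn.Conv2d(64, 128, 2), nn.PReLU(),    # 4  -> 3
            nn.Flatten(),
            nn.Linear(128 * 3 * 3, 512), nn.PReLU(),  # fully connected layer widened from 256 to 512
        )
        self.cls = nn.Linear(512, num_classes)    # instrument category vector
        self.box = nn.Linear(512, 4)              # candidate-box regression vector
        self.landmark = nn.Linear(512, 8)         # Meter landmark localization: 4 key points x (x, y)

    def forward(self, x):
        feat = self.trunk(x)
        return self.cls(feat), self.box(feat), self.landmark(feat)

# quick shape check on a 48x48 training-size crop
if __name__ == "__main__":
    cls, box, lmk = ONet()(torch.zeros(1, 3, 48, 48))
    print(cls.shape, box.shape, lmk.shape)        # shapes: (1, num_classes), (1, 4), (1, 8)
```

P-NET and R-NET would be sketched analogously, with 12 × 12 and 24 × 24 training inputs and without the enlarged landmark head.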
2. Network training
A training data set is automatically constructed from various instrument images, the network parameters are trained, the region position relations of the instruments to be detected are determined, the relevant parameters and learning strategy are set, and finally the P-NET, R-NET and O-NET network models are obtained.
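The patent does not spell out the training objective. As an assumption, cascades of this kind are usually trained with a multi-task loss per stage, cross-entropy for the category head and squared error for the box and landmark heads, so one O-NET training step might look roughly as follows (the loss weights and argument names are placeholders):

```python
import torch
import torch.nn.functional as F

def onet_training_step(model, optimizer, images, cls_targets, box_targets, lmk_targets,
                       w_cls=1.0, w_box=0.5, w_lmk=1.0):
    """One assumed multi-task training step: classification + box + key-point regression."""
    cls_logits, box_pred, lmk_pred = model(images)               # e.g. the ONet sketch above
    loss = (w_cls * F.cross_entropy(cls_logits, cls_targets)     # instrument category
            + w_box * F.mse_loss(box_pred, box_targets)          # bounding-box offsets
            + w_lmk * F.mse_loss(lmk_pred, lmk_targets))         # 4 key points (8 coordinates)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```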
3. Instrument detection and classification
This step is mainly used to detect the pointer instrument in the image, including the area where the instrument is located, and the type of instrument, such as voltmeter and ammeter.
The specific process is as follows:
a. scale the original image into a set of images by the image spatial pyramid method;
b. pass the image set through the neural network P-NET, filter the regions by image category and merge them by non-maximum suppression, obtaining the region coordinate information and category information output by this step;
c. in the original image, crop out an image set according to the region coordinate information output by the previous step, pass it through the neural network R-NET, filter the regions by image category and merge them by non-maximum suppression, obtaining the region coordinate information and category information output by this step;
d. in the original image, crop out an image set according to the region coordinates obtained in the previous step, pass it through the neural network O-NET and merge by non-maximum suppression, and then output the final image category, region and key point information, as sketched below.
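A minimal sketch of steps a to d, assuming the three trained networks are wrapped as callables that return boxes, scores and (for O-NET) key points already mapped back to original-image coordinates; the pyramid factor, score threshold and IoU threshold are illustrative values, and torchvision's non-maximum suppression is used for the merging step.

```python
import torch
from torchvision.ops import nms

def image_pyramid(image: torch.Tensor, min_size: int = 12, factor: float = 0.709):
    """Step a: scale a float image tensor (C, H, W) into a set of progressively smaller images."""
    pyramid, scale = [], 1.0
    c, h, w = image.shape
    while min(h * scale, w * scale) >= min_size:
        size = (int(h * scale), int(w * scale))
        pyramid.append(torch.nn.functional.interpolate(
            image[None], size=size, mode="bilinear", align_corners=False)[0])
        scale *= factor
    return pyramid

def detect_instruments(image, pnet, rnet, onet, score_thr=0.6, iou_thr=0.5):
    """Steps b to d: run the P-NET / R-NET / O-NET cascade with NMS after each stage."""
    # step b: P-NET on every pyramid level, then category filtering + NMS
    boxes, scores = [], []
    for level in image_pyramid(image):
        b, s = pnet(level)
        boxes.append(b)
        scores.append(s)
    boxes, scores = torch.cat(boxes), torch.cat(scores)
    keep = scores > score_thr
    boxes, scores = boxes[keep], scores[keep]
    boxes = boxes[nms(boxes, scores, iou_thr)]

    # step c: crop the P-NET regions and refine them with R-NET, then filter + NMS again
    b, s = rnet(image, boxes)
    keep = s > score_thr
    b, s = b[keep], s[keep]
    boxes = b[nms(b, s, iou_thr)]

    # step d: crop the R-NET regions and run O-NET to get the final class, region and key points
    b, s, landmarks = onet(image, boxes)
    keep = s > score_thr
    b, s, landmarks = b[keep], s[keep], landmarks[keep]
    keep = nms(b, s, iou_thr)
    return b[keep], landmarks[keep]
```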
4. Key point extraction
This step extracts the coordinate information of the key points in the instrument, namely the key points on the 0-scale line or its parallel reference line and the key points on the line where the pointer lies. The parallel reference line of scale 0 is a straight line formed by connecting two points; it can be a line with obvious features that is parallel to scale line 0, such as points A and B in Fig. 5, where the connecting line AB is parallel to scale line 0. The line where the pointer lies is formed by the two end points of the pointer, such as points C and D in Fig. 5; this step therefore extracts the coordinate information of the four key points A, B, C and D of the instrument. In practice, the key points on the pointer are taken as a point on the pointer close to the rotation center and the central point of the pointer. When selecting the key point on scale line 0, if it is difficult to find an obvious feature point for the other point of scale line 0 of the instrument, for example when the rotation center point is occluded, a key point with an obvious feature can be selected on a line parallel to scale line 0 of the instrument, to make key point selection easier.
The method comprises the following specific steps:
In the original image, an image set is cropped out according to the regions output by the R-NET network, and the instrument type, the instrument region and the instrument key point coordinates are output through the neural network O-NET.
The regions are filtered by category and merged by non-maximum suppression, non-instrument regions are removed, and the regions of the same instrument are merged into one region; after this step the region coordinates, categories and key point coordinates of the instruments are obtained.
At this point, the object regions containing instruments in the image and the key point coordinates of each object have been obtained. The key point information comprises two reference points of the 0-scale horizontal line, whose connecting line is parallel to the 0-scale horizontal line, and the two end point coordinates of the pointer, whose connecting line coincides with the pointer.
5. Angle calculation
Using the key point coordinates obtained in the key point extraction step, as shown in Fig. 6, where θ is the pointer deflection angle, the angle between the instrument pointer and the 0 scale, that is, the value of the pointer deflection angle θ, is calculated with the formula below, so that the indication pointed to by the pointer can be calculated accurately in the next step.
θ = arctan((y_D - y_C)/(x_D - x_C)) - arctan((y_B - y_A)/(x_B - x_A))
where (x_A, y_A) through (x_D, y_D) are the image coordinates of key points A, B, C and D.
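A short sketch of this angle computation; using atan2 rather than the plain arctan of the slopes is an implementation choice that avoids division by zero for vertical lines. With the key points reported in the embodiment below, A(225, 60), B(35, 42), C(167, 172) and D(120, 167), it returns approximately 0.66 degrees.

```python
import math

def deflection_angle(A, B, C, D):
    """Pointer deflection angle in degrees: angle of line CD (the pointer) minus the
    angle of line AB (the 0-scale parallel reference line), folded into [0, 180)."""
    ref = math.degrees(math.atan2(B[1] - A[1], B[0] - A[0]))   # 0-scale reference line AB
    ptr = math.degrees(math.atan2(D[1] - C[1], D[0] - C[0]))   # line CD along the pointer
    return (ptr - ref) % 180.0                                 # lines carry no direction

# key points from the embodiment (image coordinates)
print(round(deflection_angle((225, 60), (35, 42), (167, 172), (120, 167)), 2))  # ~0.66
```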
6. Indicating value calculation
The scale of each instrument has its own measuring range and range distribution; but for a given instrument type, the range and its distribution are fixed, so the indication shown by the instrument can be calculated from them together with the pointer deflection angle θ obtained in the previous step.
Since the graduations of a meter are often not uniformly distributed over the whole scale, the scale shown in Fig. 5 can be divided into 5 gears:
First gear: indication range 0 to 300 volts, θ from 0 to 11.60 degrees;
Second gear: indication range 300 to 600 volts, θ from 11.60 to 34.80 degrees;
Third gear: indication range 600 to 900 volts, θ from 34.80 to 54.70 degrees;
Fourth gear: indication range 900 to 1200 volts, θ from 54.70 to 68.70 degrees;
Fifth gear: indication range 1200 to 2400 volts, θ from 68.70 to 90 degrees.
the indication value V for each gear is calculated as follows:
V = V_min + (θ - θ_min) / (θ_max - θ_min)
where V_min is the minimum indication of this gear, θ_min is the minimum angle of this gear, and θ_max is the maximum angle of this gear.
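A sketch of the gear lookup and interpolation, using the five gears listed above as example data and implementing the formula exactly as written here; the Gear tuple and function names are illustrative.

```python
from typing import NamedTuple, Sequence

class Gear(NamedTuple):
    v_min: float       # minimum indication of the gear (volts)
    theta_min: float   # minimum angle of the gear (degrees)
    theta_max: float   # maximum angle of the gear (degrees)

# the five gears of the dial described above
GEARS = [
    Gear(0,     0.00, 11.60),
    Gear(300,  11.60, 34.80),
    Gear(600,  34.80, 54.70),
    Gear(900,  54.70, 68.70),
    Gear(1200, 68.70, 90.00),
]

def indication_value(theta: float, gears: Sequence[Gear]) -> float:
    """Find the gear whose angle interval contains theta and apply the formula above."""
    for g in gears:
        if g.theta_min <= theta <= g.theta_max:
            return g.v_min + (theta - g.theta_min) / (g.theta_max - g.theta_min)
    raise ValueError(f"deflection angle {theta} is outside the dial range")

print(indication_value(40.0, GEARS))   # theta = 40 degrees falls in the third gear
```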
The invention is further illustrated by the following examples:
For an image whose original is shown in Fig. 7, a series of scaling operations is first performed on the original image by the image spatial pyramid method to obtain the spatial pyramid image set of the image to be detected, as shown in Fig. 8.
Each image in the spatial pyramid image set is passed through the neural network P-NET, and non-maximum suppression is applied to the P-NET output to obtain the instrument category and position information of each image. After the instrument categories and positions of all images have been obtained, non-maximum suppression is applied to all of them, giving the instrument category and position information of the image to be detected after P-NET, as shown in Fig. 9.
According to the position information obtained in Fig. 9, each region is cropped from the original image to be detected, the cropped image is scaled to 24 × 24 and passed through the neural network R-NET, and non-maximum suppression is applied to the R-NET output to obtain the instrument category and position information of all regions. After all regions have been analyzed, non-maximum suppression is applied to the instrument categories and positions of all regions, giving the category and position information of the image to be detected after R-NET, as shown in Fig. 10.
According to the position information obtained in Fig. 10, each region is cropped from the image to be detected, the cropped image is scaled to 48 × 48 and passed through O-NET, and non-maximum suppression is applied to the O-NET output to obtain the instrument category, position and key point coordinate information of the image to be detected after O-NET. After all regions have been analyzed, non-maximum suppression is applied to all the instrument category, position and key point coordinate information, at which point all the instrument information of the image to be detected, including the instrument positions and key points, has been obtained, as shown in Fig. 11.
Substituting the obtained instrument key points A(225, 60), B(35, 42), C(167, 172) and D(120, 167) into the angle calculation formula, the angle θ between the pointer and the horizontal scale is calculated as θ = 0.66 degrees.
The indication of the instrument can then be calculated from this θ value using the indication-value calculation method for this instrument. According to the range distribution of the instrument in this embodiment, the range can be divided into 3 gears:
First gear: indication range 0 to 200 volts, θ from 0 to 30 degrees, with V_min = 0, θ_min = 0, θ_max = 30;
Second gear: indication range 200 to 400 volts, θ from 30 to 60 degrees, with V_min = 200, θ_min = 30, θ_max = 60;
Third gear: indication range 400 to 600 volts, θ from 60 to 90 degrees, with V_min = 400, θ_min = 60, θ_max = 90.
Since θ = 0.66 was found in the previous step and falls within the first gear interval, the indication is calculated as:
V = 0 + (0.66 - 0)/(30 - 0) = 0.022 volts.
This value agrees well with the actual reading.
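As a self-contained numerical check of this embodiment, the following few lines reproduce the 0.66-degree deflection angle from the four key points and the 0.022-volt first-gear interpolation (the atan2 form of the angle computation is the same implementation choice as in the earlier sketch):

```python
import math

A, B, C, D = (225, 60), (35, 42), (167, 172), (120, 167)
theta = math.degrees(math.atan2(D[1] - C[1], D[0] - C[0])
                     - math.atan2(B[1] - A[1], B[0] - A[0]))
v = 0 + (theta - 0) / (30 - 0)          # first gear: V_min = 0, theta_min = 0, theta_max = 30
print(round(theta, 2), round(v, 3))     # -> 0.66 0.022
```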
The embodiments of the present invention are described in detail above; the principle and implementation of the present invention are explained with specific examples, and the above description of the embodiments is intended only to help in understanding the method of the present invention and its core idea. Meanwhile, a person skilled in the art may, according to the idea of the present invention, vary the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (6)

1. A pointer instrument indicating value reading method based on a CNN neural network is characterized by comprising the following steps:
Step 1: divide the instrument detection and key point extraction network based on the CNN neural network into 3 parts: a P-NET network, an R-NET network and an O-NET network;
Step 2: determine the key point coordinate information in the instrument according to the type of pointer instrument to be detected, including the key points on the 0-scale line or a parallel reference line thereof and the key points on the line where the pointer lies;
Step 3: automatically construct a training data set from various instrument images and train the network models to obtain the P-NET, R-NET and O-NET network models for the instruments;
Step 4: detect the original image of the instrument under inspection with the P-NET, R-NET and O-NET network models to obtain the region position information and key point coordinate information of each instrument in the image;
Step 5: from the key point coordinate information obtained in step 4, calculate the included angle between the two intersecting lines, namely the straight line on which the pointer actually lies and a parallel reference line of the straight line on which the pointer lies when it points to the 0 scale; this included angle is the deflection angle θ of the instrument pointer;
Step 6: according to the range distribution of each instrument, divide the range into gears, each covering a region where the scale is uniformly distributed, and compute the corresponding set of angle intervals; determine from the real-time pointer deflection angle θ which gear interval the pointer falls in, and obtain the real-time instrument indication value V from the instrument indication value calculation formula
V = V_min + (θ - θ_min) / (θ_max - θ_min)
where V_min is the minimum indication of this gear, θ_min is the minimum angle of this gear, and θ_max is the maximum angle of this gear.
2. The method of claim 1, wherein the original image of the instrument under inspection is processed to detect the pointer instruments in the image, including the region where each instrument is located and its type, by the following specific process:
a. scale the original image into a set of images by the image spatial pyramid method;
b. pass the image set through the neural network P-NET, filter the regions by image category and merge them by non-maximum suppression, obtaining the region coordinate information and category information output by this step;
c. in the original image, crop out an image set according to the region coordinate information output by the previous step, pass it through the neural network R-NET, filter the regions by image category and merge them by non-maximum suppression, obtaining the region coordinate information and category information output by this step;
d. in the original image, crop out an image set according to the region coordinates obtained in the previous step, pass it through the neural network O-NET and merge by non-maximum suppression, and then output the final image category, region and key point information.
3. The method according to claim 1 or 2, wherein the instrument key point information includes two key points on the 0-scale line or a parallel reference line thereof, the line connecting these two key points being parallel to the 0-scale horizontal line, and the coordinates of two end points on the line where the pointer lies, the line connecting these two key points coinciding with the pointer; the extraction of the instrument key points comprises the following steps:
in the original image, crop out an image set according to the regions output by the R-NET network, and output the instrument type, the instrument region and the instrument key point coordinates through the neural network O-NET;
filter the regions by category and merge them by non-maximum suppression, removing non-instrument regions and merging the regions of the same instrument into one region; after this step the region coordinates, categories and key point coordinates of the instruments are obtained.
4. The method of claim 1, 2 or 3, wherein the key points on the pointer are a point on the pointer near the center of gyration and a central point of the pointer; and when selecting the key point on scale line 0, if it is difficult to find an obvious feature point for the other point of scale line 0 of the instrument, for example when the rotation center point is occluded, a key point with an obvious feature can be selected on a line parallel to scale line 0 of the instrument, to make key point selection easier.
5. The method of claim 1, wherein the meter pointer deflection angle θ is calculated by:
θ = arctan((y_D - y_C)/(x_D - x_C)) - arctan((y_B - y_A)/(x_B - x_A))
wherein A and B are the key points on the 0-scale parallel reference line, C and D are the key points on the line where the pointer lies, and (x, y) denote their image coordinates in the instrument.
6. The method of claim 1, wherein the precision of the pointer key point coordinates in the method is determined by the size of the feature vector output by the fully connected layer in the O-NET network; if the size of the feature output by that fully connected layer is modified, the corresponding precision changes accordingly.

Priority Applications (1)

CN201911362371.3A (priority and filing date 2019-12-26): Pointer instrument indication value reading method based on CNN neural network

Publications (2)

CN111191562A, published 2020-05-22 (application publication)
CN111191562B, published 2023-04-18 (granted patent)

Family

ID=70709387

Family Applications (1)

CN201911362371.3A (priority and filing date 2019-12-26), granted as CN111191562B: Pointer instrument indication value reading method based on CNN neural network

Country Status (1)

CN: CN111191562B (granted)

Family Cites Families (2)

* Cited by examiner, † Cited by third party

CN105809179B * (priority 2014-12-31, published 2019-10-25), 中国科学院深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences): A kind of Recognition of Reading method and device of pointer instrument
US11093793B2 * (priority 2017-08-29, published 2021-08-17), Vintra, Inc.: Systems and methods for a tailored neural network detector

Also Published As

CN111191562A, published 2020-05-22

Similar Documents

Publication Publication Date Title
CN110659636B (en) Pointer instrument reading identification method based on deep learning
CN105091922B (en) A kind of pointer gauge Recognition of Reading method based on virtual dial plate
CN102521560B (en) Instrument pointer image identification method of high-robustness rod
CN109190473A (en) The application of a kind of " machine vision understanding " in remote monitoriong of electric power
CN109115812A (en) A kind of weld seam egative film defect identification method and system
CN110119680B (en) Automatic error checking system of regulator cubicle wiring based on image recognition
CN108759973A (en) A kind of water level measurement method
CN111368906B (en) Pointer type oil level meter reading identification method based on deep learning
CN109284718B (en) Inspection robot-oriented variable-view-angle multi-instrument simultaneous identification method
CN109993154A (en) The lithium sulfur type instrument intelligent identification Method of substation's simple pointer formula
CN110909738A (en) Automatic reading method of pointer instrument based on key point detection
CN104197900A (en) Meter pointer scale recognizing method for automobile
CN107561736B (en) LCD defect detection method based on Fourier transform and Hough transform
CN106595496A (en) Man-machine interaction part size flexibility vision measurement method
CN114663744A (en) Instrument automatic identification method and system based on machine learning
CN115063579A (en) Train positioning pin looseness detection method based on two-dimensional image and three-dimensional point cloud projection
CN113962951B (en) Training method and device for detecting segmentation model, and target detection method and device
CN103063674B (en) Detection method for copper grade of copper block, and detection system thereof
CN105118069B (en) A kind of robot of complex environment straight-line detection and screening technique and application this method
CN111191562B (en) Pointer instrument indication value reading method based on CNN neural network
Liu et al. Research on surface defect detection based on semantic segmentation
CN117152727A (en) Automatic reading method of pointer instrument for inspection robot
CN111553345A (en) Method for realizing meter pointer reading identification processing based on Mask RCNN and orthogonal linear regression
CN206146375U (en) System for online dimension parameter of large -scale side's rectangular pipe of many specifications detects usefulness
CN115115889A (en) Instrument image analysis method and device

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant