CN109784257B - Transformer thermometer detection and identification method - Google Patents



Publication number
CN109784257B
CN109784257B (application CN201910014109.3A)
Authority
CN
China
Prior art keywords
pointer
target
transformer
image
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910014109.3A
Other languages
Chinese (zh)
Other versions
CN109784257A (en)
Inventor
陈成全
姚书龙
陆子清
闫琛
廖婕
韦佳贝
王弈心
唐志勇
朱兵
潘卫国
陈晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CRSC Research and Design Institute Group Co Ltd
Original Assignee
CRSC Research and Design Institute Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CRSC Research and Design Institute Group Co Ltd filed Critical CRSC Research and Design Institute Group Co Ltd
Priority to CN201910014109.3A priority Critical patent/CN109784257B/en
Publication of CN109784257A publication Critical patent/CN109784257A/en
Application granted granted Critical
Publication of CN109784257B publication Critical patent/CN109784257B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a transformer thermometer detection and identification method comprising the following steps: first, an inspection robot arrives at a designated position and captures a picture; second, the target transformer thermometer area to be detected is coarsely and then accurately positioned, the perceptual hash index, mutual information index, and intersection-over-union of the target candidate areas are calculated, and the candidate areas are screened to obtain a final target image; the target image is then read in and preprocessed according to the color of the pointer; the instrument is then located with a target detector and the dial area segmented, eliminating interference factors outside the dial of the transformer thermometer; the pointer outline is then extracted by a method matching the pointer color; finally, the rotation direction of the pointer and the pointer reading are calculated using the cosine law. The method can detect and identify transformer thermometers under various illumination and posture changes.

Description

Transformer thermometer detection and identification method
Technical Field
The invention relates to the field of power inspection robots, in particular to a method for detecting and identifying a thermometer of a transformer.
Background
The electric power inspection robot must provide basic functions such as autonomous positioning and navigation within a transformer substation, on-site instrument reading identification, and automatic charging. Its core function is detecting and identifying the meter readings of field power devices.
At present, detecting and identifying transformer thermometer readings remains very difficult for power inspection robots. For reasons of cost and history, most transformer thermometers lack the remote transmission function of intelligent instruments, so an inspection robot can only read them through computer vision. Accurately identifying the reading presupposes accurately detecting the position of the thermometer in the visual image. Most transformer thermometers are outdoors, and most conventional detection and identification methods perform poorly when the illumination changes, requiring one set of parameters per illumination condition, while in practice both the outdoor illumination and the shooting angle of the inspection robot vary. A universal detection and identification method is therefore needed to handle transformer thermometer detection and identification under different illumination and posture conditions.
Disclosure of Invention
The invention aims to provide a transformer thermometer detection and identification method optimized for identifying on-site transformer thermometer readings under working conditions with varying illumination, posture, and the like.
The technical solution of the invention is as follows: a transformer thermometer detection and identification method comprises the following steps:
step 1: the inspection robot reaches a designated position to obtain a thermometer picture of the transformer to be detected;
step 2: based on the transformer thermometer picture to be detected, positioning a target transformer thermometer:
carrying out coarse positioning on the target transformer thermometer area by utilizing the Fourier-Mellin transform and phase correlation technology to obtain a coarsely positioned target transformer thermometer area;
accurately positioning the target transformer thermometer region by using a machine learning method, and sending the transformer thermometer picture to be detected into a classifier to obtain a plurality of target candidate regions;
calculating the perceptual hash index, mutual information index and intersection-over-union of the target candidate areas, and screening the target candidate areas to obtain a final target image;
Step 3: reading in the final target image and preprocessing it;
Step 4: positioning the instrument with a target detector and segmenting the dial area of the transformer thermometer in the final target image;
Step 5: extracting the outline of the pointer according to the color of the pointer;
Step 6: calculating the rotation direction of the pointer and the pointer reading.
Further, the step 1 further comprises the steps of,
training a classifier with the transformer thermometer image data set, and selecting, for each inspection point, an image shot there in which the transformer thermometer is centered as the template image.
Further, the step 2 of calculating the intersection-over-union, the mutual information index and the perceptual hash index of the plurality of target candidate regions, and screening the plurality of target candidate regions to obtain a final target image, includes:
respectively solving the intersection ratio of each target candidate region and the temperature table region of the coarse positioning target transformer;
respectively calculating mutual information indexes of each target candidate area image and the template image;
respectively obtaining the perceptual hash index of each target candidate region image and the transformer thermometer region image in the template image;
performing weighting operation on the intersection ratio, mutual information indexes and perceptual hash indexes of each target candidate region to obtain the confidence coefficient of each target candidate region, and taking the target candidate region with the maximum confidence coefficient as an alternative detection result;
if the intersection-over-union of the alternative detection result is less than the set threshold d_iou and (pHash + 1/I(G(X), H(Y))) is greater than the threshold d_a, taking the coarsely positioned target transformer thermometer area as the final target image; otherwise, taking the alternative detection result as the final target image; wherein pHash is the perceptual hash index of the alternative detection result and I(G(X), H(Y)) is its mutual information index.
Further, the confidence coefficient is calculated by the following formula:
Confidence = 1 - (pHash + 1/I(G(X), H(Y))) / (IOU + D)
in the formula, I(G(X), H(Y)) is the mutual information index of the target candidate area, pHash is its perceptual hash index, IOU is the intersection-over-union of the target candidate area and the coarsely positioned target transformer thermometer area, and D is a set constant.
Furthermore, the threshold d_iou ranges from 0.1 to 0.4, and the threshold d_a ranges from 10 to 50.
Further, the preprocessing operation on the final target image in the step 3 includes:
if the pointer is red, converting the final target image into an HSV format;
and if the pointer is not red, graying the final target image.
Further, the extracting pointer profiles in the step 5 includes,
if the pointer is red, the method specifically comprises the following steps:
extracting a red area by using the color continuity of HSV, wherein the value range of H is (0, 10) ∪ (156, 180), the value range of the S channel is (43, 255), and the value range of the V channel is (46, 255);
performing a closing operation;
extracting a pointer outline;
if the pointer is not red, the method specifically comprises the following steps:
carrying out histogram equalization and Gaussian filtering on the image;
binarizing the dial area using the Otsu algorithm, so that the black pointer becomes a white area and the rest of the background becomes black;
and performing an opening operation to extract the pointer outline.
Further, the step 6 includes,
extracting a contour point set of the pointer contour;
finding the two points with the largest distance in the contour point set and drawing a straight line through them;
then finding point pairs on the pointer outline that form line segments perpendicular to that straight line, and among them the point pair with the largest distance;
the point where the line segment formed by this farthest point pair intersects the straight line is the circle center of the transformer thermometer, and a coordinate system is established with the circle center as the origin;
and calculating the rotation direction of the pointer and the pointer reading by the cosine law, based on the vector formed by the two points with the largest distance in the contour point set and any vector on the vertical axis of the coordinate system.
Compared with the prior art, the invention has the following remarkable advantages: (1) the invention integrates the existing positioning information of the robot; thanks to the accuracy of robot navigation and positioning, the positioning error is less than 5 cm, which solves the problem of large variation in target size and angle when the robot position is uncertain; (2) the scale and rotation changes of images shot by the camera are therefore small; on that basis, phase correlation is used for coarse detection and a machine learning method for accurate detection against illumination changes and posture changes of the detected target, solving problems such as targets that are over-bright or over-dark under illumination effects; (3) after the target image is detected, the image is preprocessed according to whether the pointer is red or non-red, and operations such as the closing operation and histogram equalization improve the accuracy of transformer thermometer reading identification, handling reading identification under different illumination and posture conditions and improving the inspection efficiency of the robot; (4) the HSV color space is used, reducing the influence of illumination on reading identification.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for detecting and identifying a transformer thermometer according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a detailed flow of steps 3-5 of a method for detecting and identifying a thermometer of a transformer according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a perceptual hash calculation process according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating the calculation of the cosine theorem of the rotation direction and the reading of the pointer according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the embodiment of the present invention introduces a method for detecting and identifying a transformer thermometer, which includes the following steps:
step 1: the inspection robot reaches a designated position to obtain a thermometer picture of the transformer to be detected;
step 2: based on the transformer thermometer picture to be detected, carrying out rough positioning and accurate positioning on the transformer thermometer area to be detected, and screening a target candidate area to obtain a final target image;
Step 3: reading in the final target image and applying different image preprocessing operations according to the color of the pointer;
Step 4: positioning the target transformer thermometer area with a target detector so that the instrument lies at the center of the picture, and then segmenting the dial area; this eliminates interference factors outside the dial of the transformer thermometer. The central area of the preprocessed picture is segmented directly by coordinates; since it contains the dial, this facilitates pointer extraction.
Step 5: extracting the pointer outline by a preset method according to the color of the pointer;
Step 6: calculating the rotation direction of the pointer and the pointer reading using the cosine law;
further, the step 1 is specifically operated as: and training a classifier by using an instrument image data set of the transformer thermometer, selecting an image in the middle of the transformer thermometer shot at the inspection point as a template image for each inspection point, and acquiring an instrument picture to be detected when the inspection robot reaches the specified inspection point. Preferably, the classifier is an Adaboost classifier.
Further, the specific operation of step 2 is as follows:
step 2.1: roughly positioning the target transformer thermometer area in the transformer thermometer picture to be detected using the Fourier-Mellin transform and phase correlation technology;
step 2.2: accurately positioning a target transformer thermometer region by using a machine learning method, and sending a transformer thermometer picture to be detected into a trained classifier to obtain a plurality of target candidate regions;
step 2.3: calculating the three parameter indexes of perceptual hash, mutual information and intersection-over-union for the candidate areas, and screening the target candidate areas to obtain the final target.
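The phase correlation underlying step 2.1 can be sketched in numpy. This is a minimal translation-only sketch: the Fourier-Mellin variant described in the patent additionally resamples the magnitude spectrum into log-polar coordinates to recover rotation and scale, and `cv2.phaseCorrelate` offers a ready-made equivalent; the function name and array sizes here are illustrative assumptions.

```python
import numpy as np

def phase_correlate(ref, cur):
    """Estimate the cyclic (dy, dx) shift of `cur` relative to `ref`
    from the normalized cross-power spectrum (phase correlation)."""
    f_ref = np.fft.fft2(ref)
    f_cur = np.fft.fft2(cur)
    cross = f_cur * np.conj(f_ref)
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real         # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx
```

A purely translated copy of an image produces a sharp correlation peak at the translation; the peak height also serves as a match score for coarse positioning.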
Further, step 2.3 comprises the steps of:
step 2.3.1: calculating the intersection-over-union parameter IOU of each target candidate region with the coarsely positioned target transformer thermometer region;
step 2.3.2: respectively carrying out perceptual hash calculation on each target candidate region image and a transformer thermometer region image in the template image to obtain a perceptual hash index;
step 2.3.3: respectively calculating mutual information indexes of each target candidate area image and the template image;
step 2.3.4: weighting three indexes of the cross-over ratio IOU, the mutual information index and the perceptual hash index of each target candidate region to obtain the confidence coefficient of each target candidate region, and taking the target candidate region with the maximum confidence coefficient as a candidate detection result;
step 2.3.5: if the IOU of the alternative detection result is less than the set threshold d_iou and (pHash + 1/I(G(X), H(Y))) is greater than the threshold d_a, taking the coarsely positioned target transformer thermometer area determined in step 2 as the final target image; otherwise taking the alternative detection result as the final target image; wherein pHash is the perceptual hash index of the alternative detection result and I(G(X), H(Y)) is its mutual information index.
Preferably, in step 2.3.1, the intersection-over-union IOU of each target candidate region and the coarse positioning target instrument region is calculated as:
IOU = area(C ∩ n_i) / area(C ∪ n_i)
where C is the coarse positioning target instrument area and n_i is the i-th target candidate region.
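For axis-aligned rectangles the IOU of step 2.3.1 reduces to a few lines; the `(x, y, w, h)` box convention below is an assumption, not specified in the patent.

```python
def iou(box_c, box_n):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    xc, yc, wc, hc = box_c
    xn, yn, wn, hn = box_n
    ix = max(0, min(xc + wc, xn + wn) - max(xc, xn))   # overlap width
    iy = max(0, min(yc + hc, yn + hn) - max(yc, yn))   # overlap height
    inter = ix * iy
    union = wc * hc + wn * hn - inter
    return inter / union if union else 0.0
```

Identical boxes give 1.0, disjoint boxes give 0.0, and partial overlaps fall strictly in between.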
Preferably, the specific method for performing perceptual hash calculation on each target candidate area image and the meter area image in the template image in step 2.3.2 is as follows:
a1, preprocessing: reducing the size of the image and graying it;
a2, DCT transform: performing DCT (discrete cosine transformation) on the preprocessed image to obtain a matrix F (u, v);
a3, reducing the DCT matrix: as shown in FIG. 3, the features of the whole image are concentrated in the low-frequency region at the upper left corner, so the 8 x 8 block at the upper left corner of the matrix is extracted as the feature matrix of the image;
a4, binarizing the matrix: as in FIG. 3, the mean of the matrix is computed; elements larger than the mean are set to 1 and elements smaller than the mean are set to 0;
a5, generating a hash value: arranging the binarized matrix into a 64-bit sequence, wherein the sequence is a hash sequence of an input image;
a6, calculating the Hamming distance of the feature vectors of the target candidate area image and the template image as a perceptual hash index;
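Steps a2-a6 can be sketched with a hand-rolled orthonormal DCT-II in numpy (`scipy.fft.dctn` or `cv2.dct` would do the same job); the 32x32 input size and function names are illustrative assumptions.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II built from a cosine basis matrix (step a2)."""
    n = block.shape[0]
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    basis = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    basis[0] *= 1 / np.sqrt(2)
    basis *= np.sqrt(2 / n)
    return basis @ block @ basis.T

def phash(gray32):
    """64-bit perceptual hash of a square grayscale image (steps a2-a5)."""
    low = dct2(gray32.astype(float))[:8, :8]    # a3: top-left low-frequency block
    bits = (low > low.mean()).astype(np.uint8)  # a4: binarize around the mean
    return bits.ravel()                         # a5: 64-bit hash sequence

def hamming(h1, h2):
    """a6: perceptual hash index = Hamming distance of the two hash sequences."""
    return int(np.count_nonzero(h1 != h2))
```

Identical inputs hash to Hamming distance 0; robustly similar images land a few bits apart, which is what makes the index usable for candidate screening.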
preferably, the calculation formula of the mutual information index of each target candidate area image and the template image in step 2.3.3 is as follows:
P(x) = G(X = x) / (W * H)
P(y) = H(Y = y) / (W * H)
P(x, y) = N(X = x, Y = y) / (W * H)
I(G(X), H(Y)) = Σ_x Σ_y P(x, y) log( P(x, y) / (P(x) P(y)) )
where G(X) and H(Y) are the gray-level pixel counts of the template image and the candidate image respectively, N(X, Y) is their joint gray-level count, and W and H are the width and height of the candidate area image.
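A histogram-based estimate of this mutual information index might look as follows; the bin count and the assumption of equal-sized 8-bit grayscale inputs are choices of this sketch, not stated in the patent.

```python
import numpy as np

def mutual_information(template, candidate, bins=256):
    """Mutual information of the gray-level distributions of two
    equal-sized grayscale images with values in 0..255."""
    joint, _, _ = np.histogram2d(template.ravel(), candidate.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    pxy = joint / joint.sum()                  # joint distribution P(x, y)
    px = pxy.sum(axis=1, keepdims=True)        # marginal of the template
    py = pxy.sum(axis=0, keepdims=True)        # marginal of the candidate
    nz = pxy > 0                               # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An image compared against itself attains the maximum (its own entropy), while an unrelated candidate scores lower, so larger values indicate a better match to the template.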
Preferably, the confidence level in step 2.3.4 is calculated by the formula:
Confidence = 1 - (pHash + 1/I(G(X), H(Y))) / (IOU + D)
in the formula, I(G(X), H(Y)) is the mutual information index, pHash is the perceptual hash index, IOU is the intersection-over-union index, and D is a set constant.
Preferably, the threshold d_iou ranges from 0.1 to 0.4, and the threshold d_a ranges from 10 to 50.
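Putting steps 2.3.4 and 2.3.5 together, the screening could be sketched as below. The defaults `d_iou=0.25` and `d_a=30` merely fall inside the stated ranges, and `D=1.0` is an assumed value for the unspecified constant; candidates are `(IOU, pHash, mutual information)` triples and the string `'coarse'` stands in for falling back to the coarse-positioning region.

```python
def screen_candidates(candidates, d_iou=0.25, d_a=30.0, D=1.0):
    """Pick the final target from (iou, phash, mi) triples.
    Returns 'coarse' when the best candidate fails the thresholds
    and the coarse-positioning region should be used instead."""
    def confidence(c):
        iou_v, phash_v, mi_v = c
        # step 2.3.4: Confidence = 1 - (pHash + 1/MI) / (IOU + D)
        return 1 - (phash_v + 1 / mi_v) / (iou_v + D)

    best = max(candidates, key=confidence)
    iou_v, phash_v, mi_v = best
    # step 2.3.5: reject the candidate if it barely overlaps the coarse
    # region and its appearance indices disagree with the template
    if iou_v < d_iou and (phash_v + 1 / mi_v) > d_a:
        return 'coarse'
    return best
```

Note that a low Hamming distance and a high mutual information both push the confidence up, while a low IOU pulls it down, matching the weighting in the formula above.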
Further, as shown in fig. 2, a specific flow of steps 3 to 5 is introduced. In step 3, when the robot reaches the designated patrol point, a transformer thermometer picture is taken; pointers are divided into two types, red and non-red, each with its own processing method. Because black is easily affected by illumination and blurs in the HSV color space, while red is not, the picture is converted to HSV format for a red pointer and grayscaled for a non-red pointer.
Further, in fig. 2, the pointer is extracted by the method matching its color. In the HSV color space the color distribution is continuous, as shown in table 1, so for a red pointer the pointer portion is extracted using the fact that its color is red; for a non-red pointer the picture is grayscaled.
TABLE 1 color value ranges under HSV color space
red: H in (0, 10) ∪ (156, 180); S in (43, 255); V in (46, 255) (OpenCV convention, H in 0-180)
Further, the step 5 specifically operates as follows:
for a red pointer:
step 5.1.1: since no other part of the transformer thermometer has a red area, the red area is extracted using the color continuity of HSV, with H in (0, 10) ∪ (156, 180), the S channel in (43, 255), and the V channel in (46, 255);
step 5.1.2: performing a closing operation to reduce the interference of noise points on pointer extraction and to smooth the outline.
Step 5.1.3: and extracting a red area with the largest area, namely the pointer outline.
For non-red pointers:
step 5.2.1: histogram equalization and Gaussian filtering are carried out on the image, and illumination interference is reduced.
Step 5.2.2: because the difference between the pointer of the transformer thermometer and the color of the dial is obvious, the dial area is binarized by utilizing an Otsu algorithm, the black pointer is changed into a white area, and other backgrounds are changed into black.
Step 5.2.3: and performing an opening operation to extract the pointer outline.
Further, as shown in fig. 4, the step 6 is specifically operated to:
step 6.1: taking the pointer region obtained in step 5, extracting the contour point set of the region, finding the two points farthest apart in the set and drawing a straight line a through them; then finding point pairs on the contour such that the line b each pair forms is perpendicular to line a, and among these pairs finding the one farthest apart.
Step 6.2: and the point where the straight lines a and b intersect is the circle center O of the transformer thermometer, and a coordinate system is established by taking the O as an origin.
Step 6.3: because the pointer is integrally in a triangular shape, the vertex A and the circle center O of the pointer are farthest away, the farthest point in the contour point set is found as the vertex A according to the characteristics, and the rotation direction and the pointing reading of the pointer are calculated by using the vector AO and any vector BC on a vertical axis through the cosine law:
AO·BC=||AO||||BC||cosθ
Θ=antcos(AO·BC/||AO||*||BC||)
wherein AO and BC are vectors, AO & BC is the inner product of the vectors, and theta is the included angle of the two vectors.
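The cosine-law angle of step 6.3 is a few lines of numpy; the mapping from the angle to a temperature depends on the dial's scale range and is omitted here, and the function name is an assumption.

```python
import numpy as np

def pointer_angle(a, o, b, c):
    """Angle in degrees between vector AO (pointer tip to circle center)
    and reference vector BC, via theta = arccos(AO.BC / (|AO| * |BC|))."""
    ao = np.asarray(o, float) - np.asarray(a, float)
    bc = np.asarray(c, float) - np.asarray(b, float)
    cos_t = np.dot(ao, bc) / (np.linalg.norm(ao) * np.linalg.norm(bc))
    # clip guards against tiny floating-point overshoot outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))
```

The sign of the cross product of AO and BC can then disambiguate which side of the vertical axis the pointer lies on, since arccos alone returns angles in [0, 180].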
Compared with the prior art, the invention has the following remarkable advantages: (1) the invention integrates the existing positioning information of the robot; thanks to the accuracy of robot navigation and positioning, the positioning error is less than 5 cm, which solves the problem of large variation in target size and angle when the robot position is uncertain; (2) the scale and rotation changes of images shot by the camera are therefore small; on that basis, phase correlation is used for coarse detection and a machine learning method for accurate detection against illumination changes and posture changes of the detected target, solving problems such as targets that are over-bright or over-dark under illumination effects; (3) after the target image is detected, the image is preprocessed according to whether the pointer is red or non-red, and operations such as the closing operation and histogram equalization improve the accuracy of transformer thermometer reading identification, handling reading identification under different illumination and posture conditions and improving the inspection efficiency of the robot; (4) the HSV color space is used, reducing the influence of illumination on reading identification.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (6)

1. A transformer thermometer detection and identification method is characterized by comprising the following steps,
step 1: fusing the existing positioning information of the robot, and when the inspection robot reaches a specified position, acquiring a thermometer picture of the transformer to be detected, wherein the positioning error is less than 5 cm; training a classifier by using a transformer thermometer image data set, and selecting an image in the middle of the transformer thermometer shot at each inspection point as a template image;
step 2: screening a final target image of a target transformer thermometer area based on the to-be-detected transformer thermometer picture:
carrying out coarse positioning on the target transformer thermometer area by utilizing the Fourier-Mellin transform and phase correlation technology to obtain a coarsely positioned target transformer thermometer area;
accurately positioning the target transformer thermometer region by using a machine learning method, and sending the transformer thermometer picture to be detected into a classifier to obtain a plurality of target candidate regions;
calculating the perceptual hash index, mutual information index and intersection-over-union of the target candidate areas, and screening the target candidate areas to obtain a final target image;
Step 3: reading in the final target image and preprocessing it;
Step 4: positioning the instrument with a target detector and segmenting the dial area of the transformer thermometer in the final target image;
Step 5: extracting the outline of the pointer according to the color of the pointer;
Step 6: calculating the rotation direction of the pointer and the pointer reading in the following manner: the vertex A of the pointer is farthest from the circle center O, so the farthest point in the contour point set is found as the vertex A, and the rotation direction and the pointer reading are calculated from the vector AO and any vector BC on the vertical axis by the cosine law:
AO · BC = ||AO|| ||BC|| cos θ
θ = arccos(AO · BC / (||AO|| · ||BC||))
wherein AO and BC are vectors, AO · BC is their inner product, and θ is the included angle of the two vectors;
calculating the intersection-over-union, the mutual information index and the perceptual hash index of the target candidate regions in the step 2, and screening the target candidate regions to obtain a final target image, comprises:
respectively solving the intersection ratio of each target candidate region and the temperature table region of the coarse positioning target transformer;
respectively calculating mutual information indexes of each target candidate area image and the template image;
respectively obtaining the perceptual hash index of each target candidate region image and the transformer thermometer region image in the template image;
performing weighting operation on the intersection ratio, mutual information indexes and perceptual hash indexes of each target candidate region to obtain the confidence coefficient of each target candidate region, and taking the target candidate region with the maximum confidence coefficient as an alternative detection result;
if the intersection-over-union of the alternative detection result is less than the set threshold d_iou and (pHash + 1/I(G(X), H(Y))) is greater than the threshold d_a, taking the coarsely positioned target transformer thermometer area as the final target image; otherwise, taking the alternative detection result as the final target image; wherein pHash is the perceptual hash index of the alternative detection result and I(G(X), H(Y)) is its mutual information index.
2. The transformer thermometer detecting and identifying method as recited in claim 1, wherein said confidence level is calculated by the formula:
Confidence = 1 - (pHash + 1/I(G(X), H(Y))) / (IOU + D)
in the formula, I(G(X), H(Y)) is the mutual information index of the target candidate area, pHash is its perceptual hash index, IOU is the intersection-over-union of the target candidate area and the coarsely positioned target transformer thermometer area, and D is a set constant.
3. The transformer thermometer detecting and identifying method as recited in claim 2, wherein the threshold d_iou ranges from 0.1 to 0.4, and the threshold d_a ranges from 10 to 50.
4. The transformer thermometer detecting and identifying method as recited in claim 1, wherein said preprocessing operation on said final target image in said step 3 comprises:
if the pointer is red, converting the final target image into an HSV format;
and if the pointer is not red, graying the final target image.
5. The transformer thermometer detecting and identifying method as recited in claim 1, wherein said extracting pointer profile in step 5 includes, if the pointer is red, the following steps:
extracting a red area by using the color continuity of HSV, wherein the value range of H is (0, 10) ∪ (156, 180), the value range of the S channel is (43, 255), and the value range of the V channel is (46, 255);
performing a closing operation;
extracting a pointer outline;
if the pointer is not red, the method specifically comprises the following steps:
carrying out histogram equalization and Gaussian filtering on the image;
binarizing the dial area using the Otsu algorithm, so that the black pointer becomes a white area and the rest of the background becomes black;
and performing an opening operation to extract the pointer outline.
6. The transformer thermometer detecting and identifying method of claim 1, wherein said step 6 comprises,
extracting a contour point set of the pointer contour;
finding the two points with the greatest distance in the contour point set and forming a straight line through them;
searching the pointer contour for point pairs whose connecting segments are perpendicular to that straight line, and selecting the pair with the greatest separation;
the point where the segment formed by this farthest pair intersects the straight line is the circle center of the transformer thermometer, and a coordinate system is established with the circle center as the origin;
and calculating the rotation direction of the pointer and the pointer reading by the law of cosines, using the vector formed by the two farthest contour points and a vector along the vertical axis of the coordinate system.
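A numeric sketch of the final angle-to-reading step: the law-of-cosines angle between the pointer vector and the vertical axis reduces to the arccos of a normalized dot product. The scale limits, sweep angle, and start angle below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def pointer_reading(tip, center, scale_min=0.0, scale_max=100.0,
                    sweep_deg=270.0, start_deg=-135.0):
    """Angle of the pointer vector vs. the vertical axis -> dial reading."""
    v = np.asarray(tip, float) - np.asarray(center, float)
    up = np.array([0.0, -1.0])          # image y grows downward, so "up" is -y
    # Law of cosines: cos(theta) = v . up / (|v| * |up|)
    cos_t = np.dot(v, up) / (np.linalg.norm(v) * np.linalg.norm(up))
    theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    if v[0] < 0:                        # left of vertical: negative rotation
        theta = -theta
    frac = (theta - start_deg) / sweep_deg
    return scale_min + frac * (scale_max - scale_min)

mid = pointer_reading(tip=(50.0, 10.0), center=(50.0, 50.0))  # straight up
```

With a pointer aimed straight up and a symmetric 270° sweep, the reading lands at mid-scale, which is a quick sanity check for the sign conventions.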
CN201910014109.3A 2019-01-08 2019-01-08 Transformer thermometer detection and identification method Active CN109784257B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910014109.3A CN109784257B (en) 2019-01-08 2019-01-08 Transformer thermometer detection and identification method

Publications (2)

Publication Number Publication Date
CN109784257A CN109784257A (en) 2019-05-21
CN109784257B true CN109784257B (en) 2021-10-12

Family

ID=66499158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910014109.3A Active CN109784257B (en) 2019-01-08 2019-01-08 Transformer thermometer detection and identification method

Country Status (1)

Country Link
CN (1) CN109784257B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311580A (en) * 2020-02-19 2020-06-19 中冶赛迪重庆信息技术有限公司 Steam drum liquid level abnormity identification method and system based on image identification
CN113780263B (en) * 2021-09-03 2023-06-16 华南师范大学 Method and device for positioning and identifying reading of pressure alarm instrument

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104751187A (en) * 2015-04-14 2015-07-01 山西科达自控股份有限公司 Automatic meter-reading image recognition method
CN105868776A (en) * 2016-03-25 2016-08-17 中国科学院自动化研究所 Transformer equipment recognition method and device based on image processing technology
CN108491838A (en) * 2018-03-08 2018-09-04 南京邮电大学 Pointer-type gauges registration read method based on SIFT and HOUGH
CN108764134A (en) * 2018-05-28 2018-11-06 江苏迪伦智能科技有限公司 A kind of automatic positioning of polymorphic type instrument and recognition methods suitable for crusing robot
CN108764257A (en) * 2018-05-23 2018-11-06 郑州金惠计算机系统工程有限公司 A kind of pointer instrument recognition methods of various visual angles
CN109447062A (en) * 2018-09-29 2019-03-08 南京理工大学 Pointer-type gauges recognition methods based on crusing robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102567958B1 (en) * 2016-11-10 2023-08-17 삼성디스플레이 주식회사 Display apparatus, controlling method thereof, and terminal

Also Published As

Publication number Publication date
CN109784257A (en) 2019-05-21

Similar Documents

Publication Publication Date Title
CN112949564B (en) Pointer type instrument automatic reading method based on deep learning
CN112818988B (en) Automatic identification reading method and system for pointer instrument
CN107729853B (en) Automatic identification method suitable for narrow-scale pointer instrument of transformer substation
CN112257676A (en) Pointer instrument reading method and system and inspection robot
CN109284718B (en) Inspection robot-oriented variable-view-angle multi-instrument simultaneous identification method
CN109447062A (en) Pointer-type gauges recognition methods based on crusing robot
CN111563896B (en) Image processing method for detecting abnormality of overhead line system
CN103729631A (en) Vision-based connector surface feature automatically-identifying method
CN104197900A (en) Meter pointer scale recognizing method for automobile
CN109993154A (en) The lithium sulfur type instrument intelligent identification Method of substation's simple pointer formula
CN108550165A (en) A kind of image matching method based on local invariant feature
CN110659637A (en) Electric energy meter number and label automatic identification method combining deep neural network and SIFT features
CN109784257B (en) Transformer thermometer detection and identification method
CN111368906A (en) Pointer type oil level indicator reading identification method based on deep learning
CN116109915B (en) Intelligent recognition method for container door state
CN111476246A (en) Robust and efficient intelligent reading method for pointer instrument applied to complex environment
CN116188756A (en) Instrument angle correction and indication recognition method based on deep learning
CN109389165A (en) Oil level gauge for transformer recognition methods based on crusing robot
CN110232703B (en) Moving object recognition device and method based on color and texture information
CN113947714A (en) Multi-mode collaborative optimization method and system for video monitoring and remote sensing
CN109858474B (en) Detection and identification method for transformer oil surface temperature controller
Shuo et al. Digital recognition of electric meter with deep learning
Singh et al. Vidaq: A framework for monitoring human machine interfaces
CN116612461A (en) Target detection-based pointer instrument whole-process automatic reading method
CN116188755A (en) Instrument angle correction and reading recognition device based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant