CN109360289B - Power meter detection method fusing inspection robot positioning information - Google Patents
- Publication number
- CN109360289B (application CN201811148567.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- instrument
- positioning
- inspection robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C1/00—Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
- G07C1/20—Checking timed patrols, e.g. of watchman
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
- G06V10/431—Frequency domain transformation; Autocorrelation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
Abstract
The invention discloses a power meter detection method fusing inspection robot positioning information. The method comprises four main steps: (1) the inspection robot reaches a specified position and captures a picture; (2) the target instrument region is coarsely positioned using Fourier-Mellin and phase-correlation techniques combined with the positioning information of the inspection robot; (3) the target instrument region is accurately positioned using machine learning, yielding several target candidate regions; (4) fusing the positioning information of the inspection robot, multi-feature fusion parameters such as IoU, mutual information, and perceptual hash are calculated and used to screen the candidate regions to obtain the final target. By using machine learning, the invention can detect instruments under various illumination and posture changes.
Description
Technical Field
The invention relates to a target detection method, in particular to a power meter detection method fusing inspection robot positioning information.
Background
The electric power inspection robot must provide basic functions such as autonomous positioning and navigation in a substation, on-site instrument reading recognition, and automatic charging. Its core function is to detect and read the instruments of field power equipment, such as discharge counters, oil level meters, voltmeters, and thermometers. These are mostly mechanical instruments, so the robot must read them with a vision sensor. Accurately reading an instrument presupposes accurately detecting its position in the visual image. Most instruments are outdoors, and most current methods rely on traditional image-processing techniques, which perform poorly when the illumination changes: typically one set of parameters works for only one illumination condition. A more general detection and recognition method is therefore needed to handle instrument detection under different illumination and posture conditions.
Disclosure of Invention
The invention aims to provide a power meter detection method fusing inspection robot positioning information, solving the problems of existing meter detection techniques: detection becomes inaccurate when the robot's position is uncertain, when the target's scale and angle vary widely, and when the target is strongly affected by illumination.
The technical scheme for realizing the purpose of the invention is as follows: a power meter detection method fusing inspection robot positioning information comprises the following specific steps:
step 1, training a classifier with an instrument image data set; for each inspection point, selecting an image with the instrument centered, shot at that point, as the template image; and having the inspection robot reach the designated inspection point to acquire an image of the instrument to be detected;
step 2, coarsely positioning the target instrument region in the image to be detected using the Fourier-Mellin transform and phase-correlation technique;
step 3, accurately positioning the target instrument region with a machine learning method, feeding the image to be detected into the trained classifier to obtain several target candidate regions;
step 4, calculating three parameter indexes (perceptual hash, mutual information, and intersection-over-union) of the candidate regions, and screening the target candidate regions to obtain the final target.
Preferably, the specific steps of step 4, calculating the three parameter indexes (perceptual hash, mutual information, and intersection-over-union) of the candidate regions and screening the target candidate regions to obtain the final target, are as follows:
step 4-1, computing the intersection-over-union parameter IOU between each target candidate region and the coarse-positioning target instrument region;
4-2, respectively carrying out perceptual hash calculation on each target candidate area image and the instrument area image in the template image to obtain a perceptual hash index;
4-3, respectively calculating mutual information indexes of each target candidate area image and the template image;
step 4-4, weighting the intersection-over-union IOU, mutual information index, and perceptual hash index of each target candidate region to obtain its confidence, and taking the target candidate region with the maximum confidence as the candidate detection result;
step 4-5, if the IOU of the candidate detection result is less than the set threshold threshold_IOU and (pHash + 1/I(G(X), H(Y))) is greater than the corresponding set threshold, taking the coarse-positioning target instrument region determined in step 2 as the final target; otherwise taking the candidate detection result as the final target, where pHash is the perceptual hash index of the candidate detection result and I(G(X), H(Y)) is the mutual information index.
Preferably, step 4-1 computes the intersection-over-union parameter IOU between each target candidate region and the coarse-positioning target instrument region as IOU = area(C ∩ n_i) / area(C ∪ n_i), where C is the coarse-positioning target instrument region and n_i is the target candidate region.
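The intersection-over-union of step 4-1 can be sketched minimally as follows (the function name and the (x1, y1, x2, y2) box representation are illustrative choices, not from the patent):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Overlap rectangle: max of the left/top edges, min of the right/bottom edges.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

For example, the boxes (0, 0, 2, 2) and (1, 1, 3, 3) overlap in a unit square, giving IOU = 1/7 ≈ 0.143.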
Preferably, the specific method for performing perceptual hash calculation on each target candidate area image and the instrument area image in the template image in step 4-2 is as follows:
the target candidate region image and the template image are scaled to the same size and cosine-transformed; a low-frequency region at the upper-left corner of the transformed image is selected; the direct-current component at coordinate (0,0) is removed, giving an N-dimensional feature vector; and the Hamming distance between the feature vectors of the target candidate region image and the template image is computed as the perceptual hash index.
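The perceptual hash step can be sketched as below. The patent specifies scaling, cosine transform, the upper-left low-frequency block, DC removal, and a Hamming distance; the 32 × 32 and 8 × 8 sizes follow the examples later in this document, while binarizing the coefficients against their median is a common pHash convention the patent does not spell out, so treat it as an assumption:

```python
import numpy as np

def dct2(img):
    """Orthonormal 2-D DCT-II of a square array via an explicit cosine basis matrix."""
    n = img.shape[0]
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] /= np.sqrt(2)
    basis *= np.sqrt(2 / n)
    return basis @ img @ basis.T

def phash(gray32):
    """32x32 grayscale patch -> 63-bit vector (8x8 low-frequency block, DC dropped)."""
    coeffs = dct2(gray32.astype(float))[:8, :8].ravel()[1:]  # [1:] removes the (0,0) DC term
    return coeffs > np.median(coeffs)  # median binarization: an assumption, see lead-in

def hamming(h1, h2):
    """Hamming distance between two bit vectors: the perceptual hash index."""
    return int(np.count_nonzero(h1 != h2))
```

Identical patches give distance 0; the larger the distance, the less similar the candidate is to the template.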
Preferably, the calculation formula of the mutual information index of each target candidate region image and the template image in step 4-3 is as follows:
where G(X) and H(Y) are the numbers of grayscale pixels of the template image and the candidate image, respectively, and W and H are the width and height of the candidate region image, respectively.
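The patent's own mutual-information formulas are not reproduced in this text (they were figures), so the following standard histogram-based mutual-information computation between two equal-size grayscale images is offered only as a stand-in sketch, not the patent's exact method:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information (in bits) between two equal-size
    grayscale images; a stand-in for the patent's unshown formulas."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()              # joint gray-level distribution
    px = pxy.sum(axis=1, keepdims=True)    # marginal of the first image
    py = pxy.sum(axis=0, keepdims=True)    # marginal of the second image
    nz = pxy > 0                           # skip zero cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

A higher value means the candidate region's gray-level content predicts the template's more strongly, i.e. the two images are more similar.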
Preferably, the calculation formula of confidence level in step 4-4 is:
Confidence = 1 - (pHash + 1/I(G(X), H(Y))) / (IOU + D)
where I(G(X), H(Y)) is the mutual information index, pHash is the perceptual hash index, IOU is the intersection-over-union index, and D is a set constant.
Preferably, the IOU threshold threshold_IOU ranges from 0.1 to 0.4, and the similarity threshold ranges from 10 to 50.
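The confidence weighting and the step 4-5 fallback rule can be sketched as follows. The constant D, the threshold parameter names, and the default values 0.3 and 30.0 are illustrative placeholders; the patent only gives the ranges 0.1-0.4 and 10-50:

```python
def confidence(phash_dist, mi, iou_val, d=1.0):
    """Confidence = 1 - (pHash + 1/I) / (IOU + D); d is the set constant D
    (its value is not specified in the patent, 1.0 is a placeholder)."""
    return 1.0 - (phash_dist + 1.0 / mi) / (iou_val + d)

def select_final(candidate, coarse_region, phash_dist, mi, iou_val,
                 threshold_iou=0.3, threshold_sim=30.0):
    """Step 4-5: fall back to the coarse-positioning region when the best
    candidate both overlaps it poorly AND looks dissimilar to the template."""
    if iou_val < threshold_iou and (phash_dist + 1.0 / mi) > threshold_sim:
        return coarse_region
    return candidate
```

The fallback reflects the method's design: the coarse estimate from robot positioning is trusted whenever the classifier's best candidate disagrees with both the positioning prior and the template appearance.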
Preferably, the classifier in step 1 is an Adaboost classifier.
Compared with the prior art, the invention has the following notable advantages: 1) on top of classifier screening, the method builds on robot positioning and navigation, works in the image frequency domain, performs phase-correlation computation, and uses the image-displacement index to greatly reduce missed and false detections; 2) the invention fuses the robot's positioning information: because the existing positioning is highly repeatable (e.g., error below 5 cm), changes in scale and rotation are kept small, and coarse detection by phase correlation on this basis can find essentially all meters; 3) using machine learning, the invention can detect instruments under various illumination and posture changes.
Drawings
FIG. 1 is a schematic view of the overall process of the present invention.
Fig. 2 is a diagram of training data.
FIG. 3 is a schematic diagram of a captured template image.
FIG. 4 is a graph showing the results of example 1.
FIG. 5 is a graph showing the results of example 2.
FIG. 6 is a schematic flow chart of the present invention.
Detailed Description
As shown in fig. 1 and 6, a method for detecting a power meter by fusing inspection robot positioning information includes the following steps:
step 1, training a classifier with an instrument image data set; for each inspection point, selecting an image with the instrument centered, shot at that point, as the template image; and having the inspection robot reach the designated inspection point to acquire an image of the instrument to be detected;
step 2, coarsely positioning the target instrument region in the image to be detected using the Fourier-Mellin transform and phase-correlation technique;
step 3, accurately positioning the target instrument region with a machine learning method, feeding the image to be detected into the trained classifier to obtain several target candidate regions;
step 4, calculating three parameter indexes (perceptual hash, mutual information, and intersection-over-union) of the candidate regions, and screening the target candidate regions to obtain the final target.
In a further embodiment, the specific steps of step 4, calculating the three parameter indexes (perceptual hash, mutual information, and intersection-over-union) of the candidate regions and screening the target candidate regions to obtain the final target, are as follows:
step 4-1, computing the intersection-over-union parameter IOU between each target candidate region and the coarse-positioning target instrument region;
4-2, respectively carrying out perceptual hash calculation on each target candidate area image and the instrument area image in the template image to obtain a perceptual hash index;
4-3, respectively calculating mutual information indexes of each target candidate area image and the template image;
step 4-4, weighting the intersection-over-union IOU, mutual information index, and perceptual hash index of each target candidate region to obtain its confidence, and taking the target candidate region with the maximum confidence as the candidate detection result;
step 4-5, if the IOU of the candidate detection result is less than the set threshold threshold_IOU and (pHash + 1/I(G(X), H(Y))) is greater than the corresponding set threshold, taking the coarse-positioning target instrument region determined in step 2 as the final target; otherwise taking the candidate detection result as the final target, where pHash is the perceptual hash index of the candidate detection result and I(G(X), H(Y)) is the mutual information index.
In a further embodiment, step 4-1 computes the intersection-over-union parameter IOU between each target candidate region and the coarse-positioning target instrument region as IOU = area(C ∩ n_i) / area(C ∪ n_i), where C is the coarse-positioning target instrument region and n_i is the target candidate region.
In a further embodiment, the specific method for performing perceptual hash calculation on each target candidate area image and the instrument area image in the template image in step 4-2 is as follows:
the target candidate region image and the template image are scaled to the same size and cosine-transformed; a low-frequency region at the upper-left corner of the transformed image is selected; the direct-current component at coordinate (0,0) is removed, giving an N-dimensional feature vector; and the Hamming distance between the feature vectors of the target candidate region image and the template image is computed as the perceptual hash index.
In a further embodiment, the calculation formula of the mutual information index of each target candidate region image and the template image in step 4-3 is as follows:
where G(X) and H(Y) are the numbers of grayscale pixels of the template image and the candidate image, respectively, and W and H are the width and height of the candidate region image, respectively.
In a further embodiment, the confidence level in step 4-4 is calculated as:
Confidence = 1 - (pHash + 1/I(G(X), H(Y))) / (IOU + D)
where I(G(X), H(Y)) is the mutual information index, pHash is the perceptual hash index, IOU is the intersection-over-union index, and D is a set constant.
In a further embodiment, the IOU threshold threshold_IOU ranges from 0.1 to 0.4, and the similarity threshold ranges from 10 to 50.
In a further embodiment, the classifier in step 1 is an Adaboost classifier.
The invention addresses the defects of existing instrument detection technology: 1) when the robot's position is uncertain, the target's scale and angle vary greatly; 2) the target is strongly affected by illumination, e.g., too bright or too dark. To solve these two problems, a scheme combining two detection methods is adopted: 1) the robot's existing positioning information is used; because the navigation positioning is accurate (error below 5 cm), changes in scale, rotation, and the like of the camera image are small, and phase correlation is used for coarse detection on this basis; 2) for illumination changes and posture changes of the detection target, a machine learning method is used for accurate detection.
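The coarse-detection idea relies on phase correlation: because the positioning error is small, the current image is nearly a small translation of the template, and the normalized cross-power spectrum recovers that translation. Below is a minimal translation-only sketch; the patent's Fourier-Mellin formulation, which additionally handles scale and rotation, is omitted here:

```python
import numpy as np

def phase_correlation_shift(ref, cur):
    """Estimate the integer (dy, dx) translation that maps ref onto cur
    from the phase of the normalized cross-power spectrum."""
    f_ref, f_cur = np.fft.fft2(ref), np.fft.fft2(cur)
    cross = np.conj(f_ref) * f_cur
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real         # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # fold shifts past the half-period into negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

A large residual shift or a weak correlation peak signals that the coarse estimate is unreliable, which is how a displacement index can suppress missed and false detections.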
The present invention will be further described with reference to the following examples.
Example 1
A power meter detection method fusing inspection robot positioning information comprises the following specific steps:
step 1-1, collecting pictures of field inspection points gathered by the inspection robot, cropping power-meter images from them, building a 48 × 48 grayscale image data set grouped by meter type, and feeding the data set to the classifier for training, as shown in FIG. 2;
step 1-2, selecting an image centered on the inspection instrument from the pictures shot from each inspection point, and taking the image as a detected template image as shown in fig. 3;
step 1-3, the inspection robot reaches the designated inspection point by positioning and navigation (navigation error within 5 cm) and captures an image A for instrument detection;
and 4, respectively calculating the IOU, the mutual information, the pHash and the Confidence coefficient of each candidate frame, wherein the first row of the numerical values in the candidate frames is perceptual Hash pHash, the second row of the numerical values is mutual information, the third row is the IOU, and the fourth row is the Confidence coefficient. (ii) a
Step 4-1, computing the intersection-over-union parameter IOU between each candidate region obtained by the classifier and the estimated instrument position from step 2 (region C), giving the three indexes 0.7, 0.0, and 0.0; the calculation follows formula (1):
and 4-2, calculating a perception Hash pHash index. And performing perceptual hash pHash calculation on the candidate area image A obtained by the classifier and the instrument area image B in the intercepted template image. The perceptual hash pHash calculation is to scale two pictures to 32 × 32, perform cosine transform, select an 8 × 8 region at the upper left corner of the image after cosine transform, remove the direct current component of coordinates (0,0) to obtain 63-dimensional feature vectors, calculate the Hamming distance of the feature vectors of the image A and the image B as a perceptual hash pHash index, and calculate the obtained index as shown in FIG. 4.
Step 4-3, computing the mutual information index. The mutual information index of the candidate region image and the template image is calculated using formulas (2) to (5):
where X and Y are grayscale pixel values, and G(X) and H(Y) are the numbers of grayscale pixels of the template image and the candidate image.
Step 4-4, the three indexes (intersection-over-union IOU, mutual information, and perceptual hash pHash) of each candidate region are weighted according to formula (6) to obtain the region's confidence, where D is a constant.
Confidence = 1 - (pHash + 1/I(G(X), H(Y))) / (IOU + D) (6)
The confidences of all candidate regions are sorted from high to low, and the region with the maximum confidence is taken as the final region.
Example 2
A power meter detection method fusing inspection robot positioning information comprises the following specific steps:
step 1, the inspection robot reaches the designated inspection point by positioning and navigation (navigation error within 5 cm) and captures an image A for instrument detection;
step 4, computing the IOU, mutual information, pHash, and confidence of each candidate box; among the numbers shown in each candidate box, the first row is pHash, the second row is the mutual information, the third row is the IOU, and the fourth row is the confidence.
Step 4-1, computing the intersection-over-union parameter IOU between each candidate region obtained by the classifier and the estimated instrument position, as in formula (7); the result is the fourth row of the numbers in the boxes of FIG. 5:
and 4-2, calculating a perception Hash pHash index. And performing perceptual hash pHash calculation on the candidate area image A obtained by the classifier and the instrument area image B in the intercepted template image. The perceptual hash pHash calculation is to scale two pictures to 32 × 32, perform cosine transform, select an 8 × 8 region at the upper left corner of the image after cosine transform, remove the direct current component of coordinates (0,0) to obtain 63-dimensional feature vectors, calculate the Hamming distance of the feature vectors of the image A and the image B, and calculate the index as the perceptual hash pHash index, as shown in FIG. 5.
Step 4-3, computing the mutual information index. The mutual information index of the candidate region image and the template image is calculated using formulas (8) to (11).
where X and Y are grayscale pixel values, and G(X) and H(Y) are the numbers of grayscale pixels of the template image and the candidate image.
Step 4-4, the three indexes (intersection-over-union IOU, mutual information, and perceptual hash pHash) of each candidate region are weighted according to formula (12) to obtain the region's confidence, where D is a constant.
Confidence = 1 - (pHash + 1/I(G(X), H(Y))) / (IOU + D) (12)
Since the IOU index is smaller than the threshold threshold_IOU (0.3), the candidate detection region from the Adaboost classifier is discarded, and the estimated region obtained in step 2 is output as the detection result.
Claims (7)
1. A power meter detection method fusing inspection robot positioning information, characterized in that the specific steps are:
step 1, training a classifier by using an instrument image data set, selecting an image in the middle of an instrument shot at a routing inspection point as a template image for each routing inspection point, and enabling a routing inspection robot to reach the designated routing inspection point to obtain an image of the instrument to be detected;
step 2, coarsely positioning a target instrument region in the image of the instrument to be detected using the Fourier-Mellin transform and phase-correlation technique;
step 3, accurately positioning the target instrument region by using a machine learning method, and sending the image to be detected into a trained classifier to obtain a plurality of target candidate regions;
step 4, calculating three parameter indexes (perceptual hash, mutual information, and intersection-over-union) of the candidate regions, and screening the target candidate regions to obtain the final target, with the following specific steps:
step 4-1, computing the intersection-over-union parameter IOU between each target candidate region and the coarse-positioning target instrument region;
4-2, respectively carrying out perceptual hash calculation on each target candidate area image and the instrument area image in the template image to obtain a perceptual hash index;
4-3, respectively calculating mutual information indexes of each target candidate area image and the template image;
step 4-4, weighting the intersection-over-union IOU, mutual information index, and perceptual hash index of each target candidate region to obtain its confidence, and taking the target candidate region with the maximum confidence as the candidate detection result;
step 4-5, if the IOU of the candidate detection result is less than the set threshold threshold_IOU and (pHash + 1/I(G(X), H(Y))) is greater than the corresponding set threshold, taking the coarse-positioning target instrument region determined in step 2 as the final target; otherwise taking the candidate detection result as the final target, where pHash is the perceptual hash index of the candidate detection result and I(G(X), H(Y)) is the mutual information index.
2. The power meter detection method fusing inspection robot positioning information according to claim 1, characterized in that step 4-1 computes the intersection-over-union parameter IOU of each target candidate region and the coarse-positioning target instrument region as IOU = area(C ∩ n_i) / area(C ∪ n_i), where C is the coarse-positioning target instrument region and n_i is the target candidate region.
3. The power meter detection method fusing inspection robot positioning information according to claim 1, wherein the specific method of performing perceptual hash calculation on each target candidate area image and the meter area image in the template image in the step 4-2 is as follows:
the target candidate region image and the template image are scaled to the same size and cosine-transformed; a low-frequency region at the upper-left corner of the transformed image is selected; the direct-current component at coordinate (0,0) is removed, giving an N-dimensional feature vector; and the Hamming distance between the feature vectors of the target candidate region image and the template image is computed as the perceptual hash index.
4. The power meter detection method integrating the positioning information of the inspection robot according to the claim 1, wherein the calculation formula of the mutual information index of each target candidate area image and the template image in the step 4-3 is as follows:
where G(X) and H(Y) are the numbers of grayscale pixels of the template image and the candidate image, respectively, and W and H are the width and height of the candidate region image, respectively.
5. The power meter detection method integrating the positioning information of the inspection robot according to claim 1, wherein the calculation formula of the confidence level in the step 4-4 is as follows:
Confidence = 1 - (pHash + 1/I(G(X), H(Y))) / (IOU + D)
where I(G(X), H(Y)) is the mutual information index, pHash is the perceptual hash index, IOU is the intersection-over-union index, and D is a set constant.
6. The power meter detection method fusing inspection robot positioning information according to claim 1, characterized in that the IOU threshold threshold_IOU ranges from 0.1 to 0.4, and the similarity threshold ranges from 10 to 50.
7. The power meter detection method fusing the positioning information of the inspection robot according to the claim 1, wherein the classifier in the step 1 is an Adaboost classifier.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811148567.8A CN109360289B (en) | 2018-09-29 | 2018-09-29 | Power meter detection method fusing inspection robot positioning information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109360289A CN109360289A (en) | 2019-02-19 |
CN109360289B true CN109360289B (en) | 2021-09-28 |
Family
ID=65348090
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811148567.8A Active CN109360289B (en) | 2018-09-29 | 2018-09-29 | Power meter detection method fusing inspection robot positioning information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109360289B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110298339A (en) * | 2019-06-27 | 2019-10-01 | 北京史河科技有限公司 | A kind of instrument disk discrimination method, device and computer storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104268528A (en) * | 2014-09-28 | 2015-01-07 | 深圳市科松电子有限公司 | Method and device for detecting crowd gathered region |
CN105260412A (en) * | 2015-09-24 | 2016-01-20 | 东方网力科技股份有限公司 | Image storage method and device, and image retrieval method and device |
CN106951930A (en) * | 2017-04-13 | 2017-07-14 | 杭州申昊科技股份有限公司 | A kind of instrument localization method suitable for Intelligent Mobile Robot |
CN107092905A (en) * | 2017-03-24 | 2017-08-25 | 重庆邮电大学 | A kind of instrument localization method to be identified of electric inspection process robot |
CN107610162A (en) * | 2017-08-04 | 2018-01-19 | 浙江工业大学 | A kind of three-dimensional multimode state medical image autoegistration method based on mutual information and image segmentation |
CN107665348A (en) * | 2017-09-26 | 2018-02-06 | 山东鲁能智能技术有限公司 | A kind of digit recognition method and device of transformer station's digital instrument |
CN107958073A (en) * | 2017-12-07 | 2018-04-24 | 电子科技大学 | A kind of Color Image Retrieval based on particle swarm optimization algorithm optimization |
CN108446584A (en) * | 2018-01-30 | 2018-08-24 | 中国航天电子技术研究院 | A kind of unmanned plane scouting video image target automatic testing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||