CN109360289A - Power meter detection method fusing inspection robot positioning information - Google Patents

Power meter detection method fusing inspection robot positioning information

Info

Publication number
CN109360289A
CN109360289A (application CN201811148567.8A)
Authority
CN
China
Prior art keywords
image
inspection robot
meter
target candidate region
positioning information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811148567.8A
Other languages
Chinese (zh)
Other versions
CN109360289B (en)
Inventor
茅耀斌
陆亚涵
郭健
李胜
李萌
项文波
胥安东
潘云云
王天野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN201811148567.8A priority Critical patent/CN109360289B/en
Publication of CN109360289A publication Critical patent/CN109360289A/en
Application granted granted Critical
Publication of CN109360289B publication Critical patent/CN109360289B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C1/00 Registering, indicating or recording the time of events or elapsed time, e.g. time-recorders for work people
    • G07C1/20 Checking timed patrols, e.g. of watchman
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/431 Frequency domain transformation; Autocorrelation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a power meter detection method that fuses the positioning information of an inspection robot. The method comprises four steps: (1) the inspection robot reaches a designated position and captures an image; (2) combining the robot's positioning information, the target meter region is coarsely located using the Fourier-Mellin transform and phase correlation; (3) the target meter region is precisely located with a machine-learning classifier, yielding multiple target candidate regions; (4) fusing the robot's positioning information, a characteristic parameter combining multiple features (IoU, mutual information, perceptual hash) is computed and used to select among the candidate regions, giving the final target. Because it uses machine learning, the method can detect meters under a variety of illumination and pose changes.

Description

Power meter detection method fusing inspection robot positioning information
Technical field
The present invention relates to object detection methods, and in particular to a power meter detection method that fuses inspection robot positioning information.
Background technique
An electric power inspection robot operating in a substation must provide basic functions such as autonomous localization and navigation, on-site meter reading recognition, and automatic charging. The core function is detecting and reading the meters of live power equipment, such as discharge counters, oil level gauges, voltmeters and thermometers. These are largely mechanical instruments, so the robot must read them through its visual sensors. Accurately recognizing a meter reading first requires accurately locating the meter in the visual image, and most meters are outdoors. Most current methods use traditional image processing for detection and recognition; when illumination conditions change, detection performs poorly, and each general lighting condition needs its own parameter set. A more general detection and recognition method is therefore needed, one that handles the meter detection task under varying illumination and pose conditions.
Summary of the invention
The object of the present invention is to provide a power meter detection method that fuses inspection robot positioning information, solving the problems of existing meter detection techniques: the robot's position is imprecise, target scale and angle vary widely, and the target is strongly affected by illumination, all of which cause inaccurate detection.
The technical solution that realizes the object of the invention is a power meter detection method fusing inspection robot positioning information, with the following specific steps:
Step 1: train a classifier on a meter image dataset and, for each inspection point, choose one image shot at that point with the meter centered as the template image; the inspection robot reaches the designated inspection point and captures the meter image to be detected;
Step 2: coarsely locate the target meter region in the image to be detected using the Fourier-Mellin transform and phase correlation;
Step 3: precisely locate the target meter region with machine learning, feeding the image to be detected into the trained classifier to obtain several target candidate regions;
Step 4: compute three parameter indexes of the candidate regions, namely perceptual hash, mutual information and intersection-over-union (IoU), and screen the target candidate regions with them to obtain the final target.
Preferably, the specific steps in step 4 of computing the perceptual hash, mutual information and IoU indexes of the candidate regions and screening the target candidate regions for the final target are:
Step 4-1: compute the IoU between each target candidate region and the coarsely located target meter region;
Step 4-2: compute the perceptual hash between each target candidate region image and the meter region image in the template image, obtaining the perceptual hash index;
Step 4-3: compute the mutual information index between each target candidate region image and the template image;
Step 4-4: weight the three indexes of each target candidate region (IoU, mutual information and perceptual hash) to obtain its confidence, and take the target candidate region with the maximum confidence as the alternative detection result;
Step 4-5: if the alternative detection result simultaneously has an IoU below the set threshold thresholdIOU and (pHash + 1/I(G(X),H(Y))) above the threshold thresholdA, take the coarsely located target meter region determined in step 2 as the final target; otherwise take the alternative detection result as the final target, where pHash is the perceptual hash index of the alternative detection result and I(G(X),H(Y)) is its mutual information index.
Preferably, step 4-1 computes the IoU between each target candidate region and the coarsely located target meter region as:
IOU = area(C ∩ ni) / area(C ∪ ni), where C is the coarsely located target meter region and ni is a target candidate region.
Preferably, the specific method of step 4-2 for computing the perceptual hash between each target candidate region image and the meter region image in the template image is:
scale the target candidate region image and the template image to the same size and apply the cosine transform; take the low-frequency region in the upper-left corner of the transformed image and remove the DC component at coordinate (0,0), giving an N-dimensional feature vector; the Hamming distance between the feature vectors of the target candidate region image and the template image is the perceptual hash index.
Preferably, the mutual information index between each target candidate region image and the template image in step 4-3 is the standard mutual information of their gray-level distributions, where
G(X) and H(Y) are the numbers of gray-level pixels of the template image and the candidate image, and W and H are the width and height of the candidate region image.
Preferably, the confidence in step 4-4 is computed as:
Confidence = 1 - (pHash + 1/I(G(X),H(Y))) / (IOU + D)
where I(G(X),H(Y)) is the mutual information index, pHash is the perceptual hash index, IOU is the intersection-over-union index, and D is a set constant.
Preferably, the threshold thresholdIOU ranges over 0.1-0.4 and the threshold thresholdA over 10-50.
Preferably, the classifier in step 1 is an Adaboost classifier.
Compared with the prior art, the present invention has notable advantages: 1) on the basis of classifier screening, it uses the robot's localization and navigation together with the frequency domain of the image, performing a phase-correlation calculation and using the image shift offset to limit the search, which reduces false and missed detections; 2) it fuses the robot's positioning information: because the robot's existing localization is highly repeatable (e.g. position error below 5 cm), changes of scale and rotation are small, and a phase-correlation rough detection on this basis finds essentially all meters; 3) using machine learning, it can detect meters under a variety of illumination and pose changes.
Detailed description of the invention
Fig. 1 is a schematic diagram of the overall procedure of the present invention.
Fig. 2 is a schematic diagram of the training data.
Fig. 3 is a schematic diagram of the acquired template image.
Fig. 4 is the result figure of implementation case 1.
Fig. 5 is the result figure of implementation case 2.
Fig. 6 is a flow diagram of the present invention.
Specific embodiment
As shown in Figs. 1 to 6, a power meter detection method fusing inspection robot positioning information comprises the following specific steps:
Step 1: train a classifier on a meter image dataset and, for each inspection point, choose one image shot at that point with the meter centered as the template image; the inspection robot reaches the designated inspection point and captures the meter image to be detected;
Step 2: coarsely locate the target meter region in the image to be detected using the Fourier-Mellin transform and phase correlation;
Step 3: precisely locate the target meter region with machine learning, feeding the image to be detected into the trained classifier to obtain several target candidate regions;
Step 4: compute three parameter indexes of the candidate regions, namely perceptual hash, mutual information and intersection-over-union (IoU), and screen the target candidate regions with them to obtain the final target.
In a further embodiment, the specific steps in step 4 of computing the perceptual hash, mutual information and IoU indexes of the candidate regions and screening the target candidate regions for the final target are:
Step 4-1: compute the IoU between each target candidate region and the coarsely located target meter region;
Step 4-2: compute the perceptual hash between each target candidate region image and the meter region image in the template image, obtaining the perceptual hash index;
Step 4-3: compute the mutual information index between each target candidate region image and the template image;
Step 4-4: weight the three indexes of each target candidate region (IoU, mutual information and perceptual hash) to obtain its confidence, and take the target candidate region with the maximum confidence as the alternative detection result;
Step 4-5: if the alternative detection result simultaneously has an IoU below the set threshold thresholdIOU and (pHash + 1/I(G(X),H(Y))) above the threshold thresholdA, take the coarsely located target meter region determined in step 2 as the final target; otherwise take the alternative detection result as the final target, where pHash is the perceptual hash index of the alternative detection result and I(G(X),H(Y)) is its mutual information index.
In a further embodiment, step 4-1 computes the IoU between each target candidate region and the coarsely located target meter region as:
IOU = area(C ∩ ni) / area(C ∪ ni), where C is the coarsely located target meter region and ni is a target candidate region.
In a further embodiment, the specific method of step 4-2 for computing the perceptual hash between each target candidate region image and the meter region image in the template image is:
scale the target candidate region image and the template image to the same size and apply the cosine transform; take the low-frequency region in the upper-left corner of the transformed image and remove the DC component at coordinate (0,0), giving an N-dimensional feature vector; the Hamming distance between the feature vectors of the target candidate region image and the template image is the perceptual hash index.
In a further embodiment, the mutual information index between each target candidate region image and the template image in step 4-3 is the standard mutual information of their gray-level distributions, where
G(X) and H(Y) are the numbers of gray-level pixels of the template image and the candidate image, and W and H are the width and height of the candidate region image.
In a further embodiment, the confidence in step 4-4 is computed as:
Confidence = 1 - (pHash + 1/I(G(X),H(Y))) / (IOU + D)
where I(G(X),H(Y)) is the mutual information index, pHash is the perceptual hash index, IOU is the intersection-over-union index, and D is a set constant.
In a further embodiment, the threshold thresholdIOU ranges over 0.1-0.4 and the threshold thresholdA over 10-50.
In a further embodiment, the classifier in step 1 is an Adaboost classifier.
The present invention addresses deficiencies in existing meter detection techniques, namely: 1) the robot's position is imprecise, and target scale and angle vary widely; 2) the target is strongly affected by illumination, e.g. over-bright or over-dark scenes. To solve these two problems, a technical solution combining two detection methods is used: 1) the robot's existing positioning information is exploited: thanks to the accuracy of robot navigation, the position error is below 5 cm, so changes of scale and rotation in the captured image are small, and on this basis a rough detection is carried out with phase correlation; 2) for illumination changes of the image and pose changes of the target, precise detection uses a machine-learning method.
The present invention is further described below with reference to embodiments.
Embodiment 1
A power meter detection method fusing inspection robot positioning information, comprising the following specific steps:
Step 1-1: collect the images captured by the inspection robot at on-site inspection points, extract the meter images from them, and build a 48×48 grayscale image dataset organized by meter class, as shown in Fig. 2; feed the image dataset into the classifier for training.
Step 1-2: from the images captured at each inspection point, choose one with the meter centered, as shown in Fig. 3, as the detection template image.
Step 1-3: the inspection robot reaches the designated inspection point by localization and navigation, with a navigation error of 5 cm, and shoots an image A for meter detection.
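Step 1 names an Adaboost classifier trained on 48×48 grayscale patches but does not spell out the weak learners or features (production detectors typically use Haar-feature cascades). As an illustration of the boosting idea only, the sketch below trains AdaBoost with single-feature threshold stumps on raw feature vectors; the stump design and raw-pixel features are assumptions of this sketch, not the patent's implementation.

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost with single-feature threshold stumps.
    X: (n_samples, n_features) floats, y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                       # brute-force stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - thr) > 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = max(err, 1e-10)                    # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)    # stump vote weight
        w *= np.exp(-alpha * y * pred)           # re-weight samples
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def adaboost_predict(stumps, X):
    """Sign of the weighted stump votes."""
    score = sum(a * np.where(s * (X[:, j] - t) > 0, 1, -1)
                for a, j, t, s in stumps)
    return np.where(score > 0, 1, -1)
```

For real 48×48 patches each image would be flattened into one row of X; the brute-force threshold search is only practical for small illustrative data.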
Step 2: apply the Fourier-Mellin transform and phase correlation to the image A shot by the inspection robot and the template image B of the current inspection point, obtaining the image shift offsets (-20, 30) along the horizontal and vertical axes, from which the estimated meter position (728, 891, 201, 198) in image A is computed.
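Step 2's coarse location rests on phase correlation: the inverse FFT of the normalized cross-power spectrum of the two images concentrates into a peak at their relative displacement. The full Fourier-Mellin pipeline additionally resamples to log-polar coordinates so rotation and scale also become shifts; the sketch below is an illustration under that simplification (translation recovery only), not the patent's code, and shows how offsets such as (-20, 30) can be obtained.

```python
import numpy as np

def phase_correlation_shift(template, image):
    """Estimate the (row, col) translation of `image` relative to
    `template` (equal-size grayscale arrays) by phase correlation."""
    f_t = np.fft.fft2(template)
    f_i = np.fft.fft2(image)
    cross = np.conj(f_t) * f_i
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.real(np.fft.ifft2(cross))     # impulse at the displacement
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # offsets past half the image size wrap around to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

The recovered offset, applied to the meter's known position in the template, gives the estimated meter box in the newly captured image.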
Step 3: precisely locate within the image to be detected using the pre-trained machine-learning Adaboost classifier, obtaining multiple target candidate regions. Fig. 4 shows the target regions screened by the classifier; the Adaboost classifier yields three candidate boxes.
Step 4: compute the IoU, mutual information, pHash and confidence of each candidate box. Of the numbers in each candidate box, the first row is the perceptual hash pHash, the second the mutual information, the third the IoU, and the fourth the confidence.
Step 4-1: compute the intersection-over-union between each candidate region obtained by the classifier and the meter position estimated in step 2 (region C), giving the three IoU indexes (0.7, 0.0, 0.0); the calculation follows formula (1): IOU = area(C ∩ ni) / area(C ∪ ni).
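Formula (1)'s intersection-over-union for boxes in the (x, y, w, h) layout of positions such as (728, 891, 201, 198) can be sketched as follows (a standard implementation; the tuple order is an assumption):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x, y, w, h) boxes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))   # overlap width
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))   # overlap height
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```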
Step 4-2: compute the perceptual hash pHash index. Each candidate region image obtained by the classifier is compared with the meter region image cropped from the template image: both are scaled to 32×32 and cosine-transformed; the 8×8 region in the upper-left corner of the transformed image is taken and the DC component at coordinate (0,0) is removed, giving a 63-dimensional feature vector; the Hamming distance between the two feature vectors is the perceptual hash pHash index. The computed indexes are shown in Fig. 4.
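Step 4-2's perceptual hash can be sketched as below. The 2-D DCT, the 8×8 low-frequency corner, the dropped DC term and the 63-dimensional vector follow the text; binarizing each coefficient against the median before taking the Hamming distance is the usual pHash convention and is an assumption here, since the extraction does not show that step.

```python
import numpy as np

def dct2(a):
    """Orthonormal 2-D DCT-II built from an explicit transform matrix,
    so the sketch needs nothing beyond numpy (square input assumed)."""
    n = a.shape[0]
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m @ a @ m.T

def phash_bits(patch32):
    """Keep the 8x8 low-frequency DCT corner of a 32x32 grayscale patch,
    drop the DC term at (0, 0) to get 63 coefficients, then binarize
    against their median (the binarization is an assumed convention)."""
    coeffs = dct2(patch32.astype(float))[:8, :8].ravel()[1:]
    return (coeffs > np.median(coeffs)).astype(np.uint8)

def phash_distance(a32, b32):
    """Hamming distance between the 63-bit hashes of two patches."""
    return int(np.sum(phash_bits(a32) != phash_bits(b32)))
```

A small distance means the candidate looks like the template meter; identical patches score 0.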
Step 4-3: compute the mutual information index. The mutual information index between each candidate region image and the template image is computed using formulas (2)-(5), where X and Y are gray-level pixel values and G(X), H(Y) are the numbers of gray-level pixels of the template image and the candidate image.
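Formulas (2)-(5) did not survive extraction; a standard joint-histogram mutual information between the gray levels of the two images, consistent with the symbols in the text, would look like this (the exact normalization the patent uses is an assumption):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=256):
    """Mutual information of the gray-level distributions of two
    equal-size images, from their joint histogram:
    I = sum_{a,b} p(a,b) * log( p(a,b) / (p(a) * p(b)) )."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)        # marginal of img_a
    p_b = p_ab.sum(axis=0, keepdims=True)        # marginal of img_b
    nz = p_ab > 0                                # skip empty histogram cells
    ratio = p_ab[nz] / (p_a @ p_b)[nz]
    return float(np.sum(p_ab[nz] * np.log(ratio)))
```

High mutual information indicates the candidate's gray-level content is predictable from the template, even under monotone illumination changes.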
Step 4-4: for each candidate region, weight the three indexes IoU, mutual information and perceptual hash pHash according to formula (6) to obtain that region's confidence, where D is a constant:
Confidence = 1 - (pHash + 1/I(G(X),H(Y))) / (IOU + D) (6)
Sort all candidate regions by confidence from large to small; the region with the maximum confidence is the final region.
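Formula (6) and the final selection can be sketched directly; the value of D is an assumption, since the patent only says it is a set constant:

```python
def confidence(phash, mi, iou, d=1.0):
    """Formula (6): Confidence = 1 - (pHash + 1/MI) / (IoU + D).
    A small hash distance, large mutual information and large IoU all
    raise the score; d (default 1.0, an assumed choice) keeps the
    denominator positive."""
    return 1.0 - (phash + 1.0 / mi) / (iou + d)

def best_candidate(candidates, d=1.0):
    """candidates: iterable of (region, phash, mi, iou) tuples;
    return the region with the highest confidence (step 4-4)."""
    return max(candidates, key=lambda c: confidence(c[1], c[2], c[3], d))[0]
```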
Embodiment 2
A power meter detection method fusing inspection robot positioning information, comprising the following specific steps:
Step 1: the inspection robot reaches the designated inspection point by localization and navigation, with a navigation error of 5 cm, and shoots an image A for meter detection.
Step 2: apply the Fourier-Mellin transform and phase correlation to the image A shot by the robot and the template image B of the current inspection point, obtaining the image shift offsets (-12, 33) along the horizontal and vertical axes, from which the estimated meter position (984, 502, 103, 86) in image A is computed.
Step 3: precisely locate within the image to be detected using the pre-trained machine-learning Adaboost classifier; the resulting target candidate regions are shown in Fig. 5. The Adaboost classifier yields one candidate box (the box with numbers); the box without numbers is the region obtained by step 2.
Step 4: compute the IoU, mutual information, pHash and confidence of each candidate box. Of the numbers in the candidate box, the first row is pHash, the second the mutual information, the third the IoU, and the fourth the confidence.
Step 4-1: compute the intersection-over-union between the candidate region obtained by the classifier and the estimated meter position, as in formula (7); the result appears among the numbers in the box in Fig. 5.
Step 4-2: compute the perceptual hash pHash index. The candidate region image obtained by the classifier is compared with the meter region image cropped from the template image: both are scaled to 32×32 and cosine-transformed; the 8×8 region in the upper-left corner of the transformed image is taken and the DC component at coordinate (0,0) is removed, giving a 63-dimensional feature vector; the Hamming distance between the two feature vectors is the perceptual hash pHash index. The computed index is shown in Fig. 5.
Step 4-3: compute the mutual information index. The mutual information index between the candidate region image and the template image is computed using formulas (8)-(11), where X and Y are gray-level pixel values and G(X), H(Y) are the numbers of gray-level pixels of the template image and the candidate image.
Step 4-4: weight the three indexes IoU, mutual information and perceptual hash pHash of each candidate region according to formula (12) to obtain that region's confidence, where D is a constant:
Confidence = 1 - (pHash + 1/I(G(X),H(Y))) / (IOU + D) (12)
Since the IoU index is below the threshold thresholdIOU (0.3), the alternative detection region from the Adaboost classifier is discarded, and the region estimated in step 2 is output as the detection result.
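The decision made here, combined with the full condition of step 4-5, can be sketched as a small helper. Note thr_iou = 0.3 matches this embodiment, while thr_a = 30.0 is an assumed point in the stated 10-50 range:

```python
def final_region(coarse_region, candidate_region, cand_iou,
                 cand_phash, cand_mi, thr_iou=0.3, thr_a=30.0):
    """Step 4-5 fallback rule: when the best classifier candidate both
    overlaps the coarse phase-correlation region too little (IoU below
    thresholdIOU) and scores badly on appearance (pHash + 1/MI above
    thresholdA), trust the coarse region from step 2; otherwise keep
    the classifier's candidate."""
    if cand_iou < thr_iou and (cand_phash + 1.0 / cand_mi) > thr_a:
        return coarse_region
    return candidate_region
```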

Claims (8)

1. A power meter detection method fusing inspection robot positioning information, characterized by the following specific steps:
Step 1: train a classifier on a meter image dataset and, for each inspection point, choose one image shot at that point with the meter centered as the template image; the inspection robot reaches the designated inspection point and captures the meter image to be detected;
Step 2: coarsely locate the target meter region in the image to be detected using the Fourier-Mellin transform and phase correlation;
Step 3: precisely locate the target meter region with machine learning, feeding the image to be detected into the trained classifier to obtain several target candidate regions;
Step 4: compute three parameter indexes of the candidate regions, namely perceptual hash, mutual information and intersection-over-union (IoU), and screen the target candidate regions with them to obtain the final target.
2. The power meter detection method fusing inspection robot positioning information according to claim 1, characterized in that the specific steps in step 4 of computing the perceptual hash, mutual information and IoU indexes of the candidate regions and screening the target candidate regions for the final target are:
Step 4-1: compute the IoU between each target candidate region and the coarsely located target meter region;
Step 4-2: compute the perceptual hash between each target candidate region image and the meter region image in the template image, obtaining the perceptual hash index;
Step 4-3: compute the mutual information index between each target candidate region image and the template image;
Step 4-4: weight the three indexes of each target candidate region (IoU, mutual information and perceptual hash) to obtain its confidence, and take the target candidate region with the maximum confidence as the alternative detection result;
Step 4-5: if the alternative detection result simultaneously has an IoU below the set threshold thresholdIOU and (pHash + 1/I(G(X),H(Y))) above the threshold thresholdA, take the coarsely located target meter region determined in step 2 as the final target; otherwise take the alternative detection result as the final target, where pHash is the perceptual hash index of the alternative detection result and I(G(X),H(Y)) is its mutual information index.
3. The power meter detection method fusing inspection robot positioning information according to claim 2, characterized in that step 4-1 computes the IoU between each target candidate region and the coarsely located target meter region as:
IOU = area(C ∩ ni) / area(C ∪ ni), where C is the coarsely located target meter region and ni is a target candidate region.
4. The power meter detection method fusing inspection robot positioning information according to claim 2, characterized in that the specific method of step 4-2 for computing the perceptual hash between each target candidate region image and the meter region image in the template image is:
scale the target candidate region image and the template image to the same size and apply the cosine transform; take the low-frequency region in the upper-left corner of the transformed image and remove the DC component at coordinate (0,0), giving an N-dimensional feature vector; the Hamming distance between the feature vectors of the target candidate region image and the template image is the perceptual hash index.
5. The power meter detection method fusing inspection robot positioning information according to claim 2, characterized in that the mutual information index between each target candidate region image and the template image in step 4-3 is the standard mutual information of their gray-level distributions, where
G(X) and H(Y) are the numbers of gray-level pixels of the template image and the candidate image, and W and H are the width and height of the candidate region image.
6. The power meter detection method fusing inspection robot positioning information according to claim 2, characterized in that the confidence in step 4-4 is computed as:
Confidence = 1 - (pHash + 1/I(G(X),H(Y))) / (IOU + D)
where I(G(X),H(Y)) is the mutual information index, pHash is the perceptual hash index, IOU is the intersection-over-union index, and D is a set constant.
7. The power meter detection method fusing inspection robot positioning information according to claim 2, characterized in that the threshold thresholdIOU ranges over 0.1-0.4 and the threshold thresholdA over 10-50.
8. The power meter detection method fusing inspection robot positioning information according to claim 1, characterized in that the classifier in step 1 is an Adaboost classifier.
CN201811148567.8A 2018-09-29 2018-09-29 Power meter detection method fusing inspection robot positioning information Active CN109360289B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811148567.8A CN109360289B (en) 2018-09-29 2018-09-29 Power meter detection method fusing inspection robot positioning information


Publications (2)

Publication Number Publication Date
CN109360289A true CN109360289A (en) 2019-02-19
CN109360289B CN109360289B (en) 2021-09-28

Family

ID=65348090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811148567.8A Active CN109360289B (en) 2018-09-29 2018-09-29 Power meter detection method fusing inspection robot positioning information

Country Status (1)

Country Link
CN (1) CN109360289B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110298339A (en) * 2019-06-27 2019-10-01 北京史河科技有限公司 A kind of instrument disk discrimination method, device and computer storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268528A (en) * 2014-09-28 2015-01-07 深圳市科松电子有限公司 Method and device for detecting crowd gathered region
CN105260412A (en) * 2015-09-24 2016-01-20 东方网力科技股份有限公司 Image storage method and device, and image retrieval method and device
CN106951930A (en) * 2017-04-13 2017-07-14 杭州申昊科技股份有限公司 A kind of instrument localization method suitable for Intelligent Mobile Robot
CN107092905A (en) * 2017-03-24 2017-08-25 重庆邮电大学 A kind of instrument localization method to be identified of electric inspection process robot
CN107610162A (en) * 2017-08-04 2018-01-19 浙江工业大学 A kind of three-dimensional multimode state medical image autoegistration method based on mutual information and image segmentation
CN107665348A (en) * 2017-09-26 2018-02-06 山东鲁能智能技术有限公司 A kind of digit recognition method and device of transformer station's digital instrument
CN107958073A (en) * 2017-12-07 2018-04-24 电子科技大学 A kind of Color Image Retrieval based on particle swarm optimization algorithm optimization
CN108446584A (en) * 2018-01-30 2018-08-24 中国航天电子技术研究院 A kind of unmanned plane scouting video image target automatic testing method



Also Published As

Publication number Publication date
CN109360289B (en) 2021-09-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant