CN110781731A - Inspection robot instrument identification method based on specular reflection removal - Google Patents

Inspection robot instrument identification method based on specular reflection removal

Info

Publication number
CN110781731A
CN110781731A
Authority
CN
China
Prior art keywords
image
specular reflection
area
inspection robot
instrument
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910877355.1A
Other languages
Chinese (zh)
Inventor
刘杨
刘俊
李冰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dianji University filed Critical Shanghai Dianji University
Priority to CN201910877355.1A priority Critical patent/CN110781731A/en
Publication of CN110781731A publication Critical patent/CN110781731A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components, by matching or filtering

Abstract

The invention relates to an inspection robot instrument identification method based on specular reflection removal, comprising the following steps: (1) acquire the instrument image to be identified; (2) detect the area of the specular reflection region in the instrument image to be identified; if the area is smaller than a threshold value, identify the reading in the image, otherwise execute step (3); (3) control the inspection robot to move while continuously shooting images of the target instrument; (4) after the inspection robot stops moving, detect the specular reflection area of the target instrument image again; if the area is smaller than the threshold value, identify the reading in the image, otherwise execute step (5); (5) process the target instrument images captured during the robot's movement to remove the specular reflection light spots; (6) take the image with the specular reflection light spots removed as the new instrument image to be identified, and return to step (1). Compared with the prior art, the invention can effectively remove specular reflection and enables the inspection robot to read instrument data under harsh illumination conditions.

Description

Inspection robot instrument identification method based on specular reflection removal
Technical Field
The invention relates to instrument identification methods, and in particular to an inspection robot instrument identification method based on specular reflection removal.
Background
With the development of industry and the acceleration of urbanization, electric power plays a vital role in human society. The substation is a key part of the power system, and its smooth operation is essential. Because substations are located outdoors, the equipment suffers not only natural wear but also adverse weather such as rain and snow, which accelerates aging; regular inspection is therefore needed to ensure safety. At present, most substation instruments in China are pointer-type meters, chosen for their interference resistance and stability. Such meters cannot output digital signals directly, so most of them are still read by manual inspection. However, the quality of manual inspection is easily constrained by the psychological state, skill level, and other conditions of the inspectors, and false detections and missed detections occur frequently. According to grid operation statistics from the China Electric Power Research Institute, missed and false detections on power transformation equipment cause economic losses of more than 2.6 billion yuan every year. In addition, long-term work in high-voltage environments is harmful to inspectors, and manual inspection is unsuitable in severe weather such as thunderstorms. Therefore, with the continuous development of inspection robot technology, robots are gradually replacing manual labor for inspection tasks. In actual substation use, the environment in which the inspection robot acquires images is very complex, with uneven illumination, occlusions, rain coverage, and other adverse conditions, and the captured pointer-meter dial images also suffer interference from the surroundings. Poor image quality severely disturbs later image preprocessing and meter reading, leading to inaccurate readings. In particular, strong specular reflections can obscure part of the pointer and dial, greatly reducing recognition accuracy.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an inspection robot instrument identification method based on specular reflection removal.
The object of the invention is achieved by the following technical solution:
a method for identifying an inspection robot instrument based on specular reflection removal comprises the following steps:
(1) the inspection robot acquires an instrument image to be identified;
(2) detecting the area of the specular reflection region in the instrument image to be identified; if the area is smaller than a threshold value, identifying the reading in the image; otherwise, executing step (3);
(3) controlling the inspection robot to move while continuously shooting images of the target instrument;
(4) detecting the area of the specular reflection region in the target instrument image after the inspection robot stops moving; if the area is smaller than the threshold value, identifying the reading in the image; otherwise, executing step (5);
(5) processing the target instrument images captured during the robot's movement, based on feature matching, to remove the specular reflection light spots;
(6) taking the image with the specular reflection light spots removed as the new instrument image to be identified, and returning to step (1).
The area of the specular reflection region is obtained as follows:
(a) converting the RGB image to be detected into the YUV luminance space;
(b) obtaining the luminance saliency value of each pixel in the RGB image;
(c) marking pixels whose luminance saliency value is greater than a threshold as highlight pixels, and the rest as diffuse reflection points;
(d) taking the area occupied by the highlight pixels in the RGB image to be detected as the area of the specular reflection region.
Step (a) performs the conversion using the standard RGB-to-YUV transform:

$$\begin{bmatrix} Y \\ U \\ V \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.147 & -0.289 & 0.436 \\ 0.615 & -0.515 & -0.100 \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

where Y denotes the luminance channel, U and V denote the chrominance differences, and R, G, B denote the three color channels of the RGB image.
The luminance saliency value in step (b) is obtained by the following formula:

$$S(p) = \sum_{q=1}^{N} \left\| Y_p - Y_q \right\|$$

where S(p) is the luminance saliency value of the pixel p under consideration, Y_p is the gray value of pixel p in the luminance channel, Y_q is the gray value of pixel q in the luminance channel, ||·|| denotes the gray-level distance, and N is the total number of pixels in the RGB image.
The threshold value in step (c) is determined as
Figure BDA0002204726080000023
Figure BDA0002204726080000031
Step (5) specifically comprises the following sub-steps:
(51) detecting the corner points in each target instrument image and taking them as feature points;
(52) matching the feature points between pairs of target instrument images;
(53) eliminating mismatched feature points;
(54) warping the two matched images to complete image alignment;
(55) replacing the overlap region of the aligned images with the per-pixel minimum of the two images;
(56) smoothing the overlap region of the image after replacement, to obtain the image with the specular reflection light spots removed.
In step (51), feature points are detected using the Harris algorithm.
In step (52), feature points are matched using the normalized cross-correlation (NCC) algorithm.
In step (53), mismatched feature points are eliminated using the random sample consensus (RANSAC) algorithm.
In step (56), smoothing is performed using the multi-band blending (Multiband blender) algorithm.
Compared with the prior art, the invention has the following advantages:
(1) it helps the automatic inspection robot read instrument data under harsh illumination conditions and monitor the operating state of substation instruments, reducing potential safety hazards, which is of great significance for promoting the development of intelligent substations;
(2) it exploits the mobility of the inspection robot to obtain instrument images from multiple angles and restores the image using the valid information of adjacent images, thereby removing light spots indirectly; specular reflection is eliminated without installing additional sensors, the original information of the image is recovered, and image quality and recognition accuracy are improved at low cost.
Drawings
FIG. 1 is a block diagram of the overall flow of the inspection robot instrument recognition method based on specular reflection removal according to the present invention;
FIG. 2 is a schematic diagram of the inspection robot movement of the present invention;
FIG. 3 is a block diagram of a process for removing specularly reflected light spots according to the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. Note that the following embodiments are merely illustrative; the invention is not limited to the applications or uses described, nor to the embodiments below.
Examples
As shown in fig. 1, a method for identifying a patrol robot instrument based on specular reflection removal includes the following steps:
(1) the inspection robot acquires an instrument image to be identified;
(2) detecting the area of the specular reflection region in the instrument image to be identified; if the area is smaller than a threshold value, identifying the reading in the image; otherwise, executing step (3);
(3) controlling the inspection robot to move while continuously shooting images of the target instrument;
(4) detecting the area of the specular reflection region in the target instrument image after the inspection robot stops moving; if the area is smaller than the threshold value, identifying the reading in the image; otherwise, executing step (5);
(5) processing the target instrument images captured during the robot's movement, based on feature matching, to remove the specular reflection light spots;
(6) taking the image with the specular reflection light spots removed as the new instrument image to be identified, and returning to step (1).
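By way of illustration, the control flow of steps (1)-(6) can be sketched as the following loop. All helper names (capture_image, move_along_arc, recognize_reading, remove_specular_spots) are hypothetical placeholders, not names from the disclosure; specular_area_ratio is sketched later in this embodiment.

```python
def inspect_meter(robot, area_threshold=0.05, max_rounds=3):
    """Hypothetical top-level loop for steps (1)-(6); names are placeholders."""
    image = robot.capture_image()                     # step (1)
    for _ in range(max_rounds):
        ratio, _ = specular_area_ratio(image)         # step (2): measure highlight area
        if ratio < area_threshold:
            return recognize_reading(image)           # area small enough: read the dial
        shots = robot.move_along_arc(num_shots=5)     # step (3): shoot while moving
        ratio, _ = specular_area_ratio(shots[-1])     # step (4): re-check after stopping
        if ratio < area_threshold:
            return recognize_reading(shots[-1])
        image = remove_specular_spots(shots)          # step (5): restore via feature matching
        # step (6): treat the restored image as the new image to identify and loop
    return None  # reading could not be recovered within max_rounds
```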
The area of the specular reflection region is obtained as follows:
(a) Convert the RGB image to be detected into the YUV luminance space, using the standard RGB-to-YUV transform:

$$\begin{bmatrix} Y \\ U \\ V \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.147 & -0.289 & 0.436 \\ 0.615 & -0.515 & -0.100 \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

where Y denotes the luminance channel, U and V denote the chrominance differences, and R, G, B denote the three color channels of the RGB image.
(b) Obtain the luminance saliency value of each pixel in the RGB image:

$$S(p) = \sum_{q=1}^{N} \left\| Y_p - Y_q \right\|$$

where S(p) is the luminance saliency value of the pixel p under consideration, Y_p is the gray value of pixel p in the luminance channel, Y_q is the gray value of pixel q in the luminance channel, ||·|| denotes the gray-level distance, and N is the total number of pixels in the RGB image.
(c) Mark pixels whose luminance saliency value is greater than the threshold as highlight pixels, and the rest as diffuse reflection points:
the threshold is determined as
Figure BDA0002204726080000043
Figure BDA0002204726080000051
If the saliency value of a pixel is greater than the threshold, the pixel is set as a highlight pixel; if its saliency value is less than the threshold, the pixel is set as a diffuse reflection point.
(d) The area occupied by the highlight pixels in the RGB image to be detected is taken as the area of the specular reflection region.
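For illustration, steps (a)-(d) can be sketched in Python/OpenCV as below; using the mean saliency plus k standard deviations as the threshold is only an assumption, since the threshold formula of the disclosure is not reproduced in this text.

```python
import cv2
import numpy as np

def specular_area_ratio(bgr_image, k=1.0):
    """Estimate the fraction of the image covered by specular highlights.

    Per-pixel luminance saliency S(p) = sum over all pixels q of |Y_p - Y_q|;
    pixels whose saliency exceeds a threshold are treated as highlights.
    The threshold mean(S) + k*std(S) is an assumption of this sketch.
    """
    # Step (a): convert to YUV and keep the luminance channel Y.
    yuv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YUV)
    y = yuv[:, :, 0].astype(np.float64)

    # Step (b): S(p) = sum over gray levels g of hist[g] * |Y_p - g|, which
    # equals sum over all pixels q of |Y_p - Y_q| but costs O(N*256), not O(N^2).
    hist, _ = np.histogram(y, bins=256, range=(0, 256))
    levels = np.arange(256, dtype=np.float64)
    saliency_lut = np.array([np.sum(hist * np.abs(g - levels)) for g in levels])
    saliency = saliency_lut[y.astype(np.int32)]

    # Step (c): split into highlight and diffuse pixels (assumed threshold).
    threshold = saliency.mean() + k * saliency.std()
    highlight_mask = saliency > threshold

    # Step (d): the highlight fraction serves as the specular area measure.
    return highlight_mask.mean(), highlight_mask
```

The returned ratio can be compared against the area threshold of steps (2) and (4) above.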
In step (3), the inspection robot is controlled to move along an arc so as to reduce the apparent displacement of the target instrument caused by the robot's motion, as shown in fig. 2. Because the robot only needs to move a small distance or rotate through a small angle to obtain images with different reflection regions, the acquisition position of the instrument is essentially unaffected. During the movement, the inspection robot photographs the target instrument from multiple angles, obtaining multiple images of the same instrument. After the movement stops, the specular reflection area is detected again; if it still exceeds the threshold, image restoration based on feature correspondence is performed.
Step (5) removes the light spots using a new feature-matching-based spot removal algorithm, shown in fig. 3. First, feature points are detected with the Harris algorithm; feature matching is then performed with the NCC algorithm, and mismatched points are removed with the RANSAC algorithm; next, image warping and overlap-region replacement are carried out; finally, the overlap region of the images is smoothed to eliminate the seam. The resulting output image effectively eliminates the specular reflection. The specific steps are as follows:
(51) detecting the corner points in each target instrument image and taking them as feature points;
(52) matching the feature points between pairs of target instrument images;
(53) eliminating mismatched feature points;
(54) warping the two matched images to complete image alignment;
(55) replacing the overlap region of the aligned images with the per-pixel minimum of the two images;
(56) smoothing the overlap region of the image after replacement, to obtain the image with the specular reflection light spots removed.
More specifically:
1. Corner detection
In the first step, feature points are detected in the captured images. An image feature is a point that carries rich information, such as a corner or a location where the texture changes sharply. Commonly used corner detection algorithms include the Harris, SIFT, and SURF algorithms. Among these, the Harris algorithm is robust to noise when detecting feature points on substation instruments; it is also computationally simple and invariant to rotation and illumination, so it adapts to different lighting conditions. This embodiment therefore uses the Harris algorithm for feature point detection. The Harris operator determines corner points by computing the gray-level change produced by moving a window across the image.
Assuming the image I(x, y) is shifted by (m, n) at point (x, y), the resulting autocorrelation function is given by equations (1) and (2):

$$E(m,n) = \sum_{x,y} w(x,y)\left[ I(x+m,\, y+n) - I(x,y) \right]^2 \quad (1)$$

$$M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \quad (2)$$

where w(x, y) is a window function (here a Gaussian weighting function) centered at (x, y), I_x and I_y are the partial derivatives of the image, and M is the matrix of partial derivatives whose eigenvalues determine the properties of the pixel. In addition, the corner response function R is defined in equation (3):

$$R = \det M - k\,(\operatorname{trace} M)^2 \quad (3)$$

where det M is the determinant of M and trace M is its trace. If the R value of a point is greater than the set threshold, the point is a corner point; otherwise it is not.
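For illustration, this corner detection step maps naturally onto OpenCV's built-in Harris operator; in the minimal sketch below, blockSize, ksize, k, and the response threshold are assumed values rather than values from the disclosure.

```python
import cv2
import numpy as np

def harris_corners(gray, k=0.04, thresh_ratio=0.01):
    """Detect Harris corners; returns an array of (x, y) corner coordinates."""
    gray = np.float32(gray)
    # R = det(M) - k * trace(M)^2, computed per pixel over a small window.
    response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=k)
    # Keep points whose response exceeds a fraction of the maximum response.
    ys, xs = np.where(response > thresh_ratio * response.max())
    return np.stack([xs, ys], axis=1)
```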
2. Feature point matching
After the feature points are obtained by the Harris algorithm, the next step is to put them into one-to-one correspondence by searching for possible matches. Given the accuracy and speed requirements of feature matching, the normalized cross-correlation (NCC) algorithm is used. The NCC algorithm determines whether two image patches are related by computing the correlation coefficient between them, as in equation (4):

$$NCC(x,y) = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n}\left[G(i,j)-\bar{G}\right]\left[H_{x,y}(i,j)-\bar{H}_{x,y}\right]}{\sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n}\left[G(i,j)-\bar{G}\right]^{2}\;\sum_{i=1}^{m}\sum_{j=1}^{n}\left[H_{x,y}(i,j)-\bar{H}_{x,y}\right]^{2}}} \quad (4)$$

where G and H are the two images to be matched, of sizes m × n and M × N respectively, and H_{x,y} is the sub-image of H of size m × n whose upper-left corner is at (x, y). The larger the computed NCC value, the higher the linear correlation of the two images.
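A minimal NumPy sketch of NCC patch matching around detected corners follows; the patch half-width and the acceptance threshold min_ncc are illustrative assumptions.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_corners(img1, img2, corners1, corners2, half=7, min_ncc=0.8):
    """Greedily pair corners whose surrounding patches correlate strongly."""
    matches = []
    for (x1, y1) in corners1:
        patch1 = img1[y1 - half:y1 + half + 1, x1 - half:x1 + half + 1]
        if patch1.shape != (2 * half + 1, 2 * half + 1):
            continue  # corner too close to the image border
        best, best_score = None, min_ncc
        for (x2, y2) in corners2:
            patch2 = img2[y2 - half:y2 + half + 1, x2 - half:x2 + half + 1]
            if patch2.shape != patch1.shape:
                continue
            score = ncc(patch1.astype(np.float64), patch2.astype(np.float64))
            if score > best_score:
                best, best_score = (x2, y2), score
        if best is not None:
            matches.append(((x1, y1), best))
    return matches
```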
3. Mismatch elimination
After feature point matching is completed, some mismatched points remain between the two pictures because of similar gray values and partially similar feature points. To guarantee matching accuracy and protect the detailed parts of the image, these mismatches must be eliminated. In this embodiment the mismatched points are filtered by the random sample consensus (RANSAC) algorithm. The basic idea of RANSAC is to randomly select feature points from the obtained matches and, through calculation and iteration, find the optimal model under which the most feature points match. Furthermore, since the motion between the images involves rotation in different directions and angles, the images must be projected onto the same plane. The relationship between the two images can be described by a homography matrix H, as in equation (5):

$$H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \quad (5)$$

where the h_{ij} are arbitrary numbers and h_{33} is typically set to 1. The procedure for obtaining the optimal matrix H with the best-matching feature points is as follows:
1. Randomly select n data points from the set P of N data points, where n does not exceed N.
2. Fit a model M using the n data points.
3. Compute the distance between each remaining point in P and the model M. If the distance exceeds a threshold, the point is regarded as an outlier; otherwise it is an inlier. Record the number m of inliers corresponding to the current model M.
4. After k iterations, select the model M with the largest number of inliers as the fitting result. The probability that the selected model is correct is given by equation (6):

$$p = 1 - \left(1 - w^{m}\right)^{k} \quad (6)$$

where p is the probability that the model selected by RANSAC is correct, w is the proportion of inliers, m is the number of points needed to compute the model, and k is the number of iterations of the algorithm. In each experiment m = 4 and k = 100 were used, which is sufficient to produce a reliable homography.
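In practice, fitting the homography with RANSAC corresponds directly to OpenCV's findHomography; a sketch, assuming the ((x1, y1), (x2, y2)) match pairs produced by the NCC sketch above:

```python
import cv2
import numpy as np

def robust_homography(matches, reproj_thresh=3.0):
    """Estimate a homography from point matches, discarding outliers.

    `matches` is a list of ((x1, y1), (x2, y2)) pairs; at least 4 are
    required. RANSAC repeatedly fits H from 4 random pairs and keeps the
    model with the most inliers, mirroring steps 1-4 above.
    """
    src = np.float32([m[0] for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([m[1] for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    return H, inlier_mask
```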
4. Image warping
Since using the homography matrix alone can lead to alignment errors, the images must be realigned. Two identical environment maps are first formed using the computed projective transformation matrix, and the images are then individually warped into the environment map. To avoid distorting the original image, the mesh is created in a Cartesian coordinate system, and the composite surface is constructed as shown in (7), where C is the composite matrix and x_col and y_row are the column and row sizes of the matched images, respectively. From the H obtained in (5) and the concept of homogeneous coordinates, equation (8) follows:

$$\begin{bmatrix} x'' \\ y'' \\ w \end{bmatrix} = H \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (8)$$

The coordinates of the final mapped point are computed as in (9) and (10):

$$x' = \frac{h_{11}x + h_{12}y + h_{13}}{h_{31}x + h_{32}y + h_{33}} \quad (9)$$

$$y' = \frac{h_{21}x + h_{22}y + h_{23}}{h_{31}x + h_{32}y + h_{33}} \quad (10)$$

where x', y' are the coordinates of the mapped pixel position after perspective projection. The overlapping portions can be precisely aligned by warping the two images.
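The warping step can be sketched with cv2.warpPerspective. Assuming H maps coordinates of the first image to the second (the convention of the RANSAC sketch above), its inverse brings the second image into the first image's frame:

```python
import cv2
import numpy as np

def align_pair(img1, img2, H):
    """Align img2 to img1, given H mapping img1 coordinates to img2 coordinates."""
    h, w = img1.shape[:2]
    # warpPerspective renders src points p at matrix*p in the destination,
    # so the inverse of H places img2 pixels at their img1 positions.
    warped2 = cv2.warpPerspective(img2, np.linalg.inv(H), (w, h))
    return img1, warped2
```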
5. Replacement of overlapping areas
For two images that have been aligned, there is an overlap region as well as portions with no valid information. Because the pixel values of specular reflection points (the light-spot region) are high while those of diffuse reflection are low, this step replaces each pixel of the overlapping portion with the per-pixel minimum of the two images:

$$K_{overlap} = \min\left(K_{1\_overlap},\, K_{2\_overlap}\right) \quad (11)$$
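A sketch of the overlap replacement of equation (11); treating all-zero pixels left by the warp as "no data" is an assumption of this sketch, not part of the disclosure:

```python
import numpy as np

def replace_overlap(img1, warped2):
    """Fuse aligned 3-channel images: per-pixel minimum where both have data.

    Pixels that the warp left black (all-zero) are assumed to carry no
    valid data and are taken from img1 unchanged.
    """
    valid2 = (warped2.sum(axis=-1) > 0)[..., None]  # HxWx1 boolean mask
    return np.where(valid2, np.minimum(img1, warped2), img1)
```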
6. Smoothing of overlapping regions
Because of differing illumination and chromaticity, stitching seams may appear at the boundaries of the overlap regions between the images. To make the overlap region transition smoothly and improve the stitching result, a multi-band blending (Multiband blender) algorithm is introduced to gradually reduce the intensity discontinuity. The smoothing operation is completed in the following five steps:
1. Compute the Gaussian pyramid of each input image.
2. Compute the Laplacian pyramid of each input image.
3. Fuse the Laplacian pyramids level by level.
4. Expand the higher Laplacian pyramid levels in turn until they reach the resolution of the source image.
5. Successively superimpose the images obtained in step 4 to produce the output image.
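A compact two-image Laplacian-pyramid blend in the spirit of these five steps; the level count and the mask handling are illustrative assumptions (3-channel images and a 2-D mask with values in [0, 1] are assumed):

```python
import cv2
import numpy as np

def multiband_blend(img1, img2, mask, levels=4):
    """Blend img1 and img2 (mask = 1.0 where img2 dominates) across
    frequency bands, smoothing the seam between overlap regions."""
    f1, f2 = img1.astype(np.float32), img2.astype(np.float32)
    m = mask.astype(np.float32)

    # Step 1: Gaussian pyramids of both images and of the mask.
    gp1, gp2, gpm = [f1], [f2], [m]
    for _ in range(levels):
        gp1.append(cv2.pyrDown(gp1[-1]))
        gp2.append(cv2.pyrDown(gp2[-1]))
        gpm.append(cv2.pyrDown(gpm[-1]))

    # Step 2: Laplacian pyramid = level minus the upsampled next level.
    def laplacian(gp):
        lp = []
        for i in range(levels):
            up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
            lp.append(gp[i] - up)
        lp.append(gp[levels])  # coarsest level kept as-is
        return lp

    lp1, lp2 = laplacian(gp1), laplacian(gp2)

    # Step 3: fuse each band with the mask at the matching resolution.
    blended = [l1 * (1 - gm[..., None]) + l2 * gm[..., None]
               for l1, l2, gm in zip(lp1, lp2, gpm)]

    # Steps 4-5: expand from the coarsest level and superimpose the bands.
    out = blended[-1]
    for lvl in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=(blended[lvl].shape[1], blended[lvl].shape[0]))
        out = out + blended[lvl]
    return np.clip(out, 0, 255).astype(np.uint8)
```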
The invention helps the automatic inspection robot read instrument data under harsh illumination conditions and monitor the operating state of substation instruments. Specular reflection is eliminated without installing additional sensors, and experimental results show that the reflection can be removed effectively and the original information of the image restored. This reduces potential safety hazards and is of great significance for promoting the development of intelligent substations.
The above embodiments are merely examples and do not limit the scope of the invention. These embodiments may be implemented in various other forms, and various omissions, substitutions, and changes may be made without departing from the spirit of the invention.

Claims (10)

1. A method for identifying an inspection robot instrument based on specular reflection removal is characterized by comprising the following steps:
(1) the inspection robot acquires an instrument image to be identified;
(2) detecting the area of the specular reflection region in the instrument image to be identified; if the area is smaller than a threshold value, identifying the reading in the image; otherwise, executing step (3);
(3) controlling the inspection robot to move while continuously shooting images of the target instrument;
(4) detecting the area of the specular reflection region in the target instrument image after the inspection robot stops moving; if the area is smaller than the threshold value, identifying the reading in the image; otherwise, executing step (5);
(5) processing the target instrument images captured during the robot's movement, based on feature matching, to remove the specular reflection light spots;
(6) taking the image with the specular reflection light spots removed as the new instrument image to be identified, and returning to step (1).
2. The inspection robot instrument recognition method based on specular reflection removal according to claim 1, wherein the area of the specular reflection area is obtained by:
(a) converting the RGB image to be detected into the YUV luminance space;
(b) obtaining the luminance saliency value of each pixel in the RGB image;
(c) marking pixels whose luminance saliency value is greater than a threshold as highlight pixels, and the rest as diffuse reflection points;
(d) taking the area occupied by the highlight pixels in the RGB image to be detected as the area of the specular reflection region.
3. The inspection robot instrument recognition method based on specular reflection removal according to claim 2, wherein the step (a) is converted by the following formula:
$$\begin{bmatrix} Y \\ U \\ V \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ -0.147 & -0.289 & 0.436 \\ 0.615 & -0.515 & -0.100 \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

where Y denotes the luminance channel, U and V denote the chrominance differences, and R, G, B denote the three color channels of the RGB image.
4. The inspection robot instrument recognition method based on specular reflection removal according to claim 3, wherein the brightness saliency value of step (b) is obtained by the following formula:
$$S(p) = \sum_{q=1}^{N} \left\| Y_p - Y_q \right\|$$

where S(p) is the luminance saliency value of the pixel p under consideration, Y_p is the gray value of pixel p in the luminance channel, Y_q is the gray value of pixel q in the luminance channel, ||·|| denotes the gray-level distance, and N is the total number of pixels in the RGB image.
5. The inspection robot instrument recognition method based on specular reflection removal according to claim 4, wherein the threshold value in step (c) is determined as
Figure FDA0002204726070000022
Figure FDA0002204726070000023
6. The inspection robot instrument recognition method based on specular reflection removal according to claim 1, wherein the step (5) specifically comprises the following sub-steps:
(51) detecting the corner points in each target instrument image and taking them as feature points;
(52) matching the feature points between pairs of target instrument images;
(53) eliminating mismatched feature points;
(54) warping the two matched images to complete image alignment;
(55) replacing the overlap region of the aligned images with the per-pixel minimum of the two images;
(56) smoothing the overlap region of the image after replacement, to obtain the image with the specular reflection light spots removed.
7. The inspection robot instrument recognition method based on specular reflection removal according to claim 6, wherein in step (51) the Harris algorithm is used for feature point detection.
8. The inspection robot instrument recognition method based on specular reflection removal according to claim 1, wherein in step (52) feature point matching is performed using a normalized cross-correlation algorithm.
9. The inspection robot instrument recognition method based on specular reflection removal according to claim 1, wherein in step (53) mismatched feature points are eliminated using a random sample consensus algorithm.
10. The inspection robot instrument recognition method based on specular reflection removal according to claim 1, wherein in step (56) smoothing is performed using a multi-band blending (Multiband blender) algorithm.
CN201910877355.1A 2019-09-17 2019-09-17 Inspection robot instrument identification method based on specular reflection removal Pending CN110781731A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910877355.1A CN110781731A (en) 2019-09-17 2019-09-17 Inspection robot instrument identification method based on specular reflection removal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910877355.1A CN110781731A (en) 2019-09-17 2019-09-17 Inspection robot instrument identification method based on specular reflection removal

Publications (1)

Publication Number Publication Date
CN110781731A (en) 2020-02-11

Family

ID=69383571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910877355.1A Pending CN110781731A (en) 2019-09-17 2019-09-17 Inspection robot instrument identification method based on specular reflection removal

Country Status (1)

Country Link
CN (1) CN110781731A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762239A (en) * 2021-08-10 2021-12-07 国网河北省电力有限公司保定供电分公司 Meter reflection identification method based on inspection robot

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105741244A (en) * 2016-01-28 2016-07-06 山东鲁能智能技术有限公司 Method for removing shadows and halos under weak light through indoor polling robot
CN107481201A (en) * 2017-08-07 2017-12-15 桂林电子科技大学 A kind of high-intensity region method based on multi-view image characteristic matching
CN107993193A (en) * 2017-09-21 2018-05-04 沈阳工业大学 The tunnel-liner image split-joint method of surf algorithms is equalized and improved based on illumination

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105741244A (en) * 2016-01-28 2016-07-06 山东鲁能智能技术有限公司 Method for removing shadows and halos under weak light through indoor polling robot
CN107481201A (en) * 2017-08-07 2017-12-15 桂林电子科技大学 A kind of high-intensity region method based on multi-view image characteristic matching
CN107993193A (en) * 2017-09-21 2018-05-04 沈阳工业大学 The tunnel-liner image split-joint method of surf algorithms is equalized and improved based on illumination

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WEN PEIZHI, et al.: "Highlight removal method based on multi-view image feature matching" *
ZANG XUE: "Design and research of an autonomous instrument visual recognition system for inspection robots" *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113762239A (en) * 2021-08-10 2021-12-07 国网河北省电力有限公司保定供电分公司 Meter reflection identification method based on inspection robot

Similar Documents

Publication Publication Date Title
CN107014294B (en) Contact net geometric parameter detection method and system based on infrared image
WO2016062159A1 (en) Image matching method and platform for testing of mobile phone applications
CN113592861B (en) Bridge crack detection method based on dynamic threshold
CN105160652A (en) Handset casing testing apparatus and method based on computer vision
CN108846397B (en) Automatic detection method for cable semi-conducting layer based on image processing
CN108369650A (en) The method that candidate point in the image of calibrating pattern is identified as to the possibility characteristic point of the calibrating pattern
CN110400315A (en) A kind of defect inspection method, apparatus and system
CN107092905B (en) Method for positioning instrument to be identified of power inspection robot
CN105095896A (en) Image distortion correction method based on look-up table
CN110619328A (en) Intelligent ship water gauge reading identification method based on image processing and deep learning
CN110675447A (en) People counting method based on combination of visible light camera and thermal imager
CN111932504A (en) Sub-pixel positioning method and device based on edge contour information
CN106228541A (en) The method and device of screen location in vision-based detection
CN109492525B (en) Method for measuring engineering parameters of base station antenna
CN111157532A (en) Visual detection device and method for scratches of mobile phone shell
CN111624203A (en) Relay contact alignment non-contact measurement method based on machine vision
CN111638218A (en) Method for detecting surface defects of coating
CN115639248A (en) System and method for detecting quality of building outer wall
CN110781731A (en) Inspection robot instrument identification method based on specular reflection removal
CN111127409A (en) Train component detection method based on SIFT image registration and cosine similarity
CN107833223B (en) Fruit hyperspectral image segmentation method based on spectral information
CN107977959B (en) Respirator state identification method suitable for electric power robot
CN110263784A (en) The English paper achievement of intelligence identifies input method
JP2006098217A (en) Image inspection apparatus, image inspection method, and image inspection program
CN115019297B (en) Real-time license plate detection and identification method and device based on color augmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination