CN112950562A - Fastener detection algorithm based on line structured light - Google Patents
- Publication number
- CN112950562A (application CN202110197421.8A)
- Authority
- CN
- China
- Prior art keywords
- template
- point
- algorithm
- gradient
- structured light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0004—Industrial image inspection
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/20081—Training; Learning
- G06T2207/30236—Traffic on road, railway or crossing
Abstract
The invention discloses a fastener detection algorithm based on line structured light, comprising the following steps: (a) locating the fastener region in the depth map through deep learning target detection; (b) coarsely locating the position of the elastic strip within the fastener region through a template matching algorithm; (c) restoring the depth map to a three-dimensional point cloud and refining the coarse result to obtain an accurately located elastic strip position; (d) detecting the corresponding defects at the accurately located elastic strip position. The method extracts the fastener region from the depth map through deep learning, coarsely locates the elastic strip through template matching, converts the region into a three-dimensional point cloud, refines the position with the iterative closest point (ICP) algorithm, and then performs the subsequent component detection.
Description
Technical Field
The invention relates to the field of track detection, in particular to a fastener detection algorithm based on line structured light.
Background
Railway fasteners are important components of rail systems; they secure the rails to the sleepers and prevent displacement and tilting of the rails. At present, thanks to the rapid development of structured light technology, and especially the improvement in frame rate, its advantages of non-contact, high-precision and high-efficiency dynamic detection have led to its wide application in rail-industry inspection. The fastener detection scheme commonly adopted in the industry acquires a depth map of the rail and fastener area with a structured light camera and realizes the detection function with traditional computer vision techniques.
In the full picture, the fastener region must be located first. Because of the complex operating environment of the train, the sensor-based or template-matching positioning used in conventional schemes suffers from poor robustness or long processing time. The detection process after positioning also commonly suffers from incomplete functionality and poor stability.
In view of this, the present application provides a fastener detection algorithm based on line structured light.
Disclosure of Invention
The invention aims to provide a fastener detection algorithm based on line structured light that is fast, accurate and complete in function, addressing the defects of the prior art.
In order to solve the technical problems, the following technical scheme is adopted:
a fastener detection algorithm based on line structured light is characterized by comprising the following steps:
(a) locating a fastener region in the depth map through deep learning target detection;
(b) coarsely locating the position of the elastic strip within the fastener region through a template matching algorithm;
(c) restoring the depth map to a three-dimensional point cloud, and refining the coarse positioning result to obtain an accurately located elastic strip position;
(d) detecting the corresponding defects at the accurately located elastic strip position.
Further, in the step (a), the deep learning target detection adopts the Faster-RCNN detection method, whose detection steps are as follows:
(1) collecting original pictures shot in various environments on site, labeling the fastener positions, and inputting the pictures and labeling files to Faster-RCNN for training;
(2) Faster-RCNN extracting a multi-dimensional feature map from the picture by convolution, and generating a number of prediction frames on the picture through the RPN region proposal network and the RCNN classification network;
(3) determining the labeling frame and the prediction frames according to the labeling file, automatically evaluating the difference between each prediction frame and the labeling frame, and generating and recording the training result;
(4) after training on a large number of pictures, the neural network predicting fastener positions when new pictures are input under the same rules.
Further, in the step (3) of the step (a), after the labeling frame and the prediction frame are determined, the deviation of the prediction frame needs to be corrected.
Further, in the step (b), the steps of coarsely locating the elastic strip position by the template matching algorithm are as follows:
first, the gradient of the template picture in each direction is calculated through the Sobel operator, and a gradient map is obtained by convolving each gradient kernel with the picture; the gray value of each pixel on the gradient map represents the gradient information of that pixel in the original picture;
next, the gradient maps are combined to obtain the overall gradient magnitude and direction at each point; points whose gradient magnitude is higher than a set value are then selected from the gradient map and uniformly sampled as the finally selected feature points; finally, the gradient direction is taken as the descriptor to obtain a template file.
Further, the template file is made from multi-angle templates to obtain a multi-angle template file, and during matching the best result among the angles is selected according to the matching score.
Further, during matching, the gradient map of the original image is calculated, the gradient feature of each point is diffused, matching is performed through a sliding window, and the result with the highest score is output.
Further, in the step (c), the depth map is restored to a three-dimensional point cloud, and the template matching result on the depth map is mapped into the point cloud map as the initial value of the iterative closest point algorithm; then the nearest neighbor of each point of the template point cloud is found and the optimal matching parameters R and t are calculated; the error function used is as follows:

E(R, t) = \frac{1}{n} \sum_{i=1}^{n} \left\| p_i - (R q_i + t) \right\|^2

where n is the number of nearest-neighbor point pairs, p_i is the point to be matched in the i-th pair of nearest neighbors, q_i is the template point in the i-th pair, R is the rotation matrix and t is the translation matrix.
Further, the steps of using the iterative closest point algorithm are as follows:
(1) according to the initial value of the iterative closest point algorithm, finding for each point of the template point cloud its nearest point in the point cloud to be matched, forming point pairs;
(2) calculating the rotation matrix R and the translation matrix t;
(3) transforming the template point cloud through the calculated rotation matrix R and translation matrix t, and repeating steps (1) and (2) on the transformed template point cloud until E(R, t) is less than a set threshold or the number of iterations reaches a set value, obtaining the accurately located elastic strip position.
Further, in the step (d), detecting the corresponding defects at the elastic strip position comprises: detecting whether the elastic strip is broken or deformed, whether there is foreign matter on its surface, whether the stop block is present, and measuring the gap value.
Further, the gap value is calculated from the height value of the base at the elastic strip position and the height values of the bolt and nut fastened on the elastic strip.
Due to the adoption of the technical scheme, the method has the following beneficial effects:
the invention relates to a fastener detection algorithm based on line structured light, which extracts a fastener area from a depth map through deep learning, performs rough positioning on the fastener area through template matching, converts the fastener area into three-dimensional Point cloud, performs fine positioning by adopting an Iterative closest Point (Iterative closest Point) algorithm, and then performs subsequent component detection.
Compared with similar schemes, the detection algorithm of the invention runs faster, achieves higher precision and is more robust.
Drawings
The invention will be further described with reference to the accompanying drawings in which:
FIG. 1 is a flow chart of a fastener detection algorithm based on line structured light in accordance with the present invention;
FIG. 2 is a depth map for deep learning target detection in the present invention;
FIG. 3 is a diagram of the template matching effect when coarsely locating the elastic strip position by the template matching algorithm in the present invention;
FIG. 4 is a diagram of ICP matching effect of the iterative closest point algorithm of the present invention;
FIG. 5 is a schematic view of the detection effect of the fastener of the present invention;
FIG. 6 is a gradient feature diffusion map for coarse localization by the template matching algorithm of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood, however, that the specific embodiments described here are only intended to illustrate the invention, not to limit its scope. Moreover, descriptions of well-known structures and techniques are omitted below so as not to obscure the concepts of the present invention.
As shown in fig. 1, a fastener detection algorithm based on line structured light includes the following steps:
(a) locating a fastener region in the depth map through deep learning target detection;
(b) coarsely locating the position of the elastic strip within the fastener region through a template matching algorithm;
(c) restoring the depth map to a three-dimensional point cloud, and refining the coarse positioning result to obtain an accurately located elastic strip position;
(d) detecting the corresponding defects at the accurately located elastic strip position.
Further, referring to fig. 2, in step (a): railway detection takes place indoors and outdoors, day and night, and may be affected by illumination, weather and other factors. A traditional positioning algorithm would require manually identifying the differences between the target object and the environment, and its stability is hard to guarantee in a complex environment; therefore a deep learning target detection algorithm is adopted as the fastener positioning algorithm.
Balancing the requirements of precision and speed, the Faster-RCNN detection method is adopted; its detection steps are as follows:
(1) collect original pictures shot in various environments on site, label the fastener positions, and input the pictures and labeling files to Faster-RCNN for training;
(2) Faster-RCNN extracts a multi-dimensional feature map from each picture by convolution and generates a number of prediction frames on the picture through the RPN region proposal network and the RCNN classification network;
(3) determine the labeling frame and the prediction frames according to the labeling file, automatically evaluate the difference between each prediction frame and the labeling frame, and generate and record the training result;
(4) after training on a large number of pictures, the neural network can predict fastener positions when new pictures are input under the same rules.
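The comparison in step (3) between a prediction frame and the labeling frame can be illustrated with the standard intersection-over-union (IoU) measure used by detectors of this family. This is a minimal sketch for illustration only; the [x1, y1, x2, y2] box format and the function name are assumptions, not details from the patent.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # overlap rectangle (width/height clamp to 0 if the boxes do not intersect)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

During training, prediction frames with high IoU against the labeling frame are treated as positives, and the remaining deviation of those frames is then regressed away, which corresponds to the deviation correction described in the text.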
Specifically, in the step (3) of the step (a), after the labeling frame and the prediction frame are determined, the deviation of the prediction frame needs to be corrected.
Specifically, with deep learning, given only a large number of labeled pictures the network can find by itself the differences between the fastener and the detection background, differences that are often difficult to express in program logic. Moreover, deep learning runs on the GPU, so it is extremely fast and occupies no CPU resources.
Specifically, referring to fig. 3, in step (b), the steps of coarsely locating the elastic strip position by the template matching algorithm are as follows:
first, the gradient of the template picture in each direction is calculated through the Sobel operator; a gradient map is obtained by convolving each gradient kernel with the picture, and the gray value of each pixel on the gradient map represents the gradient information of that pixel in the original picture. Specifically, the operator consists of two 3 × 3 matrices, one horizontal and one vertical, and the Sobel formulas are as follows:

G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * A, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * A

where G_x denotes the transverse gradient map, G_y the longitudinal gradient map, and A the original image.
The meaning of the formula is that the pixel value of each point in G_x is obtained by multiplying the pixel values in the 3 × 3 neighborhood centered on that point in A by the corresponding matrix elements and summing.
Combining the gradient maps yields the overall gradient magnitude and direction at each point; points whose gradient magnitude is higher than a set value are then selected from the gradient map and uniformly sampled as the finally selected feature points; finally, the gradient direction is taken as the descriptor to obtain the template file.
In particular, referring to FIG. 6, because the fasteners may be rotated in the field environment, multi-angle templates also need to be made. The template file is therefore made from multi-angle templates, yielding a multi-angle template file, and during matching the best result among the angles is selected according to the matching score.
Specifically, during matching, the gradient map of the original image is calculated and the gradient feature of each point is diffused to a certain extent so that the matching has some fault tolerance; matching is then performed with a sliding window, and the result with the highest score is output.
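The diffusion and sliding-window scoring just described can be sketched as below, in the spirit of gradient-orientation matching (cf. the spreading idea of LINE-2D/LINEMOD). The diffusion radius, the |cos| similarity measure and all names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def diffuse_orientations(ang, mag, thresh, radius=1):
    """Spread each strong gradient's orientation over a small neighborhood
    so that matching tolerates slight misalignment."""
    out = np.full(ang.shape, np.nan)   # NaN marks "no strong gradient here"
    h, w = ang.shape
    ys, xs = np.nonzero(mag > thresh)
    for y, x in zip(ys, xs):
        out[max(0, y - radius):y + radius + 1,
            max(0, x - radius):x + radius + 1] = ang[y, x]
    return out

def best_match(scene_ang, feats, tmpl_shape):
    """Slide the template features over the scene orientation map.

    feats is a list of (dy, dx, angle) feature points; the score at each
    window is the mean |cos| of orientation differences. Returns
    (best_score, (y, x))."""
    H, W = scene_ang.shape
    th, tw = tmpl_shape
    best = (-1.0, (0, 0))
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            s = 0.0
            for dy, dx, a in feats:
                v = scene_ang[y + dy, x + dx]
                if not np.isnan(v):
                    s += abs(np.cos(v - a))
            s /= len(feats)
            if s > best[0]:
                best = (s, (y, x))
    return best
```

A production matcher would vectorize the window loop and precompute response tables per quantized orientation; the brute-force loop here only illustrates the scoring.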
Specifically, referring to fig. 4, in step (c), the depth map is restored to a three-dimensional point cloud, and the template matching result on the depth map is mapped into the point cloud map as the initial value of the iterative closest point (ICP) algorithm; then the nearest neighbor of each point of the template point cloud is found and the optimal matching parameters R and t are calculated. The error function used is as follows:

E(R, t) = \frac{1}{n} \sum_{i=1}^{n} \left\| p_i - (R q_i + t) \right\|^2

where n is the number of nearest-neighbor point pairs, p_i is the point to be matched in the i-th pair of nearest neighbors, q_i is the template point in the i-th pair, R is the rotation matrix and t is the translation matrix.
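Restoring the depth map to a three-dimensional point cloud can be sketched as below. For a line-structured-light profiler, each pixel position maps to a lateral/along-track position and the gray value to height; the per-axis resolutions here are placeholder assumptions that would come from the camera calibration, not values from the patent.

```python
import numpy as np

def depth_to_cloud(depth, x_res=1.0, y_res=1.0, z_res=1.0, invalid=0):
    """Convert a depth map (2-D array) to an N x 3 point cloud.

    x_res/y_res/z_res are the physical sizes of one pixel step and one
    gray level; pixels equal to `invalid` (no laser return) are dropped."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    valid = depth != invalid
    return np.column_stack((xs[valid] * x_res,
                            ys[valid] * y_res,
                            depth[valid] * z_res))
```

The template-matching location on the depth map maps to the same pixel indices here, which is how the coarse result becomes the ICP initial value.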
Further, the steps of using the iterative closest point algorithm are as follows:
(1) according to the initial value of the iterative closest point algorithm, find for each point of the template point cloud its nearest point in the point cloud to be matched, forming point pairs;
(2) calculate the rotation matrix R and the translation matrix t;
(3) transform the template point cloud through the calculated rotation matrix R and translation matrix t, and repeat steps (1) and (2) on the transformed template point cloud until E(R, t) is less than a set threshold or the number of iterations reaches a set value, obtaining the accurately located elastic strip position.
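The iterative-closest-point loop of steps (1)-(3) can be sketched as follows. The closed-form solve for R and t is the standard SVD-based (Kabsch) solution; the iteration limit and convergence threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(template, target, max_iter=50, tol=1e-6):
    """Align an N x 3 template cloud onto a target cloud; returns the
    transformed template."""
    src = template.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(max_iter):
        # (1) nearest neighbors in the cloud to be matched form the point pairs
        d, idx = tree.query(src)
        q = target[idx]
        # (2) closed-form R, t minimizing E(R, t) over the current pairs
        mu_s, mu_q = src.mean(0), q.mean(0)
        H = (src - mu_s).T @ (q - mu_q)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:      # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_q - R @ mu_s
        # (3) transform the template cloud and repeat
        src = src @ R.T + t
        err = np.mean(d ** 2)
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src
```

Because the template-matching result supplies the initial alignment, the nearest-neighbor pairs are largely correct from the first iteration, which is exactly the speed/accuracy benefit described below.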
In the iterative closest point (ICP) algorithm, the initial value is obtained from template matching, so the template point cloud and the point cloud to be matched are already roughly aligned when the iteration starts. This saves matching time and greatly improves matching accuracy.
Referring to fig. 5, in step (d), detecting the corresponding defects at the elastic strip position comprises: detecting whether the elastic strip is broken or deformed, whether there is foreign matter on its surface, whether the stop block is present, and measuring the gap value. Specifically, after the elastic strip has been located in the three-dimensional point cloud, defects such as breakage, deformation and surface foreign matter can be detected by comparison with the template. From the elastic strip position, the position of the gauge block is determined, and the point cloud at the gauge block is compared with the rail-web point cloud nearby to judge whether the stop block is present.
Specifically, the gap value is calculated from the height value of the base at the elastic strip position and the height values of the bolt and nut fastened on the elastic strip. The main steps are as follows:
1. Select suitable point clouds at the base position as the reference for a fitting plane, fit the plane, and calculate the distance between the original points and the fitted plane.
2. Reject abnormal points far from the fitted plane and fit again; if there are too many abnormal points, judge that foreign matter is present on the base.
3. Filter the point cloud near the bolt and screen out the points on the upper surface of the nut, e.g. by their normal vectors.
4. Calculate the distance between the two, which is the measured gap value.
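Steps 1-4 above can be sketched as a least-squares plane fit with one round of outlier rejection, followed by a height difference. The distance threshold and the "too many abnormal points" ratio are illustrative assumptions, not values from the patent.

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c; returns (a, b, c)."""
    A = np.column_stack((points[:, :2], np.ones(len(points))))
    coef, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coef

def plane_distance(points, coef):
    """Perpendicular distance of each point from the plane z = a*x + b*y + c."""
    a, b, c = coef
    return (np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c))
            / np.sqrt(a * a + b * b + 1.0))

def base_plane_with_rejection(points, thresh=0.5, max_outlier_ratio=0.3):
    """Steps 1-2: fit, reject abnormal points, refit; flag base foreign matter."""
    coef = fit_plane(points)
    inlier = plane_distance(points, coef) < thresh
    if 1.0 - inlier.mean() > max_outlier_ratio:
        raise ValueError("too many abnormal points: possible foreign matter on base")
    return fit_plane(points[inlier])

def gap_value(nut_points, base_coef):
    """Step 4: gap = mean height of the nut upper surface above the base plane."""
    return plane_distance(nut_points, base_coef).mean()
```

A RANSAC-style fit could replace the single reject-and-refit round for heavier contamination; the simple version here follows the two-pass procedure described in the text.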
The method first combines deep learning, template matching and the iterative closest point (ICP) algorithm to locate the fastener and the elastic strip accurately and quickly, and then detects fastener defects from the three-dimensional point cloud data.
The above is only a specific embodiment of the present invention, and the technical features of the present invention are not limited thereto. Any simple change, equivalent substitution or modification made on the basis of the present invention to solve substantially the same technical problem and achieve substantially the same technical effect falls within the protection scope of the present invention.
Claims (10)
1. A fastener detection algorithm based on line structured light is characterized by comprising the following steps:
(a) locating a fastener region in the depth map through deep learning target detection;
(b) coarsely locating the position of the elastic strip within the fastener region through a template matching algorithm;
(c) restoring the depth map to a three-dimensional point cloud, and refining the coarse positioning result to obtain an accurately located elastic strip position;
(d) detecting the corresponding defects at the accurately located elastic strip position.
2. The line structured light based fastener detection algorithm of claim 1, wherein: in the step (a), the deep learning target detection adopts the Faster-RCNN detection method, whose detection steps are as follows:
(1) collecting original pictures shot in various environments on site, labeling the fastener positions, and inputting the pictures and labeling files to Faster-RCNN for training;
(2) Faster-RCNN extracting a multi-dimensional feature map from the picture by convolution, and generating a number of prediction frames on the picture through the RPN region proposal network and the RCNN classification network;
(3) determining the labeling frame and the prediction frames according to the labeling file, automatically evaluating the difference between each prediction frame and the labeling frame, and generating and recording the training result;
(4) after training on a large number of pictures, the neural network predicting fastener positions when new pictures are input under the same rules.
3. The line structured light based fastener detection algorithm of claim 2, wherein: in the step (3) of the step (a), after the labeling frame and the prediction frame are determined, the deviation of the prediction frame needs to be corrected.
4. The line structured light based fastener detection algorithm of claim 1, wherein: in the step (b), the steps of coarsely locating the elastic strip position by the template matching algorithm are as follows:
first, calculating the gradient of the template picture in each direction through the Sobel operator, and obtaining a gradient map by convolving each gradient kernel with the picture, the gray value of each pixel on the gradient map representing the gradient information of that pixel in the original picture;
then, combining the gradient maps to obtain the overall gradient magnitude and direction at each point, selecting from the gradient map the points whose gradient magnitude is higher than a set value, and uniformly sampling them as the finally selected feature points; and finally, taking the gradient direction as the descriptor to obtain a template file.
5. The line structured light based fastener detection algorithm of claim 4, wherein: the template file is made from multi-angle templates to obtain a multi-angle template file, and during matching the best result among the angles is selected according to the matching score.
6. The line structured light based fastener detection algorithm of claim 5, wherein: during matching, the gradient map of the original image is calculated, the gradient feature of each point is diffused, matching is performed through a sliding window, and the result with the highest score is output.
7. The line structured light based fastener detection algorithm of claim 1, wherein: in the step (c), the depth map is restored to a three-dimensional point cloud, and the template matching result on the depth map is mapped into the point cloud map as the initial value of the iterative closest point algorithm; then the nearest neighbor of each point of the template point cloud is found and the optimal matching parameters R and t are calculated; the error function used is as follows:

E(R, t) = \frac{1}{n} \sum_{i=1}^{n} \left\| p_i - (R q_i + t) \right\|^2

where n is the number of nearest-neighbor point pairs, p_i is the point to be matched in the i-th pair of nearest neighbors, q_i is the template point in the i-th pair, R is the rotation matrix and t is the translation matrix.
8. The line structured light based fastener detection algorithm of claim 7, wherein: the steps of using the iterative closest point algorithm are as follows:
(1) according to the initial value of the iterative closest point algorithm, finding for each point of the template point cloud its nearest point in the point cloud to be matched, forming point pairs;
(2) calculating the rotation matrix R and the translation matrix t;
(3) transforming the template point cloud through the calculated rotation matrix R and translation matrix t, and repeating steps (1) and (2) on the transformed template point cloud until E(R, t) is less than a set threshold or the number of iterations reaches a set value, obtaining the accurately located elastic strip position.
9. The line structured light based fastener detection algorithm of claim 1, wherein: in the step (d), detecting the corresponding defects at the elastic strip position comprises: detecting whether the elastic strip is broken or deformed, whether there is foreign matter on its surface, whether the stop block is present, and measuring the gap value.
10. The line structured light based fastener detection algorithm of claim 9, wherein: the gap value is calculated from the height value of the base at the elastic strip position and the height values of the bolt and nut fastened on the elastic strip.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110197421.8A CN112950562A (en) | 2021-02-22 | 2021-02-22 | Fastener detection algorithm based on line structured light |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112950562A true CN112950562A (en) | 2021-06-11 |
Family
ID=76245177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110197421.8A Pending CN112950562A (en) | 2021-02-22 | 2021-02-22 | Fastener detection algorithm based on line structured light |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112950562A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109767427A (en) * | 2018-12-25 | 2019-05-17 | 北京交通大学 | The detection method of train rail fastener defect |
US20190188872A1 (en) * | 2017-12-18 | 2019-06-20 | Samsung Electronics Co., Ltd. | Image processing with iterative closest point (icp) technique |
CN110567397A (en) * | 2018-06-05 | 2019-12-13 | 成都精工华耀科技有限公司 | Fastener spring tongue separation detection method |
CN110634121A (en) * | 2018-06-05 | 2019-12-31 | 成都精工华耀科技有限公司 | Track fastener loosening detection method based on texture and depth images |
CN110634123A (en) * | 2018-06-05 | 2019-12-31 | 成都精工华耀科技有限公司 | Track fastener loosening detection method adopting depth image |
CN111080597A (en) * | 2019-12-12 | 2020-04-28 | 西南交通大学 | Track fastener defect identification algorithm based on deep learning |
CN111311560A (en) * | 2020-02-10 | 2020-06-19 | 中国铁道科学研究院集团有限公司基础设施检测研究所 | Method and device for detecting state of steel rail fastener |
CN111476767A (en) * | 2020-04-02 | 2020-07-31 | 南昌工程学院 | High-speed rail fastener defect identification method based on heterogeneous image fusion |
Non-Patent Citations (10)
Title |
---|
ALEJANDRO GARCÍA LORENTE et al.: "Range-based rail gauge and rail fasteners detection using high-resolution 2D/3D images", TRB 2014 Annual Meeting * |
Dai Xianxing et al.: "Research progress on automatic detection of railway fastener defects", Journal of Railway Science and Engineering * |
Dai Xiaohong et al.: "Surface defect detection for metal workpieces based on an improved Faster RCNN", Surface Technology * |
Liu Yuting et al.: "Railway fastener state detection based on Faster R-CNN", Journal of Dalian Minzu University * |
Yuan Jianying et al.: "Accurate registration of multi-view point clouds with an improved ICP algorithm", Transducer and Microsystem Technologies * |
Xie Fengying et al.: "Automatic localization algorithm for railway sleeper fasteners based on mutual information", Chinese Journal of Stereology and Image Analysis * |
Zhao Xingdong et al.: "Principles of Mine 3D Laser Digital Measurement and Engineering Applications", 31 January 2016, Beijing: Metallurgical Industry Press * |
Qian Guangchun: "Rapid detection of missing railway fasteners based on computer vision", China Masters' Theses Full-text Database, Information Science and Technology * |
Chen Jinsheng et al.: "Ballastless track fastener localization method based on edge features", Computer Measurement & Control * |
Gao Ji et al.: "Fundamentals of Artificial Intelligence", 31 August 2002, Beijing: Higher Education Press * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113959362A (en) * | 2021-09-22 | 2022-01-21 | 杭州申昊科技股份有限公司 | Structured light three-dimensional measurement system calibration method and routing inspection data processing method |
CN113959362B (en) * | 2021-09-22 | 2023-09-12 | 杭州申昊科技股份有限公司 | Calibration method and inspection data processing method of structured light three-dimensional measurement system |
CN114049355A (en) * | 2022-01-14 | 2022-02-15 | 杭州灵西机器人智能科技有限公司 | Method, system and device for identifying and labeling scattered workpieces |
CN114049355B (en) * | 2022-01-14 | 2022-04-19 | 杭州灵西机器人智能科技有限公司 | Method, system and device for identifying and labeling scattered workpieces |
Similar Documents
Publication | Title |
---|---|
US11551341B2 (en) | Method and device for automatically drawing structural cracks and precisely measuring widths thereof |
US20210319561A1 (en) | Image segmentation method and system for pavement disease based on deep learning |
CN107633516B (en) | Method and device for identifying road surface deformation diseases |
CN109685858B (en) | Monocular camera online calibration method |
CN112651968B (en) | Wood board deformation and pit detection method based on depth information |
CN116071387A (en) | Sleeper rail production quality detection method based on machine vision |
CN112950562A (en) | Fastener detection algorithm based on line structured light |
CN111489339A (en) | Method for detecting defects of bolt spare nuts of high-speed railway positioner |
CN107705294B (en) | Cross laser image type roadbed surface settlement monitoring method and monitoring system |
CN110866430A (en) | License plate recognition method and device |
CN111485475A (en) | Pavement pit recognition method and device |
CN111539436B (en) | Rail fastener positioning method based on straight template matching |
CN113011283B (en) | Non-contact type rail sleeper relative displacement real-time measurement method based on video |
CN106203238A (en) | Well lid component identification method in mobile mapping system streetscape image |
Liang et al. | Research on concrete cracks recognition based on dual convolutional neural network |
CN101470802A (en) | Object detection apparatus and method thereof |
CN112884753A (en) | Track fastener detection and classification method based on convolutional neural network |
CN110503634B (en) | Visibility measuring method based on automatic image identification |
CN113435452A (en) | Electrical equipment nameplate text detection method based on improved CTPN algorithm |
CN113673011A (en) | Method for intelligently identifying tunnel invasion boundary in operation period based on point cloud data |
EP4250245A1 (en) | System and method for determining a viewpoint of a traffic camera |
CN106056926B (en) | Video vehicle speed detection method based on dynamic virtual coil |
CN109559356B (en) | Expressway sight distance detection method based on machine vision |
CN116664501A (en) | Method for judging grain storage change based on image processing |
CN112857252B (en) | Tunnel image boundary line detection method based on reflectivity intensity |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210611 |