CN113409242A - Intelligent monitoring method for point cloud of rail intersection bow net - Google Patents


Info

Publication number
CN113409242A
CN113409242A (application CN202110201934.1A)
Authority
CN
China
Prior art keywords
point cloud
point
image
target
cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110201934.1A
Other languages
Chinese (zh)
Inventor
Inventor not disclosed (不公告发明人)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Zheyu Intelligent Technology Co ltd
Original Assignee
Hangzhou Zheyu Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Zheyu Intelligent Technology Co ltd filed Critical Hangzhou Zheyu Intelligent Technology Co ltd
Priority to CN202110201934.1A priority Critical patent/CN113409242A/en
Publication of CN113409242A publication Critical patent/CN113409242A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention discloses an intelligent monitoring method for the point cloud of a rail transit pantograph-catenary system, which comprises the following steps. Step one: acquire left and right images of the target object with a binocular camera, and segment the target object from the images by preprocessing. Step two: a cloud computer automatically crops the rectangular region containing the target and performs stereo matching on that region to obtain the disparity of the measured object. Step three: compute, from the disparity and the reprojection matrix obtained by binocular calibration, the three-dimensional point cloud of the object at a given viewing angle. Step four: the cloud computer performs ICP registration on the point clouds obtained from different angles through steps one to three to obtain a complete three-dimensional point cloud of the pantograph-catenary monitoring target. The method is suitable for installation on rail transit vehicles for pantograph-catenary monitoring, reducing cost and improving efficiency.

Description

Intelligent monitoring method for point cloud of rail intersection bow net
Technical Field
The invention relates to an intelligent monitoring technology for the three-dimensional point cloud of a rail transit pantograph-catenary system, and belongs to the technical field of rail transit equipment monitoring.
Background
The key to the operational safety of rail transit vehicles is the safety and reliability of the equipment in the pantograph-catenary system, which is therefore usually monitored in real time by a wireless video monitoring system. Such systems typically rely on a planar camera and conventional two-dimensional image recognition, which suffers from a number of recognition problems. Image recognition has therefore been developing toward the more advanced field of three-dimensional recognition. A three-dimensional point cloud obtained by stereo vision is a set of data points describing the three-dimensional information of an object; rapidly acquiring the complete three-dimensional point cloud of a target with a complex shape and reconstructing its three-dimensional model is a key technology under development. Methods for acquiring three-dimensional point cloud data fall mainly into two categories: contact measurement, acquired through mechanical contact, such as coordinate measuring machine (CMM) measurement; and non-contact measurement, which requires no mechanical contact, such as measurement based on machine vision. Three-dimensional scanning is an effective means of acquiring point clouds and is widely applied in fields such as target recognition, three-dimensional reconstruction, geographic information system (GIS) construction, medical assistance, and restoration of ancient cultural relics. However, the high-precision scanners used in China, such as laser scanners, mostly depend on imports; they are precise but costly, and are not suited to the equipment environment of rail transit vehicles.
Therefore, how to replace a complex laser three-dimensional monitoring system with a binocular vision system that is simple in structure, easy to install on rail transit vehicles, and able to quickly acquire complete three-dimensional point cloud monitoring data of a complex target object has become an important problem for pantograph-catenary safety monitoring.
Disclosure of Invention
Purpose: to overcome the defects of the prior art, the invention provides an intelligent monitoring method for the point cloud of a rail transit pantograph-catenary system.
Technical scheme: to solve the above technical problems, the invention adopts the following technical scheme:
An intelligent monitoring method for the point cloud of a rail transit pantograph-catenary system comprises the following steps:
Step one: perform computer image preprocessing on the left and right images acquired by a binocular camera, and segment the pantograph-catenary monitoring target object from the images.
Step one comprises the following:
1a: apply threshold-segmentation binarization to the gray-level left and right images using the maximum between-class variance (Otsu) method, then superimpose each result on the corresponding original gray image to obtain the target images;
1b: the preprocessed target image R then consists of the target region of the original gray image on a pure white background, so the target object is segmented.
Step two: use a cloud computer to automatically crop the rectangular region containing the target, and perform stereo matching on that region to obtain the disparity of the measured object.
Step two comprises the following:
2a: an algorithm automatically bounds the target object in the preprocessed left and right images R with a minimum rectangle; the left-image minimum bounding rectangle is denoted Rect_L(X_L, Y_L, W_L, H_L) and the right-image minimum bounding rectangle Rect_R(X_R, Y_R, W_R, H_R), where X and Y denote the image coordinates of the rectangle's upper-left corner and W and H its width and height.
2b: during stereo matching, the effective matching area of the image to be matched (the right image) starts from column numDisparities, so there must be at least numDisparities columns to the left of the target object in that image. The region automatically cropped by the algorithm is denoted Rect_ROI(X_ROI, Y_ROI, W_ROI, H_ROI), and its variables are computed as follows:
[Equation image in the original: formulas for X_ROI, Y_ROI, W_ROI and H_ROI; not reproduced.]
2c: perform stereo matching on the preprocessed left and right images with the semi-global block matching (SGBM) algorithm to obtain the disparity;
2d: the disparity is obtained with the left image as reference; modify the disparity matrix on the basis of the left image to eliminate some mismatches at the edges:
    D(x, y) = 0 wherever L(x, y) = 255; otherwise D(x, y) is unchanged,
where L(x, y) denotes the gray value at any point (x, y) of the left image and D(x, y) denotes the disparity value at any point (x, y) of the disparity matrix.
Step three: use the cloud computer to compute, from the disparity and the reprojection matrix obtained by binocular calibration, the three-dimensional point cloud of the object at a given angle.
Step three comprises the following:
3a: compute the coordinates of each point of the three-dimensional point cloud from the disparity obtained by stereo matching and the reprojection matrix obtained by binocular calibration, where the reprojection matrix is:
    Q = | 1   0    0      -c_x |
        | 0   1    0      -c_y |
        | 0   0    0       f   |
        | 0   0   1/T_x    0   |
in the formula: c_x and c_y denote the x and y coordinates of the principal point in the left image; f denotes the focal length obtained from stereo calibration; T_x denotes the baseline distance between the two cameras. Given a two-dimensional homogeneous point (x, y) and its associated disparity d, the point is projected into three dimensions as
    Q · [x, y, d, 1]^T = [X, Y, Z, W]^T,
and the three-dimensional coordinates of the point are (X_W, Y_W, Z_W) = (X/W, Y/W, Z/W).
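Step 3a amounts to one matrix product per pixel. Below is a minimal NumPy sketch with example calibration values (c_x, c_y, f, T_x are placeholders, not the patent's numbers). Note that OpenCV's stereoRectify writes the last row of Q with -1/T_x under its own sign convention for the translation; here the baseline is taken as a positive distance.

```python
import numpy as np

# Example calibration values (placeholders): principal point (c_x, c_y),
# focal length f in pixels, baseline T_x in meters.
c_x, c_y, f, T_x = 320.0, 240.0, 800.0, 0.12

# Reprojection matrix built from the left-camera calibration.
Q = np.array([
    [1.0, 0.0, 0.0,        -c_x],
    [0.0, 1.0, 0.0,        -c_y],
    [0.0, 0.0, 0.0,         f  ],
    [0.0, 0.0, 1.0 / T_x,   0.0],
])

def reproject(x, y, d):
    """Project pixel (x, y) with disparity d to 3D: [X Y Z W]^T = Q [x y d 1]^T,
    then divide by W to get metric coordinates."""
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return X / W, Y / W, Z / W
```

With these values, depth follows the usual triangulation relation Z = f * T_x / d.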
Step four: the cloud computer performs ICP registration on the point clouds obtained from different angles through steps one to three to obtain a complete three-dimensional point cloud image of the monitored target object.
Step four comprises the following:
4a: read the model point cloud X and the target point cloud P, whose numbers of points are N_X and N_P respectively;
4b: for each point p_i in the target point cloud P, search the model point cloud X for the nearest corresponding point y_i, forming a new point set Y = {y_i}, where y_i ∈ X;
4c: compute the centroids of the corresponding point set Y and of the target point cloud P, and the cross-covariance matrix Σ of the two point clouds;
4d: from the covariance matrix Σ of the point sets Y and P, construct a 4 × 4 symmetric matrix Q(Σ);
4e: compute the eigenvalues and eigenvectors of Q(Σ); the eigenvector corresponding to the maximum eigenvalue is the optimal rotation expressed as a unit quaternion q_R, from which the optimal translation vector q_t is then computed;
4f: apply the rigid transformation (q_R, q_t) obtained in step 4e to the target point cloud P to obtain the transformed point cloud P', and compute the mean squared Euclidean distance d between P' at its new location and the corresponding points of the model point cloud;
4g: if the distance d between the target point cloud P' and the model point cloud is less than a given threshold τ, i.e. d < τ, terminate the iteration; otherwise return to step 4b and repeat until d < τ.
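The ICP loop of steps 4a-4g can be sketched in NumPy as below. This is an illustrative reading using the classic quaternion-based closed form (cross-covariance, 4 × 4 symmetric matrix, maximum-eigenvalue quaternion) with brute-force nearest neighbours; function names are invented and a production system would use a k-d tree for the search.

```python
import numpy as np

def best_rigid_transform(P, Y):
    """Closed-form least-squares rotation R and translation t mapping the
    (n, 3) point set P onto its correspondents Y, via the unit quaternion
    taken from the maximum eigenvector of a 4x4 symmetric matrix."""
    mu_p, mu_y = P.mean(axis=0), Y.mean(axis=0)
    S = (P - mu_p).T @ (Y - mu_y) / len(P)      # 3x3 cross-covariance
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    Q = np.zeros((4, 4))                        # the 4x4 symmetric matrix
    Q[0, 0] = np.trace(S)
    Q[0, 1:] = Q[1:, 0] = delta
    Q[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    w, v = np.linalg.eigh(Q)                    # symmetric eigendecomposition
    q0, q1, q2, q3 = v[:, np.argmax(w)]         # unit quaternion q_R
    R = np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])
    t = mu_y - R @ mu_p                         # optimal translation q_t
    return R, t

def icp(X, P, tau=1e-6, max_iter=50):
    """Iteratively align target cloud P to model cloud X (both (n, 3))."""
    P = P.copy()
    for _ in range(max_iter):
        # 4b: brute-force nearest neighbour in X for every point of P
        idx = np.argmin(((P[:, None, :] - X[None, :, :]) ** 2).sum(-1), axis=1)
        Y = X[idx]
        R, t = best_rigid_transform(P, Y)       # 4c-4e
        P = P @ R.T + t                         # 4f: apply rigid transform
        d = ((P - Y) ** 2).sum(-1).mean()       # mean squared distance
        if d < tau:                             # 4g: convergence test
            break
    return P, d
```

Brute-force matching is O(N_X * N_P) per iteration, which is fine for the small crops produced in step two but would need a spatial index for full scans.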
Beneficial effects: the intelligent monitoring method for the point cloud of the rail transit pantograph-catenary system can quickly acquire a complete three-dimensional point cloud image of a complex-shaped monitoring target using a binocular vision system and cloud computing equipment. The system is simple in structure, meets the installation and operating requirements of rail transit vehicles, facilitates monitoring of pantograph-catenary operational safety, and effectively replaces a precise and complex imported laser three-dimensional scanning system, reducing cost and improving efficiency.
Drawings
FIG. 1 is a flow chart of the intelligent point cloud acquisition of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
An intelligent monitoring method for the point cloud of a rail transit pantograph-catenary system comprises the following steps:
Step one: perform computer image preprocessing on the left and right images acquired by a binocular camera, and segment the pantograph-catenary monitoring target object from the images.
Step one comprises the following:
1a: apply threshold-segmentation binarization to the gray-level left and right images using the maximum between-class variance (Otsu) method, then superimpose each result on the corresponding original gray image to obtain the target images;
1b: the preprocessed target image R then consists of the target region of the original gray image on a pure white background, so the target object is segmented.
Step two: a cloud computer automatically crops the rectangular region containing the target, and performs stereo matching on that region to obtain the disparity of the measured object.
Step two comprises the following:
2a: an algorithm automatically bounds the target object in the preprocessed left and right images R with a minimum rectangle; the left-image minimum bounding rectangle is denoted Rect_L(X_L, Y_L, W_L, H_L) and the right-image minimum bounding rectangle Rect_R(X_R, Y_R, W_R, H_R), where X and Y denote the image coordinates of the rectangle's upper-left corner and W and H its width and height.
2b: during stereo matching, the effective matching area of the image to be matched (the right image) starts from column numDisparities, so there must be at least numDisparities columns to the left of the target object in that image. The region automatically cropped by the algorithm is denoted Rect_ROI(X_ROI, Y_ROI, W_ROI, H_ROI), and its variables are computed as follows:
[Equation image in the original: formulas for X_ROI, Y_ROI, W_ROI and H_ROI; not reproduced.]
2c: perform stereo matching on the preprocessed left and right images with the semi-global block matching (SGBM) algorithm to obtain the disparity;
2d: the disparity is obtained with the left image as reference; modify the disparity matrix on the basis of the left image to eliminate some mismatches at the edges:
    D(x, y) = 0 wherever L(x, y) = 255; otherwise D(x, y) is unchanged,
where L(x, y) denotes the gray value at any point (x, y) of the left image and D(x, y) denotes the disparity value at any point (x, y) of the disparity matrix.
Step three: use the cloud computer to compute, from the disparity and the reprojection matrix obtained by binocular calibration, the three-dimensional point cloud of the object at a given angle.
Step three comprises the following:
3a: compute the coordinates of each point of the three-dimensional point cloud from the disparity obtained by stereo matching and the reprojection matrix obtained by binocular calibration, where the reprojection matrix is:
    Q = | 1   0    0      -c_x |
        | 0   1    0      -c_y |
        | 0   0    0       f   |
        | 0   0   1/T_x    0   |
in the formula: c_x and c_y denote the x and y coordinates of the principal point in the left image; f denotes the focal length obtained from stereo calibration; T_x denotes the baseline distance between the two cameras. Given a two-dimensional homogeneous point (x, y) and its associated disparity d, the point is projected into three dimensions as
    Q · [x, y, d, 1]^T = [X, Y, Z, W]^T,
and the three-dimensional coordinates of the point are (X_W, Y_W, Z_W) = (X/W, Y/W, Z/W).
Step four: the cloud computer performs ICP registration on the point clouds obtained from different angles through steps one to three to obtain a complete three-dimensional point cloud image of the monitored target object.
Step four comprises the following:
4a: read the model point cloud X and the target point cloud P, whose numbers of points are N_X and N_P respectively;
4b: for each point p_i in the target point cloud P, search the model point cloud X for the nearest corresponding point y_i, forming a new point set Y = {y_i}, where y_i ∈ X;
4c: compute the centroids of the corresponding point set Y and of the target point cloud P, and the cross-covariance matrix Σ of the two point clouds;
4d: from the covariance matrix Σ of the point sets Y and P, construct a 4 × 4 symmetric matrix Q(Σ);
4e: compute the eigenvalues and eigenvectors of Q(Σ); the eigenvector corresponding to the maximum eigenvalue is the optimal rotation expressed as a unit quaternion q_R, from which the optimal translation vector q_t is then computed;
4f: apply the rigid transformation (q_R, q_t) obtained in step 4e to the target point cloud P to obtain the transformed point cloud P', and compute the mean squared Euclidean distance d between P' at its new location and the corresponding points of the model point cloud;
4g: if the distance d between the target point cloud P' and the model point cloud is less than a given threshold τ, i.e. d < τ, terminate the iteration; otherwise return to step 4b and repeat until d < τ.
The above description is only of the preferred embodiments of the present invention, and it should be noted that: it will be apparent to those skilled in the art that various modifications and adaptations can be made without departing from the principles of the invention and these are intended to be within the scope of the invention.

Claims (5)

1. An intelligent monitoring method for the point cloud of a rail transit pantograph-catenary system, characterized by comprising the following steps: step one: perform computer image preprocessing on the left and right images acquired by a binocular camera, and segment the pantograph-catenary monitoring target object from the images; step two: use a cloud computer to automatically crop the rectangular region containing the target and perform stereo matching on that region to obtain the disparity of the measured object; step three: use the cloud computer to compute, from the disparity and the reprojection matrix obtained by binocular calibration, the three-dimensional point cloud of the object at a given angle; step four: the cloud computer performs ICP registration on the point clouds obtained from different angles through steps one to three to obtain a complete three-dimensional point cloud image of the monitored target object.
2. The intelligent monitoring method for the pantograph-catenary point cloud according to claim 1, characterized in that step one comprises: preprocessing the left and right images acquired by the binocular camera and segmenting the pantograph-catenary monitoring target object from the images; specifically: 1a: apply threshold-segmentation binarization to the gray-level left and right images using the maximum between-class variance (Otsu) method, then superimpose each result on the corresponding original gray image to obtain the target images; 1b: the preprocessed target image R then consists of the target region of the original gray image on a pure white background, so the target object is segmented.
3. The intelligent monitoring method for the pantograph-catenary point cloud according to claim 1, characterized in that step two comprises: using a cloud computer to automatically crop the rectangular region containing the target and performing stereo matching on that region to obtain the disparity of the measured object; specifically: 2a: an algorithm automatically bounds the target object in the preprocessed left and right images R with a minimum rectangle; the left-image minimum bounding rectangle is denoted Rect_L(X_L, Y_L, W_L, H_L) and the right-image minimum bounding rectangle Rect_R(X_R, Y_R, W_R, H_R), where X and Y denote the image coordinates of the rectangle's upper-left corner and W and H its width and height; 2b: during stereo matching, the effective matching area of the image to be matched (the right image) starts from column numDisparities, so there must be at least numDisparities columns to the left of the target object in that image; the region automatically cropped by the algorithm is denoted Rect_ROI(X_ROI, Y_ROI, W_ROI, H_ROI), and its variables are computed as in the equation image of the original (not reproduced); 2c: perform stereo matching on the preprocessed left and right images with the semi-global block matching (SGBM) algorithm to obtain the disparity; 2d: the disparity is obtained with the left image as reference; modify the disparity matrix on the basis of the left image to eliminate some mismatches at the edges: D(x, y) = 0 wherever L(x, y) = 255, otherwise D(x, y) is unchanged; where L(x, y) denotes the gray value at any point (x, y) of the left image and D(x, y) denotes the disparity value at any point (x, y) of the disparity matrix.
4. The intelligent monitoring method for the pantograph-catenary point cloud according to claim 1, characterized in that step three comprises: using the cloud computer to compute, from the disparity and the reprojection matrix obtained by binocular calibration, the three-dimensional point cloud of the object at a given angle; specifically: 3a: compute the coordinates of each point of the three-dimensional point cloud from the disparity obtained by stereo matching and the reprojection matrix obtained by binocular calibration, where the reprojection matrix is:

    Q = | 1   0    0      -c_x |
        | 0   1    0      -c_y |
        | 0   0    0       f   |
        | 0   0   1/T_x    0   |

in the formula: c_x and c_y denote the x and y coordinates of the principal point in the left image; f denotes the focal length obtained from stereo calibration; T_x denotes the baseline distance between the two cameras; given a two-dimensional homogeneous point (x, y) and its associated disparity d, the point is projected into three dimensions as Q · [x, y, d, 1]^T = [X, Y, Z, W]^T, and the three-dimensional coordinates of the point are (X_W, Y_W, Z_W) = (X/W, Y/W, Z/W).
5. The intelligent monitoring method for the pantograph-catenary point cloud according to claim 1, characterized in that step four comprises: the cloud computer performs ICP (iterative closest point) registration on the point clouds obtained from different angles through steps one to three to obtain a complete three-dimensional point cloud image of the monitored target object; specifically: 4a: read the model point cloud X and the target point cloud P, whose numbers of points are N_X and N_P respectively; 4b: for each point p_i in the target point cloud P, search the model point cloud X for the nearest corresponding point y_i, forming a new point set Y = {y_i}, where y_i ∈ X; 4c: compute the centroids of the corresponding point set Y and of the target point cloud P, and the cross-covariance matrix Σ of the two point clouds; 4d: from the covariance matrix Σ of the point sets Y and P, construct a 4 × 4 symmetric matrix Q(Σ); 4e: compute the eigenvalues and eigenvectors of Q(Σ); the eigenvector corresponding to the maximum eigenvalue is the optimal rotation expressed as a unit quaternion q_R, from which the optimal translation vector q_t is then computed; 4f: apply the rigid transformation (q_R, q_t) obtained in step 4e to the target point cloud P to obtain the transformed point cloud P', and compute the mean squared Euclidean distance d between P' at its new location and the corresponding points of the model point cloud; 4g: if d is less than a given threshold τ, i.e. d < τ, terminate the iteration; otherwise return to step 4b and repeat until d < τ.
CN202110201934.1A 2021-02-23 2021-02-23 Intelligent monitoring method for point cloud of rail intersection bow net Pending CN113409242A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110201934.1A CN113409242A (en) 2021-02-23 2021-02-23 Intelligent monitoring method for point cloud of rail intersection bow net

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110201934.1A CN113409242A (en) 2021-02-23 2021-02-23 Intelligent monitoring method for point cloud of rail intersection bow net

Publications (1)

Publication Number Publication Date
CN113409242A true CN113409242A (en) 2021-09-17

Family

ID=77675891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110201934.1A Pending CN113409242A (en) 2021-02-23 2021-02-23 Intelligent monitoring method for point cloud of rail intersection bow net

Country Status (1)

Country Link
CN (1) CN113409242A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080123937A1 (en) * 2006-11-28 2008-05-29 Prefixa Vision Systems Fast Three Dimensional Recovery Method and Apparatus
CN101908230A (en) * 2010-07-23 2010-12-08 东南大学 Regional depth edge detection and binocular stereo matching-based three-dimensional reconstruction method
CN103955920A (en) * 2014-04-14 2014-07-30 桂林电子科技大学 Binocular vision obstacle detection method based on three-dimensional point cloud segmentation
CN103955939A (en) * 2014-05-16 2014-07-30 重庆理工大学 Boundary feature point registering method for point cloud splicing in three-dimensional scanning system
CN108765474A (en) * 2018-04-17 2018-11-06 天津工业大学 A kind of efficient method for registering for CT and optical scanner tooth model
CN111784770A (en) * 2020-06-28 2020-10-16 河北工业大学 Three-dimensional attitude estimation method in disordered grabbing based on SHOT and ICP algorithm


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2602896A (en) * 2022-03-02 2022-07-20 Hack Partners Ltd Automatic digital inspection of railway environment
GB2602896B (en) * 2022-03-02 2023-01-04 Hack Partners Ltd Automatic digital inspection of railway environment
GB2620460A (en) * 2022-03-02 2024-01-10 Hack Partners Ltd Automatic digital inspection of railway environment

Similar Documents

Publication Publication Date Title
Fan et al. Pothole detection based on disparity transformation and road surface modeling
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN109655019B (en) Cargo volume measurement method based on deep learning and three-dimensional reconstruction
CN109544677B (en) Indoor scene main structure reconstruction method and system based on depth image key frame
CN107767456A (en) A kind of object dimensional method for reconstructing based on RGB D cameras
Kang et al. Automatic targetless camera–lidar calibration by aligning edge with gaussian mixture model
CN108597009B (en) Method for detecting three-dimensional target based on direction angle information
Gao et al. Ground and aerial meta-data integration for localization and reconstruction: A review
CN113409242A (en) Intelligent monitoring method for point cloud of rail intersection bow net
CN112164145B (en) Method for rapidly extracting indoor three-dimensional line segment structure based on point cloud data
CN113884002A (en) Pantograph slide plate upper surface detection system and method based on two-dimensional and three-dimensional information fusion
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN111325828A (en) Three-dimensional face acquisition method and device based on three-eye camera
CN114820474A (en) Train wheel defect detection method based on three-dimensional information
CN112767459A (en) Unmanned aerial vehicle laser point cloud and sequence image registration method based on 2D-3D conversion
Aissaoui et al. Rapid and accurate face depth estimation in passive stereo systems
CN116958434A (en) Multi-view three-dimensional reconstruction method, measurement method and system
CN116843829A (en) Concrete structure crack three-dimensional reconstruction and length quantization method based on binocular video
CN116883590A (en) Three-dimensional face point cloud optimization method, medium and system
CN116402904A (en) Combined calibration method based on laser radar inter-camera and monocular camera
CN110189403B (en) Underwater target three-dimensional reconstruction method based on single-beam forward-looking sonar
CN114766039A (en) Object detection method, object detection device, terminal device, and medium
Cobzas et al. Planar patch extraction with noisy depth data
Moallem et al. Effective parameters in search space reduction used in a fast edge-based stereo matching
David Detection of building facades in urban environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210917