CN113409242A - Intelligent point-cloud monitoring method for a rail transit pantograph-catenary system - Google Patents
- Publication number
- Publication: CN113409242A; Application: CN202110201934.1A
- Authority
- CN
- China
- Prior art keywords
- point cloud
- point
- image
- target
- cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Abstract
The invention discloses an intelligent point-cloud monitoring method for a rail transit pantograph-catenary system, which comprises the following steps. Step one: acquire left and right images of the target object with a binocular camera and segment the target object from the images by preprocessing. Step two: a cloud computer automatically crops the rectangular region containing the target and performs stereo matching on that region to obtain the disparity of the measured object. Step three: the cloud computer combines the disparity with the reprojection matrix obtained by binocular calibration to compute a three-dimensional point cloud of the object at one viewing angle. Step four: the cloud computer performs ICP registration on the point clouds obtained at different angles in steps one to three to obtain a complete three-dimensional point cloud of the monitored pantograph-catenary target. The method is suitable for installation on rail transit vehicles for pantograph-catenary monitoring, reducing cost and improving efficiency.
Description
Technical Field
The invention relates to an intelligent monitoring technology based on three-dimensional point clouds for rail transit pantograph-catenary systems, and belongs to the technical field of rail transit equipment monitoring.
Background
The operational safety of rail transit vehicles hinges on the safety and reliability of the pantograph-catenary system, so a wireless video monitoring system is usually deployed to watch its condition in real time. Conventional monitoring relies on a planar camera and two-dimensional image recognition, which leaves many recognition problems unsolved. Image recognition has therefore moved toward the more advanced field of three-dimensional recognition. A stereoscopic three-dimensional point cloud is a set of data points describing the three-dimensional information of an object; rapidly acquiring a complete point cloud of a target with a complex shape and reconstructing its three-dimensional model is a key technology under development. Methods for acquiring three-dimensional point-cloud data fall into two main categories: contact measurement, acquired through mechanical contact, such as coordinate measuring machine measurement; and non-contact measurement, which requires no mechanical contact, such as measurement based on machine vision techniques. Three-dimensional scanning is an effective means of acquiring point clouds and is widely applied in target recognition, three-dimensional reconstruction, geographic information system (GIS) construction, medical assistance, restoration of ancient cultural relics, and other fields. However, the high-precision scanners used in China, such as laser scanners, are mostly imported; they are accurate but costly, and are ill-suited to the equipment environment of rail transit vehicles.
Therefore, replacing a complex laser three-dimensional monitoring system with a binocular vision system that is simple in structure, easy to install on rail transit vehicles, and able to quickly acquire complete three-dimensional point-cloud monitoring data of a complex target object has become an important problem in pantograph-catenary safety monitoring.
Disclosure of Invention
Purpose: in order to overcome the defects of the prior art, the invention provides an intelligent point-cloud monitoring method for a rail transit pantograph-catenary system.
Technical scheme: in order to solve the above technical problems, the invention adopts the following technical scheme:
An intelligent point-cloud monitoring method for a rail transit pantograph-catenary system comprises the following steps:
Step one: preprocess the left and right images acquired by a binocular camera and segment the monitored pantograph-catenary target object from each image.
Step one comprises:
1a: apply Otsu (maximum between-class variance) threshold binarization to the grayscale left and right images, then overlay each binary result on its original grayscale image to obtain the target images;
1b: the preprocessed target image R retains the target region of the original grayscale image on a pure white background, so the target object is segmented.
Step two: a cloud computer automatically crops the rectangular region containing the target and performs stereo matching on that region to obtain the disparity of the measured object.
Step two comprises:
2a: an algorithm automatically bounds the target object in the preprocessed left and right images R with a minimal rectangle; the left-image minimal bounding rectangle is denoted Rect_L(X_L, Y_L, W_L, H_L) and the right-image one Rect_R(X_R, Y_R, W_R, H_R), where X and Y denote the abscissa and ordinate of the rectangle's upper-left corner in the image, and W and H denote the rectangle's width and height;
2b: during stereo matching, the valid matching area of the image to be matched (the right image) begins at column numDisparities, so numDisparities columns must remain to the left of the target object in that image; the region automatically cropped by the algorithm is denoted Rect_ROI(X_ROI, Y_ROI, W_ROI, H_ROI), and its variables are calculated from Rect_L, Rect_R, and numDisparities;
2c: perform stereo matching on the preprocessed left and right images with the semi-global block matching (SGBM) algorithm to obtain the disparity;
2d: the disparity is obtained with the left image as reference, and the disparity matrix is modified against the left image to eliminate some mismatches at the edges; in the modification rule, L(x, y) denotes the gray value of an arbitrary point (x, y) of the left image and D(x, y) denotes the disparity value of the same point in the disparity matrix.
Step three: the cloud computer combines the disparity with the reprojection matrix obtained by binocular calibration to compute the three-dimensional point cloud of the object at one viewing angle.
Step three comprises:
3a: compute the coordinates of every point of the three-dimensional point cloud from the disparity obtained by stereo matching and the reprojection matrix obtained by binocular calibration. In the reprojection matrix, c_x denotes the x coordinate of the principal point in the left image, c_y denotes its y coordinate, f denotes the focal length computed after stereo calibration, and T_x denotes the baseline distance of the two cameras. Given a two-dimensional homogeneous point (x, y) and its associated disparity d, the point is projected into three dimensions through this matrix.
Step four: the cloud computer performs ICP registration on the point clouds obtained at different angles in steps one to three to obtain a complete three-dimensional point cloud of the monitored target object.
Step four comprises:
4a: read the model point cloud X and the target point cloud P; the numbers of points in the two sets are N_X and N_P respectively;
4b: for each point p_i in the target point cloud P, search the model point cloud X for the nearest corresponding point y_i, forming a new point set Y = {y_i}, where y_i ∈ X;
4c: compute the centroids μ_Y and μ_P of the model point-cloud subset Y and the target point cloud P, and the covariance matrix Σ_PY of the two point clouds;
4d: construct the 4 × 4 symmetric matrix Q(Σ_PY) from the covariance matrix of the point sets Y and P;
4e: compute the eigenvalues of Q(Σ_PY); the eigenvector corresponding to the largest eigenvalue is the optimal rotation expressed as a unit quaternion q_R, from which the optimal translation vector q_T is then calculated;
4f: apply the rigid-body transformation (R, t) obtained in step 4e to the target point cloud P_k, obtaining the transformed point cloud P_{k+1}; compute the mean squared Euclidean distance d_{k+1} between the new target point cloud P_{k+1} and the model point cloud X;
4g: if the change in error between iterations is less than a given threshold τ, i.e. d_k − d_{k+1} < τ, terminate the iteration; otherwise return to step 4b until the condition d_k − d_{k+1} < τ is met.
Beneficial effects: with a binocular vision system and cloud computing equipment, the intelligent point-cloud monitoring method for a rail transit pantograph-catenary system can quickly acquire a complete three-dimensional point cloud of a pantograph-catenary monitoring target with a complex shape. The system is simple in structure, meets the installation and operating requirements of rail transit vehicles, facilitates monitoring of pantograph-catenary operational safety, and effectively replaces a precise and complex imported laser three-dimensional scanning system, reducing cost and improving efficiency.
Drawings
FIG. 1 is a flow chart of the intelligent point-cloud acquisition of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings.
An intelligent point-cloud monitoring method for a rail transit pantograph-catenary system comprises the following steps:
Step one: preprocess the left and right images acquired by a binocular camera and segment the monitored pantograph-catenary target object from each image.
Step one comprises:
1a: apply Otsu (maximum between-class variance) threshold binarization to the grayscale left and right images, then overlay each binary result on its original grayscale image to obtain the target images;
1b: the preprocessed target image R retains the target region of the original grayscale image on a pure white background, so the target object is segmented.
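Step 1a can be sketched in plain NumPy. The Otsu threshold below reimplements what cv2.threshold with the THRESH_OTSU flag computes; the assumption that the pantograph target is darker than the (sky) background, so that pixels at or below the threshold are the target, is mine and not stated in the patent:

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the Otsu (maximum between-class variance) threshold of an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    cum_w = np.cumsum(hist)                   # pixel count at or below t
    cum_m = np.cumsum(hist * np.arange(256))  # intensity mass at or below t
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = cum_w[t] / total
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = cum_m[t] / cum_w[t]                               # mean of class 0
        m1 = (cum_m[-1] - cum_m[t]) / (cum_w[-1] - cum_w[t])   # mean of class 1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def segment_on_white(gray: np.ndarray) -> np.ndarray:
    """Step 1b: keep the target's original gray values, paint the background pure white."""
    t = otsu_threshold(gray)
    target_mask = gray <= t            # assumes the target is the darker class
    out = np.full_like(gray, 255)      # pure white background
    out[target_mask] = gray[target_mask]
    return out
```

In a production pipeline the threshold would come straight from OpenCV, but the pure-NumPy version makes the between-class-variance criterion explicit.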
Step two: the cloud computer automatically crops the rectangular region containing the target and performs stereo matching on that region to obtain the disparity of the measured object.
Step two comprises:
2a: an algorithm automatically bounds the target object in the preprocessed left and right images R with a minimal rectangle; the left-image minimal bounding rectangle is denoted Rect_L(X_L, Y_L, W_L, H_L) and the right-image one Rect_R(X_R, Y_R, W_R, H_R), where X and Y denote the abscissa and ordinate of the rectangle's upper-left corner in the image, and W and H denote the rectangle's width and height;
2b: during stereo matching, the valid matching area of the image to be matched (the right image) begins at column numDisparities, so numDisparities columns must remain to the left of the target object in that image; the region automatically cropped by the algorithm is denoted Rect_ROI(X_ROI, Y_ROI, W_ROI, H_ROI), and its variables are calculated from Rect_L, Rect_R, and numDisparities.
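The patent's formula for Rect_ROI is given only in the original figures, so the computation below is a hypothetical reading of step 2b: take a rectangle covering both bounding boxes and extend it numDisparities columns to the left, so that the matcher's invalid left band does not overlap the target. The function name, argument order, and clipping to the image bounds are my own additions:

```python
def compute_roi(rect_l, rect_r, num_disparities, img_w, img_h):
    """Hypothetical Rect_ROI for step 2b.

    rect_l, rect_r: (x, y, w, h) minimal bounding rectangles (Rect_L, Rect_R).
    Returns (X_ROI, Y_ROI, W_ROI, H_ROI) covering both boxes, shifted left
    by num_disparities columns and clipped to the image."""
    xl, yl, wl, hl = rect_l
    xr, yr, wr, hr = rect_r
    x0 = max(0, min(xl, xr) - num_disparities)   # reserve the invalid left band
    y0 = max(0, min(yl, yr))
    x1 = min(img_w, max(xl + wl, xr + wr))
    y1 = min(img_h, max(yl + hl, yr + hr))
    return (x0, y0, x1 - x0, y1 - y0)
```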
2c: perform stereo matching on the preprocessed left and right images with the semi-global block matching (SGBM) algorithm to obtain the disparity;
2d: the disparity is obtained with the left image as reference, and the disparity matrix is modified against the left image to eliminate some mismatches at the edges; in the modification rule, L(x, y) denotes the gray value of an arbitrary point (x, y) of the left image and D(x, y) denotes the disparity value of the same point in the disparity matrix.
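In OpenCV, step 2c would typically use a matcher created with cv2.StereoSGBM_create. The correction rule of step 2d is likewise given only as a figure; a plausible reading, sketched below under that assumption, is to zero the disparity wherever the preprocessed left image is pure-white background (L(x, y) = 255), since such pixels cannot belong to the segmented target and edge mismatches concentrate there:

```python
import numpy as np

def clean_disparity(disp: np.ndarray, left_gray: np.ndarray) -> np.ndarray:
    """Hypothetical step 2d: the disparity D was computed against the left image L,
    so any pixel that is pure-white background in the preprocessed left image is
    assumed to be a mismatch and its disparity is zeroed."""
    out = disp.copy()            # leave the matcher's output untouched
    out[left_gray == 255] = 0    # background pixels carry no valid disparity
    return out
```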
Step three: the cloud computer combines the disparity with the reprojection matrix obtained by binocular calibration to compute the three-dimensional point cloud of the object at one viewing angle.
Step three comprises:
3a: compute the coordinates of every point of the three-dimensional point cloud from the disparity obtained by stereo matching and the reprojection matrix obtained by binocular calibration. In the reprojection matrix, c_x denotes the x coordinate of the principal point in the left image, c_y denotes its y coordinate, f denotes the focal length computed after stereo calibration, and T_x denotes the baseline distance of the two cameras. Given a two-dimensional homogeneous point (x, y) and its associated disparity d, the point is projected into three dimensions through this matrix.
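The reprojection matrix itself appears only as a figure in the patent; the sketch below assumes the standard OpenCV-style Q matrix (as produced by stereoRectify, with the principal points of the two rectified cameras taken as equal), which is consistent with the symbols c_x, c_y, f, and T_x defined in the text. Note the OpenCV sign convention: T_x is the x-translation between the cameras, i.e. the negative of the baseline for a left-right rig:

```python
import numpy as np

def make_Q(cx: float, cy: float, f: float, Tx: float) -> np.ndarray:
    """Assumed OpenCV-style reprojection matrix Q (equal principal points)."""
    return np.array([
        [1.0, 0.0,  0.0,      -cx],
        [0.0, 1.0,  0.0,      -cy],
        [0.0, 0.0,  0.0,        f],
        [0.0, 0.0, -1.0 / Tx, 0.0],
    ])

def reproject(x: float, y: float, d: float, Q: np.ndarray):
    """[X Y Z W]^T = Q @ [x y d 1]^T, then divide by the homogeneous W,
    giving the 3-D coordinates (X_W, Y_W, Z_W) of the pixel (x, y)."""
    X, Y, Z, W = Q @ np.array([x, y, d, 1.0])
    return X / W, Y / W, Z / W
```

For a whole disparity map, cv2.reprojectImageTo3D performs the same computation per pixel; the scalar version above makes the depth relation Z = f·B/d easy to check by hand.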
Step four: the cloud computer performs ICP registration on the point clouds obtained at different angles in steps one to three to obtain a complete three-dimensional point cloud of the monitored target object.
Step four comprises:
4a: read the model point cloud X and the target point cloud P; the numbers of points in the two sets are N_X and N_P respectively;
4b: for each point p_i in the target point cloud P, search the model point cloud X for the nearest corresponding point y_i, forming a new point set Y = {y_i}, where y_i ∈ X;
4c: compute the centroids μ_Y and μ_P of the model point-cloud subset Y and the target point cloud P, and the covariance matrix Σ_PY of the two point clouds;
4d: construct the 4 × 4 symmetric matrix Q(Σ_PY) from the covariance matrix of the point sets Y and P;
4e: compute the eigenvalues of Q(Σ_PY); the eigenvector corresponding to the largest eigenvalue is the optimal rotation expressed as a unit quaternion q_R, from which the optimal translation vector q_T is then calculated;
4f: apply the rigid-body transformation (R, t) obtained in step 4e to the target point cloud P_k, obtaining the transformed point cloud P_{k+1}; compute the mean squared Euclidean distance d_{k+1} between the new target point cloud P_{k+1} and the model point cloud X;
4g: if the change in error between iterations is less than a given threshold τ, i.e. d_k − d_{k+1} < τ, terminate the iteration; otherwise return to step 4b until the condition d_k − d_{k+1} < τ is met.
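Steps 4a–4g follow the classic Besl–McKay quaternion ICP. The NumPy sketch below implements that scheme; the brute-force nearest-neighbour search and the max_iter cap are simplifications of mine (a real implementation would use a k-d tree):

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a unit quaternion (q0, q1, q2, q3)."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0*q0+q1*q1-q2*q2-q3*q3, 2*(q1*q2-q0*q3),         2*(q1*q3+q0*q2)],
        [2*(q1*q2+q0*q3),         q0*q0-q1*q1+q2*q2-q3*q3, 2*(q2*q3-q0*q1)],
        [2*(q1*q3-q0*q2),         2*(q2*q3+q0*q1),         q0*q0-q1*q1-q2*q2+q3*q3],
    ])

def best_rigid_transform(P, Y):
    """Steps 4c-4e: optimal rotation (unit quaternion) and translation taking P onto Y."""
    mu_p, mu_y = P.mean(axis=0), Y.mean(axis=0)
    S = (P - mu_p).T @ (Y - mu_y) / len(P)        # cross-covariance Sigma_PY
    delta = np.array([S[1, 2] - S[2, 1], S[2, 0] - S[0, 2], S[0, 1] - S[1, 0]])
    Qm = np.zeros((4, 4))                         # the 4x4 symmetric matrix of step 4d
    Qm[0, 0] = np.trace(S)
    Qm[0, 1:] = delta
    Qm[1:, 0] = delta
    Qm[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    w, v = np.linalg.eigh(Qm)                     # symmetric -> eigh
    q = v[:, np.argmax(w)]                        # eigenvector of the largest eigenvalue
    R = quat_to_rot(q)
    t = mu_y - R @ mu_p                           # optimal translation q_T
    return R, t

def icp(P, X, tau=1e-8, max_iter=50):
    """Steps 4a-4g: iterate nearest-neighbour matching and optimal transform."""
    Pk, prev_err = P.copy(), np.inf
    for _ in range(max_iter):
        d2 = ((Pk[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # 4b: brute-force NN
        Yk = X[d2.argmin(axis=1)]
        R, t = best_rigid_transform(Pk, Yk)                    # 4c-4e
        Pk = Pk @ R.T + t                                      # 4f
        err = ((Pk - Yk) ** 2).sum(-1).mean()                  # mean squared distance
        if prev_err - err < tau:                               # 4g: d_k - d_{k+1} < tau
            break
        prev_err = err
    return Pk
```

With well-separated points and a small initial misalignment the loop converges in a couple of iterations, which matches the patent's use of ICP to fuse per-angle clouds that already roughly overlap.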
The above description covers only the preferred embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the invention, and these also fall within the scope of the invention.
Claims (5)
1. An intelligent point-cloud monitoring method for a rail transit pantograph-catenary system, characterized by comprising the following steps: step one: preprocess the left and right images acquired by a binocular camera and segment the monitored pantograph-catenary target object from the images; step two: a cloud computer automatically crops the rectangular region containing the target and performs stereo matching on that region to obtain the disparity of the measured object; step three: the cloud computer combines the disparity with the reprojection matrix obtained by binocular calibration to compute the three-dimensional point cloud of the object at one viewing angle; step four: the cloud computer performs ICP registration on the point clouds obtained at different angles in steps one to three to obtain a complete three-dimensional point cloud of the monitored target object.
2. The intelligent point-cloud monitoring method for a rail transit pantograph-catenary system according to claim 1, characterized in that step one comprises: preprocessing the left and right images acquired by the binocular camera and segmenting the monitored pantograph-catenary target object from the images, wherein: 1a: Otsu (maximum between-class variance) threshold binarization is applied to the grayscale left and right images, and each binary result is then overlaid on its original grayscale image to obtain the target images; 1b: the preprocessed target image R retains the target region of the original grayscale image on a pure white background, so the target object is segmented.
3. The intelligent point-cloud monitoring method for a rail transit pantograph-catenary system according to claim 1, characterized in that step two comprises: a cloud computer automatically crops the rectangular region containing the target and performs stereo matching on that region to obtain the disparity of the measured object, wherein: 2a: an algorithm automatically bounds the target object in the preprocessed left and right images R with a minimal rectangle, the left-image minimal bounding rectangle being denoted Rect_L(X_L, Y_L, W_L, H_L) and the right-image one Rect_R(X_R, Y_R, W_R, H_R), where X and Y denote the abscissa and ordinate of the rectangle's upper-left corner in the image, and W and H denote its width and height; 2b: during stereo matching, the valid matching area of the image to be matched (the right image) begins at column numDisparities, so numDisparities columns must remain to the left of the target object in that image, the region automatically cropped by the algorithm being denoted Rect_ROI(X_ROI, Y_ROI, W_ROI, H_ROI), with its variables calculated from Rect_L, Rect_R, and numDisparities; 2c: stereo matching is performed on the preprocessed left and right images with the semi-global block matching (SGBM) algorithm to obtain the disparity; 2d: the disparity is obtained with the left image as reference, and the disparity matrix is modified against the left image to eliminate some mismatches at the edges, where L(x, y) denotes the gray value of an arbitrary point (x, y) of the left image and D(x, y) denotes the disparity value of the same point in the disparity matrix.
4. The intelligent point-cloud monitoring method for a rail transit pantograph-catenary system according to claim 1, characterized in that step three comprises: the cloud computer combines the disparity with the reprojection matrix obtained by binocular calibration to compute the three-dimensional point cloud of the object at one viewing angle, wherein: 3a: the coordinates of each point of the three-dimensional point cloud are computed from the disparity obtained by stereo matching and the reprojection matrix obtained by binocular calibration; in the reprojection matrix, c_x denotes the x coordinate of the principal point in the left image, c_y denotes its y coordinate, f denotes the focal length computed after stereo calibration, and T_x denotes the baseline distance of the two cameras; given a two-dimensional homogeneous point (x, y) and its associated disparity d, the point is projected into three dimensions through this matrix, and its three-dimensional coordinates are (X_W, Y_W, Z_W).
5. The intelligent point-cloud monitoring method for a rail transit pantograph-catenary system according to claim 1, characterized in that step four comprises: the cloud computer performs ICP (iterative closest point) registration on the point clouds obtained at different angles in steps one to three to obtain a complete three-dimensional point cloud of the monitored target object, wherein: 4a: the model point cloud X and the target point cloud P are read, the numbers of points in the two sets being N_X and N_P respectively; 4b: for each point p_i in the target point cloud P, the model point cloud X is searched for the nearest corresponding point y_i, forming a new point set Y = {y_i}, where y_i ∈ X; 4c: the centroids μ_Y and μ_P of the model point-cloud subset Y and the target point cloud P, and the covariance matrix Σ_PY of the two point clouds, are computed; 4d: a 4 × 4 symmetric matrix Q(Σ_PY) is constructed from the covariance matrix of the point sets Y and P; 4e: the eigenvalues of Q(Σ_PY) are computed, the eigenvector corresponding to the largest eigenvalue being the optimal rotation expressed as a unit quaternion q_R, from which the optimal translation vector q_T is then calculated; 4f: the rigid-body transformation (R, t) obtained in step 4e is applied to the target point cloud P_k to obtain the transformed point cloud P_{k+1}, and the mean squared Euclidean distance d_{k+1} between the new target point cloud P_{k+1} and the model point cloud X is calculated; 4g: if the change in error between iterations is less than a given threshold τ, i.e. d_k − d_{k+1} < τ, the iteration terminates; otherwise the procedure returns to step 4b until the condition d_k − d_{k+1} < τ is met.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110201934.1A CN113409242A (en) | 2021-02-23 | 2021-02-23 | Intelligent monitoring method for point cloud of rail intersection bow net |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113409242A | 2021-09-17
Family
ID=77675891
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110201934.1A Pending CN113409242A (en) | 2021-02-23 | 2021-02-23 | Intelligent monitoring method for point cloud of rail intersection bow net |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113409242A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080123937A1 (en) * | 2006-11-28 | 2008-05-29 | Prefixa Vision Systems | Fast Three Dimensional Recovery Method and Apparatus |
CN101908230A (en) * | 2010-07-23 | 2010-12-08 | 东南大学 | Regional depth edge detection and binocular stereo matching-based three-dimensional reconstruction method |
CN103955920A (en) * | 2014-04-14 | 2014-07-30 | 桂林电子科技大学 | Binocular vision obstacle detection method based on three-dimensional point cloud segmentation |
CN103955939A (en) * | 2014-05-16 | 2014-07-30 | 重庆理工大学 | Boundary feature point registering method for point cloud splicing in three-dimensional scanning system |
CN108765474A (en) * | 2018-04-17 | 2018-11-06 | 天津工业大学 | A kind of efficient method for registering for CT and optical scanner tooth model |
CN111784770A (en) * | 2020-06-28 | 2020-10-16 | 河北工业大学 | Three-dimensional attitude estimation method in disordered grabbing based on SHOT and ICP algorithm |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2602896A (en) * | 2022-03-02 | 2022-07-20 | Hack Partners Ltd | Automatic digital inspection of railway environment |
GB2602896B (en) * | 2022-03-02 | 2023-01-04 | Hack Partners Ltd | Automatic digital inspection of railway environment |
GB2620460A (en) * | 2022-03-02 | 2024-01-10 | Hack Partners Ltd | Automatic digital inspection of railway environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 20210917 |