CN110378995B - Method for three-dimensional space modeling by using projection characteristics

Info

Publication number: CN110378995B
Authority: CN (China)
Prior art keywords: pictures, points, camera, dimensional, modeling
Legal status: Active (granted)
Application number: CN201910455916.9A
Other languages: Chinese (zh)
Other versions: CN110378995A
Inventors: 崔岩 (Cui Yan), 刘强 (Liu Qiang)
Current Assignee: China Germany Zhuhai Artificial Intelligence Institute Co ltd; 4Dage Co Ltd
Original Assignee: China Germany Zhuhai Artificial Intelligence Institute Co ltd; 4Dage Co Ltd
Filing date: 2019-05-29
Publication date: 2023-02-17
Application filed by China Germany Zhuhai Artificial Intelligence Institute Co ltd and 4Dage Co Ltd
Priority to CN201910455916.9A
Publication of CN110378995A: 2019-10-25
Application granted; publication of CN110378995B: 2023-02-17

Classifications

    • G06T5/70
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/90 Image analysis; determination of colour characteristics
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06T2207/10028 Range image; depth image; 3D point clouds

Abstract

The invention discloses a method for three-dimensional space modeling using projection features, and relates to the technical field of three-dimensional imaging and digital modeling. The method comprises the following steps: shooting, at each of two different positions in a space, a group of position pictures consisting of an RGB picture and a sample projection picture; extracting feature points from the position pictures and fusing the feature points within each group to obtain the position feature points of that position; matching the feature points of the different positions with the SIFT algorithm; calculating the initial camera positions at which the different groups of position pictures were shot with a SLAM algorithm; calculating the accurate camera positions and a sparse point cloud with an SFM algorithm; performing three-dimensional structured modeling based on the camera positions and the sparse point cloud; and finally performing three-dimensional scene mapping. By fusing the feature points of the sample projection picture and the RGB picture so that they complement each other, the method solves the problem that few feature points can be extracted in a single-color space, which makes three-dimensional model reconstruction difficult and its results poor.

Description

Method for three-dimensional space modeling by using projection characteristics
[Technical Field]
The invention relates to the technical field of three-dimensional imaging and digital modeling, and in particular to a method for three-dimensional space modeling using projection features in a single-color space.
[Background]
In the process of three-dimensional modeling with a dome camera, the instant localization and mapping (SLAM) technology is involved, so the dome camera (generally binocular or multi-view) continuously shoots a video stream and the amount of data to be processed is very large. This places a heavy burden on the hardware, generates considerable heat, and drains the battery within a few minutes to ten minutes. Secondly, if the dome camera is used directly for spatial localization, the frame pictures of the video stream it shoots must be used for SLAM positioning; this computation is heavy, occupies a large amount of CPU resources, and therefore greatly increases power consumption. In addition, with this positioning method, distortion may occur when the frame pictures of the video stream shot by the dome camera are spliced; and because the data must be transmitted back to the processor for SLAM positioning, the time difference introduced by the transmission causes a delay in the real-time preview.
For this reason, vision-based instant localization and mapping (VSLAM) has been developed on the basis of the instant localization and mapping (SLAM) technology. The advantage of VSLAM is the rich texture information it exploits. For example, two billboards of the same size but with different contents cannot be distinguished by a point-cloud-based laser SLAM algorithm, yet they are easily distinguished by vision; this brings an incomparable advantage in relocalization and scene classification. Moreover, visual information can easily be used to track and predict dynamic objects in a scene, such as pedestrians and vehicles, which is very important for applications in complex dynamic scenes. Thirdly, the visual projection model in theory allows objects at infinity to enter the picture, so that with a reasonable configuration (such as a long-baseline binocular camera) localization and mapping of large scenes can be performed.
That is, traditional three-dimensional modeling methods mainly target scenes with rich and varied color information. A spatial scene with obvious color changes yields many feature points, so a large amount of feature information can be extracted and the reconstructed three-dimensional model is more accurate, i.e. closer to the real scene. However, in a spatial scene with a single color (such as a building site, a white wall, or glass), color transitions are weak or absent, sufficient feature information is difficult to obtain, and the effectiveness of the existing SLAM and VSLAM technologies for three-dimensional model reconstruction is greatly reduced. The root of the problem is that the conventional reconstruction pipeline extracts feature points with the scale-invariant feature transform (SIFT), which finds relatively few feature points in a single-color space, making three-dimensional model reconstruction difficult and its results poor.
[Summary of the Invention]
In order to overcome the defects of the prior art, the invention provides a method for three-dimensional space modeling using projection features in a single-color space, comprising the following steps:
s1, respectively shooting a group of position pictures at two different positions in the same space, wherein the position pictures comprise an RGB picture and a sample projection picture;
s2, respectively extracting characteristic points of the shot RGB pictures and the sample projection pictures, and fusing the characteristic points of the RGB pictures and the sample projection pictures of the same group of position pictures to obtain the characteristic points in the shot scene at the position;
s3, carrying out matching calculation on feature points in the shot scene at different positions and initial camera positions when two groups of position pictures are shot;
s4, calculating an accurate camera position and sparse point cloud, and performing three-dimensional structured modeling;
and S5, carrying out three-dimensional scene mapping.
Further, SIFT descriptors are used to extract the feature points of the RGB picture and the sample projection picture in the position pictures; at the same time, the neighborhood of each feature point is analyzed, and the feature points are controlled according to the neighborhood.
Further, the sample projection picture is a sample pattern projected by a visible light projector.
Further, the visible light projector is a laser projection projector, and the camera is a dome camera.
Further, the visible light projector is turned off when the RGB picture is taken.
Further, the sample pattern contains specific feature points set in advance.
Further, the specific feature points include highlighted points or specially shaped points.
Further, step S3 also includes closed-loop detection according to the camera position.
Further, the closed-loop detection is: comparing the currently calculated camera position with past camera positions to detect the distance difference; if the distance difference between them is detected to be within a certain threshold range, the current camera position is considered to have returned to a past camera position, and closed-loop detection is triggered.
Further, the three-dimensional structured modeling of step S4 specifically comprises the steps of:
S4.1, preliminarily calculating the camera positions to obtain a partial sparse point cloud containing noise points, and filtering out the noise points by distance and by reprojection;
S4.2, marking the sparse point cloud and labelling it correspondingly;
S4.3, taking each sparse point cloud point as a starting point and drawing a virtual straight line to the corresponding dome camera, the spaces traversed by the many virtual straight lines interweaving to form the visible space;
S4.4, carving out the space enclosed by these rays;
S4.5, closing the space based on the graph-theoretic shortest path method.
Compared with the prior art, the invention has the following beneficial effects:
When RGB pictures of scenes such as a pure white wall or glass are processed, few feature points are available, whereas the sample pattern projected by the visible light projector yields a larger number of stable feature points. Although the feature points of the sample projection picture are limited to the area covered by the projected visible-light pattern and cannot cover the whole scene, they effectively supplement the feature points of the RGB image and solve the problem that three-dimensional reconstruction is difficult to perform stably in a monotonous spatial scene.
[Description of the Drawings]
FIG. 1 is a flow chart of the method for three-dimensional space modeling using projection features;
FIG. 2 is a flow chart of the three-dimensional structured modeling.
[Detailed Description]
For a further understanding of the invention, it is described below with reference to embodiments, which are given partly by way of illustration:
the invention provides a method for three-dimensional space modeling by using projection characteristics, which comprises the following steps:
s1, respectively shooting a group of position pictures at two different positions in the same space, wherein the position pictures comprise an RGB picture and a sample projection picture;
s2, respectively extracting characteristic points of the shot RGB pictures and the sample projection pictures, and fusing the characteristic points of the RGB pictures and the sample projection pictures of the same group of position pictures to obtain the characteristic points in the shot scene at the position;
s3, carrying out matching calculation on feature points in the shot scene at different positions and initial camera positions when two groups of position pictures are shot;
s4, calculating an accurate camera position and sparse point cloud, and performing three-dimensional structured modeling;
and S5, carrying out three-dimensional scene mapping.
It should be noted that SIFT descriptors are used to extract the feature points of the RGB picture and the sample projection picture in the position pictures; at the same time, the neighborhood of each feature point is analyzed, and the feature points are controlled according to the neighborhood. A feature point is a point where the curvature or the normal direction changes abruptly. Methods for extracting feature points from point cloud data include curvature-based, eigenvalue-based and neighborhood-information-based boundary edge extraction, each with its own advantages and disadvantages. In the method proposed by Demarsin for extracting closed feature lines from three-dimensional point cloud data, the normals of the points are first computed by principal component analysis, and the points are then clustered according to the normal variation in their local neighborhoods to form different clusters. When judging feature points, the angle between the normals of two points is compared with the maximum acceptable angle threshold, and feature points are judged cluster by cluster. Cluster analysis is a data exploration tool that is effective for unclassified data: the goal is to group objects into natural classes or clusters based on similarity or distance, and clustering techniques tend to be more effective when the object classes are unknown. Such techniques have therefore found wide application in instant localization and mapping (SLAM).
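As an illustration of this extraction-and-fusion step, the following sketch (Python with OpenCV; the file names rgb.jpg and projection.jpg are assumptions for the two pictures of one position group) extracts SIFT feature points from both pictures and merges them into one position feature set:

import cv2
import numpy as np

# Hypothetical inputs: the RGB picture and the sample projection picture
# shot from the same position (file names are assumptions).
rgb = cv2.imread("rgb.jpg", cv2.IMREAD_GRAYSCALE)
proj = cv2.imread("projection.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()

# SIFT keypoints and 128-dimensional descriptors for each picture.
kp_rgb, des_rgb = sift.detectAndCompute(rgb, None)
kp_proj, des_proj = sift.detectAndCompute(proj, None)

# Fuse the two feature sets: the projected pattern contributes stable
# points in texture-poor regions, the RGB picture covers the rest.
kp_pos = list(kp_rgb) + list(kp_proj)
des_pos = np.vstack([des_rgb, des_proj])
print(len(kp_pos), "fused feature points for this position")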
In step S3, a SLAM algorithm is used to calculate the initial camera positions at which the different groups of position pictures were shot, and in step S4 an SFM algorithm is used to calculate the accurate camera positions and the sparse point cloud. Refining the camera positions with the SFM algorithm makes the three-dimensional model generated in step S4 more accurate. A camera coordinate system is established with the camera position as the origin, and the intrinsic matrix of the camera is obtained by an existing camera calibration procedure or algorithm.
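For reference, the intrinsic matrix produced by such a calibration step has the standard pinhole form; the sketch below assembles it from an assumed focal length and principal point (the numeric values are placeholders, not values from the patent):

import numpy as np

# Pinhole intrinsic matrix K from focal lengths (in pixels) and the
# principal point; all four values are illustrative placeholders that a
# calibration routine such as cv2.calibrateCamera would normally supply.
fx, fy = 1200.0, 1200.0
cx, cy = 960.0, 540.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])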
Since the feature points are SIFT features, the matching result often contains many mismatches. To eliminate these errors, existing techniques such as the ratio test are used: a KNN algorithm finds the 2 features that best match each feature, and the match is accepted only if the ratio of the distance of the best match to the distance of the second-best match is below a certain threshold; otherwise it is treated as a mismatch. After the matching points are obtained, an essential matrix can be computed with the findEssentialMat() function added in OpenCV 3.0; three-dimensional reconstruction then restores the spatial coordinates of the matching points from the known information. The findEssentialMat function can be expressed as follows:
Mat findEssentialMat(InputArray points1, InputArray points2,
                     InputArray cameraMatrix, int method = RANSAC,
                     double prob = 0.999, double threshold = 1.0,
                     OutputArray mask = noArray());
the findEssentiaMat function mainly has the function of calculating a basic matrix from corresponding points in two images, wherein points1 represent N two-dimensional pixel points of a first image, and the point coordinates are floating points with single precision or double precision; points2 represent two-dimensional pixel points of a second picture, and are the same in size and type as the points 1; the camera matrix is a camera matrix, and points1 and points2 are assumed to be feature points of cameras with the same camera matrix; method is a method for calculating the characteristic matrix, and a RANSAC algorithm is adopted in the embodiment; prob is expressed as probability and is used for parameters of a characteristic matrix calculation method, and the correct reliability of the matrix is mainly estimated; threshold is used for RANSAC parameters, and represents the maximum distance (in pixels) from a point to an epipolar line, and when the maximum distance exceeds the point, the point is regarded as an abnormal value, and is not used for calculating a final basic matrix, and can be set according to the difference of point positioning accuracy, image resolution and image noise; mask is an array that outputs N elements, where each element is set to 0 for outliers and 1 for other points. The function return is the calculated local feature matrix, which can be further transferred to decomposesesentalmat or recoverPose to recover the relative position between the cameras.
The SFM (structure from motion) algorithm is an off-line algorithm that performs three-dimensional reconstruction from a collection of unordered pictures. Before the core structure-from-motion computation, some preparation is needed to select suitable pictures. First, focal length information is read from the pictures (it is needed later to initialize the bundle adjustment, BA); then image features are extracted with a feature extraction algorithm such as SIFT, and the Euclidean distances between the feature points of two pictures are computed with a kd-tree model to match the feature points, thereby finding image pairs whose number of matched feature points meets the requirement. For each matched image pair, the epipolar geometry is computed, the F matrix is estimated, and the matching pairs are improved by RANSAC optimization. If feature points can in this way be propagated in a chain through the matching pairs and detected throughout, tracks are formed, from which the accurate camera positions and the sparse point cloud are obtained.
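Continuing the same sketch, once the relative pose is known the matched points can be triangulated into a sparse point cloud (R, t, pts1, pts2 and K as above; this illustrates the principle rather than the patent's exact pipeline):

import cv2
import numpy as np

# Projection matrices: first camera at the origin, second at (R, t).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

# Triangulate the inlier correspondences and convert from homogeneous
# coordinates to 3-D points: one row per sparse point.
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
sparse_cloud = (pts4d[:3] / pts4d[3]).T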
In reverse engineering, a point cloud is the set of data points on the outer surface of an object obtained by a measuring instrument. The points obtained with a three-dimensional coordinate measuring machine are few and widely spaced, and such a cloud is called a sparse point cloud; the point clouds obtained with a three-dimensional laser scanner or a photographic scanner are larger and denser, and are called dense point clouds.
Step S3 further includes performing closed-loop detection according to the camera positions. The closed-loop detection is: comparing the currently calculated camera position with past camera positions to detect the distance difference; if the distance difference between them is detected to be within a certain threshold range, the current camera position is considered to have returned to a past camera position, and closed-loop detection is triggered.
It should be further noted that the closed-loop detection of the present invention is based on spatial information rather than on a time series.
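A minimal sketch of this spatial loop-closure test (the threshold is an assumed tuning value; the patent only requires "a certain threshold range"):

import numpy as np

def loop_closed(current_pos, past_positions, threshold=0.3):
    # Return True as soon as the current camera position comes back to
    # within `threshold` (same units as the positions) of a past position.
    for past in past_positions:
        if np.linalg.norm(np.asarray(current_pos) - np.asarray(past)) < threshold:
            return True
    return False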
Further, in a preferred embodiment of the present invention, the sample projection picture is a sample pattern projected by a visible light projector; the visible light projector is a laser projection projector, and the camera is a dome camera. The sample pattern contains preset specific feature points, which include highlighted points or specially shaped points. For example, the pattern projected by the visible light projector may be preset in a shape such as a "star" or a "dragon", and the points in the pattern have a specific distribution; for instance, a highlighted point may have one point on its left, one on its right, or one above it, so the positions of the feature points can be found in the shooting environment by following the specific feature points defined when the pattern was preset. Meanwhile, since the pattern is predetermined, the number of feature points in the pattern is known, so a minimum number of matched feature points can be set: for example, if 100 feature points are preset, it can be required that at least 70 or 80 of them are found during image recognition, to ensure the stability of the capture and to push the search to find as many feature points as possible.
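Because the pattern is predetermined, the acceptance rule reduces to a simple count, as in this sketch (the 100 and 70 figures follow the example above):

# The pattern is preset with a known number of feature points; a capture
# is accepted once enough of them are recognised in the picture.
PATTERN_POINTS = 100   # feature points preset in the sample pattern
MIN_MATCHED = 70       # minimum recognised points for a stable capture

def capture_is_stable(matched_pattern_points):
    return len(matched_pattern_points) >= MIN_MATCHED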
In addition, in a further implementation, different sample patterns may be projected at the same position in space to obtain two or even more groups of sample patterns for that position; the calculation is then performed on the feature extraction results of the different groups, and the results are compared with each other to obtain a more accurate result.
Furthermore, the above method can be applied at different positions in space: for example, multiple groups of sample patterns are preset, a first sample pattern is projected at a first position in space, a second sample pattern at a second position, and so on; the sample patterns can be used alternately or cyclically, and the calculation is then performed on the feature point extraction results obtained from the different sample patterns. Because the feature points are laid out differently in each group of sample patterns, this effectively avoids the calculation errors that repeated use of the same group of sample patterns would cause when the feature points of one group are not distinctive.
To ensure that the RGB picture and the sample projection picture in a group of position pictures shot at the same position are taken at the same azimuth, the camera that shoots the position pictures and the visible light projector are mounted on the same stand, and the visible light projector must be switched off when the RGB picture is taken, to avoid interference. More specifically, the RGB picture and the sample projection picture can be taken by the same camera or by different cameras, but the RGB picture must be taken by a dome camera, with the visible light projector switched off. The sample projection picture can be taken by a dome camera or another camera, since it is mainly used for feature point extraction and analysis; the visible light projector must be switched on when the sample projection picture is taken.
It should be noted that the three-dimensional structured modeling of step S4 specifically comprises the steps of:
S4.1, preliminarily calculating the camera positions to obtain a partial sparse point cloud containing noise points, and filtering out the noise points by distance and by reprojection (a sketch of the reprojection filter follows this list);
S4.2, marking the sparse point cloud and labelling it correspondingly;
S4.3, taking each sparse point cloud point as a starting point and drawing a virtual straight line to the corresponding dome camera, the spaces traversed by the many virtual straight lines interweaving to form the visible space;
S4.4, carving out the space enclosed by these rays;
S4.5, closing the space based on the graph-theoretic shortest path method.
Specifically, the RGB picture is a dome photo, that is, a dome camera is provided to shoot it, and the three-dimensional scene mapping is performed with the dome photos.
In summary, when a scene with few feature points, such as a pure white wall or glass, is processed, the sample pattern projected by the visible light projector allows the sample projection picture to yield a large number of stable feature points. Although the feature points of the sample projection picture are limited to the area covered by the projected visible-light pattern and cannot cover the whole scene, they effectively supplement the feature points of the RGB image shot by the camera and solve the problem that three-dimensional reconstruction is difficult to perform stably in a monotonous spatial scene.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are merely illustrative of the principles of the invention, but various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined by the appended claims and their equivalents.

Claims (7)

1. A method for three-dimensional spatial modeling using projection features, comprising the steps of:
s1, respectively shooting a group of position pictures at two different positions in the same space, wherein the position pictures comprise an RGB picture and a sample projection picture; the RGB picture is an RGB picture of a spatial scene with a single color;
s2, respectively extracting characteristic points of the shot RGB pictures and the sample projection pictures, and fusing the characteristic points of the RGB pictures and the sample projection pictures of the same group of position pictures to obtain the characteristic points in the shot scene at the position; the sample pattern comprises preset specific characteristic points, and the positions of the characteristic points are searched according to the specific characteristic points set by the preset pattern; the specific characteristic points comprise highlight point positions or special shape point positions;
s3, performing matching calculation on the feature points in the shot scene at different positions, and calculating the initial camera positions when two groups of position pictures are shot;
s4, calculating an accurate camera position and sparse point cloud, and performing three-dimensional structured modeling;
s5, carrying out three-dimensional scene mapping;
extracting feature points of the RGB pictures and the sample projection pictures in the position pictures by adopting SIFT descriptors, analyzing the neighborhood of each feature point, and controlling the feature points according to the neighborhood; in the process of judging the characteristic points, the normal included angle of the two points is compared with the acceptable maximum angle threshold value, and the characteristic points are judged by taking one cluster as a unit.
2. The method for modeling three-dimensional space using projection features of claim 1 wherein said sample projection picture is a sample pattern projected by a visible light projector.
3. The method for modeling three-dimensional space using projection features of claim 2 wherein said visible light projector is a laser projection projector and said camera is a dome camera.
4. The method of claim 3 wherein the visible light projector is turned off when the RGB pictures are taken.
5. The method for modeling three-dimensional space using projection features of claim 1, wherein said step S3 further comprises closed-loop detection based on said camera position.
6. The method for modeling three-dimensional space using projection features of claim 5, wherein said closed-loop detection is: comparing the currently calculated camera position with past camera positions to detect the distance difference; and if the distance difference between them is detected to be within a certain threshold range, considering that the current camera position has returned to a past camera position and triggering closed-loop detection.
7. The method for modeling three-dimensional space by using projection features as claimed in claim 6, wherein the three-dimensional structural modeling of step S4 specifically comprises the steps of:
s4.1, preliminarily calculating the position of the camera to obtain a part of sparse point cloud with noise points, and filtering the noise points in a distance and reprojection mode;
s4.2, marking the sparse point cloud, and carrying out corresponding marking;
s4.3, taking each sparse point cloud as a starting point, taking a corresponding dome camera as a virtual straight line, and interweaving spaces through which a plurality of virtual straight lines pass to form a visual space;
s4.4, digging out the space surrounded by the rays;
s4.5, a closed space is made based on the shortest path mode of graph theory.
Application CN201910455916.9A, priority date 2019-05-29, filing date 2019-05-29: Method for three-dimensional space modeling by using projection characteristics. Status: Active. Granted as CN110378995B (en).

Priority Applications (1)

Application Number: CN201910455916.9A; Priority Date: 2019-05-29; Filing Date: 2019-05-29; Title: Method for three-dimensional space modeling by using projection characteristics

Publications (2)

Publication Number | Publication Date
CN110378995A (en) | 2019-10-25
CN110378995B (en) | 2023-02-17

Family

ID=68248865

Family Applications (1)

Application Number: CN201910455916.9A; Title: Method for three-dimensional space modeling by using projection characteristics; Priority Date: 2019-05-29; Filing Date: 2019-05-29; Status: Active, granted as CN110378995B (en)

Country Status (1)

CN: CN110378995B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115244581A * 2020-03-23 2022-10-25 Robert Bosch GmbH (罗伯特·博世有限公司) Three-dimensional environment modeling method and equipment, computer storage medium and industrial robot operating platform
US11709917B2 (en) * 2020-05-05 2023-07-25 Nanjing University Point-set kernel clustering
CN112085794B * 2020-09-11 2022-05-17 中德(珠海)人工智能研究院有限公司 (China Germany Zhuhai Artificial Intelligence Institute Co ltd) Space positioning method and three-dimensional reconstruction method applying same

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103914874B (en) * 2014-04-08 2017-02-01 中山大学 Compact SFM three-dimensional reconstruction method without feature extraction
CN106296643A (en) * 2015-06-10 2017-01-04 西安蒜泥电子科技有限责任公司 Characteristic point replenishment system for multi-view geometry three-dimensional reconstruction
CN105763793B * 2016-02-18 2017-06-16 西安科技大学 Three-dimensional photograph acquisition method and acquisition system
CN108510434B * 2018-02-12 2019-08-20 中德(珠海)人工智能研究院有限公司 Method for three-dimensional modeling with a dome camera
CN108566545A * 2018-03-05 2018-09-21 中德(珠海)人工智能研究院有限公司 Method for three-dimensional modeling of large scenes with a mobile terminal and a dome camera

Also Published As

Publication number Publication date
CN110378995A (en) 2019-10-25

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant