CN104157008A - Depth image matching method based on ASIFT (Affine Scale-invariant Feature Transform) - Google Patents


Info

Publication number
CN104157008A
Authority
CN
China
Prior art keywords
point
depth image
set
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410369761.4A
Other languages
Chinese (zh)
Inventor
李东
田劲东
刘春阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201410369761.4A priority Critical patent/CN104157008A/en
Publication of CN104157008A publication Critical patent/CN104157008A/en
Pending legal-status Critical Current


Abstract

The invention discloses a depth image matching method based on ASIFT (Affine Scale-Invariant Feature Transform). The method comprises the following steps: acquiring the depth image and grayscale image of the measured object in each of two fields of view, and obtaining the correspondence between the depth image and the grayscale image within each field of view; extracting the feature-point pair sets of the two grayscale images with the ASIFT algorithm; obtaining, from the depth–grayscale correspondences in the two fields of view, the depth-image point pair sets corresponding to the grayscale feature-point pair sets, then screening the depth-image point pair sets according to the principle that spatial features are invariant under rigid transformation, so as to obtain the effective depth-image point pair sets; computing the initial rotation matrix and translation matrix by the least-squares method from the effective depth-image point pair sets; and iterating the ICP algorithm with the initial rotation matrix and translation matrix as its starting estimate, thereby achieving fine registration of the depth images in the two fields of view. The method has wide applicability and high matching accuracy, and can be widely applied in three-dimensional digital imaging and optical three-dimensional reconstruction.

Description

A depth image matching method based on ASIFT
Technical field
The present invention relates to the fields of three-dimensional digital imaging and optical three-dimensional reconstruction, and in particular to a depth image matching method based on ASIFT.
Background art
Three-dimensional digital imaging and modeling (3DIM, 3D Digital Imaging and Modeling) has been an active emerging interdisciplinary research field in recent years. It is widely applied in reverse engineering, cultural-relic preservation, medical diagnosis, industrial inspection, virtual reality, and many other areas. Spatial matching of depth images is a key link in 3DIM technology. Because of the limited field of view of the scanning device and self-occlusion of the object, a single scan cannot capture the complete shape of the object. To obtain a complete data model of the measured object, the object must therefore be scanned repeatedly from multiple viewpoints until its complete depth information has been collected.
The depth image acquired in each direction is expressed in the coordinate system of the camera for that direction; the same point on the object therefore has different coordinates in different camera coordinate systems. To assemble the depth images acquired from different directions into a complete representation of the object surface, all depth images must be transformed into a common coordinate system, which requires determining the transformation between the depth images of the different directions. This process is matching. The transformation between depth images is a six-degree-of-freedom rigid-body transformation.
The most widely used depth image matching method at present is the iterative closest point (ICP, Iterative Closest Point) algorithm. It requires an estimate of the initial relative position of the two depth images as the starting value of its iteration; it is sensitive to noise and easily trapped in local optima. When similar structures exist in the scene, the iteration tends to produce wrong matches by falling into a local minimum, causing the whole registration to fail. Coarse matching of depth images supplies the initial position estimate of the two depth images, so the quality of the coarse match largely determines the final ICP result.
Current coarse matching methods for depth images fall into two classes: methods based on geometric features and methods based on texture features. Geometric-feature methods have difficulty identifying features on objects with regular or symmetric shapes, so their applicability is narrow. Texture-feature methods, such as Harris-corner-based feature extraction and feature extraction based on the scale-invariant feature transform (SIFT, Scale Invariant Feature Transform), are not fully affine-invariant: when a sufficiently large affine transformation exists between two images, they cannot detect enough common features in the two images, so matching accuracy degrades badly under large changes of viewpoint and satisfactory results often cannot be obtained.
Summary of the invention
To solve the above technical problems, the object of the invention is to provide an ASIFT-based depth image matching method with wide applicability and high matching accuracy.
The technical solution adopted by the invention is an ASIFT-based depth image matching method comprising:
S1. acquiring the depth image and the grayscale image of the measured object in each of two fields of view, and obtaining the correspondence between the depth image and the grayscale image within each field of view, the two fields of view having different viewing angles but an overlapping region;
S2. extracting the feature-point pair sets of the grayscale images in the two fields of view with the ASIFT algorithm;
S3. obtaining, from the correspondences between the depth images and the grayscale images in the two fields of view, the depth-image point pair sets corresponding to the grayscale feature-point pair sets, then screening the depth-image point pair sets according to the principle that spatial features are invariant under rigid transformation, rejecting invalid depth-image point pairs to obtain the effective depth-image point pair sets;
S4. computing the initial rotation matrix and translation matrix of the two fields of view from the effective depth-image point pair sets by the least-squares method;
S5. iterating the ICP algorithm with the initial rotation matrix and translation matrix as its starting estimate, thereby achieving fine registration of the depth images in the two fields of view.
Further, step S2 is specifically:
Given the grayscale images (I1, I2) of the measured object in the two fields of view, sample the absolute tilt parameter t and the longitude angle parameter φ of the ASIFT affine camera model, then perform SIFT feature extraction and matching to obtain the feature-point pair sets (S1, S2) of the grayscale images in the two fields of view. The tilt parameter is sampled along the geometric series t = 1, √2, 2, 2√2, …; the longitude angle is sampled as φ = k·b/t, where b = 72°, k is an integer, and k·b/t < 180°. Here S1 = {s_1^1, s_2^1, …, s_n^1 | s_i^1 ∈ I1} and S2 = {s_1^2, s_2^2, …, s_n^2 | s_i^2 ∈ I2}, and feature point s_i^1 corresponds to feature point s_i^2.
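The (t, φ) sampling grid described above can be sketched as follows. The text fixes only the series t = 1, √2, 2, … and the rule φ = k·b/t < 180° with b = 72°; the upper tilt limit `max_tilt` and the convention that t = 1 needs a single upright view are assumptions of this sketch:

```python
import math

def asift_samples(max_tilt=4.0, b_deg=72.0):
    """Enumerate the (tilt, longitude) pairs simulated by ASIFT:
    tilts follow the geometric series t = 1, sqrt(2), 2, 2*sqrt(2), ...,
    and for each tilt the longitude is phi = k*b/t for integer k with
    k*b/t < 180 degrees."""
    samples = []
    t = 1.0
    while t <= max_tilt + 1e-9:
        if t == 1.0:
            samples.append((t, 0.0))  # no tilt: one upright view suffices
        else:
            k = 0
            while k * b_deg / t < 180.0:
                samples.append((t, k * b_deg / t))
                k += 1
        t *= math.sqrt(2.0)
    return samples
```

Each returned pair would drive one affine simulation of the input image before SIFT is run on it.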
Further, step S3 comprises:
S31. From the correspondences Γ1 and Γ2 between the depth images (P1, P2) and the grayscale images (I1, I2) in the two fields of view, compute the depth-image point pair sets (V1, V2) corresponding to the grayscale feature-point pair sets (S1, S2), where
V1 = {v_1^1, v_2^1, …, v_n^1 | v_i^1 ∈ P1} = {Γ1⁻¹(s_1^1, s_2^1, …, s_n^1) | s_i^1 ∈ I1} = Γ1⁻¹(S1),
V2 = {v_1^2, v_2^2, …, v_n^2 | v_i^2 ∈ P2} = Γ2⁻¹(S2);
S32. Screen the point pair sets (V1, V2) according to the principle that spatial features are invariant under rigid transformation, rejecting invalid point pairs to obtain the effective depth-image point pair sets (V_T^1, V_T^2).
Further, step S32 comprises:
S321. For each point pair (v_i^1, v_i^2) in (V1, V2), compute the summed distance D1(i) between v_i^1 and the other points of its set and the summed distance D2(i) between v_i^2 and the other points of its set:
D1(i) = Σ_{j=1, j≠i}^{n} ΔV1(i, j) = Σ_{j=1, j≠i}^{n} ||v_i^1 − v_j^1||,
D2(i) = Σ_{j=1, j≠i}^{n} ΔV2(i, j) = Σ_{j=1, j≠i}^{n} ||v_i^2 − v_j^2||;
S322. Compute the distance difference Dis = |D1(i) − D2(i)| and judge whether it exceeds a preset threshold. If so, mark the pair (v_i^1, v_i^2) as invalid and remove it from (V1, V2); otherwise retain it. The retained pairs form the effective depth-image point pair sets (V_T^1, V_T^2).
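The S321–S322 screening can be sketched as below, assuming the two point sets arrive as matched (n, 3) arrays; the default threshold 1.0 is the value used later in the embodiment:

```python
import numpy as np

def screen_pairs(V1, V2, threshold=1.0):
    """Reject point pairs by the rigid-invariance test of S321/S322:
    a rigid transform preserves pairwise distances, so the summed
    distance D(i) from point i to all other points of its set should
    agree between the two views.  V1, V2: matched (n, 3) point sets."""
    V1, V2 = np.asarray(V1, float), np.asarray(V2, float)
    # D1(i) = sum_j ||v_i^1 - v_j^1||, and likewise D2(i)
    D1 = np.linalg.norm(V1[:, None] - V1[None, :], axis=2).sum(axis=1)
    D2 = np.linalg.norm(V2[:, None] - V2[None, :], axis=2).sum(axis=1)
    keep = np.abs(D1 - D2) <= threshold   # Dis = |D1(i) - D2(i)|
    return V1[keep], V2[keep]
```

A pure translation between the views leaves every pair intact, while a non-rigid discrepancy (e.g. a scale change) drives Dis past the threshold.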
Further, step S4 is specifically:
From the effective depth-image point pair sets (V_T^1, V_T^2), compute the initial rotation matrix R and translation matrix T of the two fields of view by the least-squares method, the solution formula being
min f(R, T) = Σ_{i=1}^{m} ||v_{T,i}^1 − R·v_{T,i}^2 − T||²,
where m is the number of point pairs in (V_T^1, V_T^2), and v_{T,i}^1, v_{T,i}^2 are the corresponding elements of V_T^1 and V_T^2.
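This least-squares problem has a well-known closed-form solution via the SVD (the Kabsch construction). The patent does not say how it solves the minimisation, so the following is one standard way, not necessarily the inventors':

```python
import numpy as np

def rigid_lsq(V1, V2):
    """Least-squares rigid transform (R, T) minimising
    sum_i ||v1_i - R v2_i - T||^2 over matched (n, 3) point sets,
    using the standard SVD-based (Kabsch) construction."""
    V1, V2 = np.asarray(V1, float), np.asarray(V2, float)
    c1, c2 = V1.mean(axis=0), V2.mean(axis=0)
    H = (V2 - c2).T @ (V1 - c1)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = c1 - R @ c2
    return R, T
```

Given exact correspondences the construction recovers the true rotation and translation; with noisy pairs it returns the least-squares optimum, which is what the ICP stage then refines.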
The beneficial effects of the invention are: the ASIFT algorithm is used to extract the grayscale feature-point pair sets, from which the effective depth-image point pair sets and the starting estimate of the ICP algorithm are obtained, achieving depth image matching. ASIFT is fully affine-invariant and robust to illumination and scale changes, so correctly matched feature points can still be found when the two fields of view differ considerably, giving wide applicability. Invalid pairs are rejected from the depth-image point pair sets according to the invariance of spatial features under rigid transformation, which improves noise robustness and the accuracy of the ICP starting estimate, accelerates ICP convergence, and yields high matching accuracy.
Brief description of the drawings
The invention is described further below with reference to the drawings and embodiments.
Fig. 1 is the overall flowchart of the ASIFT-based depth image matching method of the invention;
Fig. 2 is the flowchart of step S3 of the invention;
Fig. 3 is the flowchart of step S32 of the invention.
Detailed description of the embodiments
Referring to Fig. 1, an ASIFT-based depth image matching method comprises:
S1. acquiring the depth image and the grayscale image of the measured object in each of two fields of view, and obtaining the correspondence between the depth image and the grayscale image within each field of view, the two fields of view having different viewing angles but an overlapping region;
S2. extracting the feature-point pair sets of the grayscale images in the two fields of view with the ASIFT algorithm;
S3. obtaining, from the correspondences between the depth images and the grayscale images in the two fields of view, the depth-image point pair sets corresponding to the grayscale feature-point pair sets, then screening the depth-image point pair sets according to the principle that spatial features are invariant under rigid transformation, rejecting invalid depth-image point pairs to obtain the effective depth-image point pair sets;
S4. computing the initial rotation matrix and translation matrix of the two fields of view from the effective depth-image point pair sets by the least-squares method;
S5. iterating the ICP algorithm with the initial rotation matrix and translation matrix as its starting estimate, thereby achieving fine registration of the depth images in the two fields of view.
Within one field of view, the relation between the depth image (P1, P2) and the grayscale image (I1, I2) is a mapping from three dimensions to two; the correspondence between depth image and grayscale image is obtained by system calibration: Γ1(P1) = I1, Γ2(P2) = I2.
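As an illustration of what a calibrated Γ can look like, a pinhole projection maps a 3-D point in the camera frame to a pixel. The intrinsic matrix `K` below is a made-up example; the patent's Γ is whatever the calibration of the actual imaging system yields:

```python
import numpy as np

def project(K, X):
    """Pinhole sketch of Gamma: map 3-D points X (n, 3) in the camera
    frame to pixel coordinates (u, v) using the intrinsic matrix K."""
    X = np.asarray(X, float)
    uvw = X @ K.T                      # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240)
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])
```

Γ⁻¹, used in step S31, is then a per-pixel lookup: a matched grayscale pixel indexes the 3-D point the depth image stores for it.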
The ICP algorithm starts iterating from an initial relative-position transformation. In each iteration it finds corresponding point pairs between the two depth images and obtains a new relative-position transformation between them by minimizing the error function over the corresponding points, then starts the next iteration from the new transformation, until convergence.
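The loop just described can be sketched as follows, with brute-force nearest-neighbour matching and the SVD-based least-squares solve inlined. This is an illustration of the ICP refinement step, not the patent's implementation; the iteration count and the lack of a convergence test are simplifications:

```python
import numpy as np

def icp(P, Q, R0, T0, iters=20):
    """Minimal ICP sketch: starting from the coarse estimate (R0, T0),
    repeatedly match each transformed point of Q to its nearest
    neighbour in P and re-solve the least-squares rigid transform."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    R, T = np.asarray(R0, float), np.asarray(T0, float)
    for _ in range(iters):
        Qt = Q @ R.T + T                                   # current pose
        # brute-force nearest neighbour of each Qt point in P
        nn = np.argmin(((Qt[:, None] - P[None, :]) ** 2).sum(axis=2), axis=1)
        M = P[nn]                                          # matched targets
        c1, c2 = M.mean(axis=0), Q.mean(axis=0)
        H = (Q - c2).T @ (M - c1)                          # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))             # reflection guard
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        T = c1 - R @ c2
    return R, T
```

With a starting estimate close to the truth, as the coarse ASIFT-based match is meant to provide, the nearest-neighbour correspondences are correct from the first iteration and the loop converges quickly.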
Because the ASIFT algorithm is fully affine-invariant and at the same time highly robust to illumination and scale changes, a large number of correctly matched feature points can be found even when the two fields of view differ considerably. The initial rotation matrix R and translation matrix T computed from them are therefore accurate, which benefits the subsequent fine ICP matching.
As a further preferred embodiment, step S2 is specifically:
Given the grayscale images (I1, I2) of the measured object in the two fields of view, sample the absolute tilt parameter t and the longitude angle parameter φ of the ASIFT affine camera model, then perform SIFT feature extraction and matching to obtain the feature-point pair sets (S1, S2) of the grayscale images in the two fields of view. The tilt parameter is sampled along the geometric series t = 1, √2, 2, 2√2, …; the longitude angle is sampled as φ = k·b/t, where b = 72°, k is an integer, and k·b/t < 180°. Here S1 = {s_1^1, s_2^1, …, s_n^1 | s_i^1 ∈ I1} and S2 = {s_1^2, s_2^2, …, s_n^2 | s_i^2 ∈ I2}, and feature point s_i^1 corresponds to feature point s_i^2.
Referring to Fig. 2, as a further preferred embodiment, step S3 comprises:
S31. From the correspondences Γ1 and Γ2 between the depth images (P1, P2) and the grayscale images (I1, I2) in the two fields of view, compute the depth-image point pair sets (V1, V2) corresponding to the grayscale feature-point pair sets (S1, S2), where
V1 = {v_1^1, v_2^1, …, v_n^1 | v_i^1 ∈ P1} = {Γ1⁻¹(s_1^1, s_2^1, …, s_n^1) | s_i^1 ∈ I1} = Γ1⁻¹(S1),
V2 = {v_1^2, v_2^2, …, v_n^2 | v_i^2 ∈ P2} = Γ2⁻¹(S2);
S32. Screen the point pair sets (V1, V2) according to the principle that spatial features are invariant under rigid transformation, rejecting invalid point pairs to obtain the effective depth-image point pair sets (V_T^1, V_T^2).
Referring to Fig. 3, as a further preferred embodiment, step S32 comprises:
S321. For each point pair (v_i^1, v_i^2) in (V1, V2), compute the summed distance D1(i) between v_i^1 and the other points of its set and the summed distance D2(i) between v_i^2 and the other points of its set:
D1(i) = Σ_{j=1, j≠i}^{n} ΔV1(i, j) = Σ_{j=1, j≠i}^{n} ||v_i^1 − v_j^1||,
D2(i) = Σ_{j=1, j≠i}^{n} ΔV2(i, j) = Σ_{j=1, j≠i}^{n} ||v_i^2 − v_j^2||;
S322. Compute the distance difference Dis = |D1(i) − D2(i)| and judge whether it exceeds a preset threshold. If so, mark the pair (v_i^1, v_i^2) as invalid and remove it from (V1, V2); otherwise retain it. The retained pairs form the effective depth-image point pair sets (V_T^1, V_T^2).
As a further preferred embodiment, step S4 is specifically:
From the effective depth-image point pair sets (V_T^1, V_T^2), compute the initial rotation matrix R and translation matrix T of the two fields of view by the least-squares method, the solution formula being
min f(R, T) = Σ_{i=1}^{m} ||v_{T,i}^1 − R·v_{T,i}^2 − T||²,
where m is the number of point pairs in (V_T^1, V_T^2), and v_{T,i}^1, v_{T,i}^2 are the corresponding elements of V_T^1 and V_T^2.
The invention is described in further detail below with a specific embodiment.
The procedure for three-dimensional reconstruction with a digital imaging system using the method of the invention is:
(1) Acquire the depth images, grayscale images, and correspondences of the three-dimensional digital imaging system.
First, the measured object is photographed from two different viewing angles with the three-dimensional digital imaging system, yielding the grayscale images of two different fields of view. The corresponding depth images are then obtained from these grayscale images. Finally, the correspondence between the depth images and the grayscale images of this system is obtained by camera calibration.
(2) Feature-point pair set extraction.
Using the grayscale images of the two fields of view, the ASIFT algorithm performs feature extraction and matching on the overlapping region of the two fields of view. The absolute tilt parameter t and longitude angle parameter φ of the ASIFT affine camera model are sampled as follows: the Gaussian smoothing parameter is δ = 1.6; the tilt parameter is sampled along t = 1, √2, 2, 2√2, …; the longitude angle is sampled as φ = k·b/t, with k·b/t < 180° and b = 72°. This yields the feature-point pair sets of the grayscale images.
(3) Depth-image point pair set acquisition and screening.
From the correspondence between the depth images and the grayscale images, the depth-image point pair sets corresponding to the grayscale feature-point pair sets are obtained and then screened according to the invariance of spatial features under rigid transformation. In this embodiment, the threshold on the distance difference Dis = |D1(i) − D2(i)| is set to 1.0. Rejecting invalid point pairs by this threshold yields the effective depth-image point pair sets and improves the accuracy of the ICP starting estimate.
(4) Compute the initial rotation and translation matrices of the two fields of view.
The initial rotation matrix R and translation matrix T of the two fields of view are computed from the effective depth-image point pair sets by the least-squares method, solving:
min f(R, T) = Σ_{i=1}^{m} ||v_{T,i}^1 − R·v_{T,i}^2 − T||².
(5) ICP matching.
With the obtained initial rotation matrix R and translation matrix T as the starting estimate of the ICP algorithm, fine ICP matching is performed. The rigid-body rotation and translation between the two fields of view unify the two coordinate systems, producing the registered pair of depth images.
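Unifying the two coordinate systems with the recovered (R, T) is one affine application per point; concatenating the transformed second view with the first gives the registered cloud. The helper names below are illustrative, not from the patent:

```python
import numpy as np

def to_common_frame(V2, R, T):
    """Map the second view's 3-D points into the first view's frame
    using the recovered rigid transform: x1 = R x2 + T."""
    return np.asarray(V2, float) @ np.asarray(R, float).T + np.asarray(T, float)

def merge_views(V1, V2, R, T):
    """Concatenate the first view with the transformed second view."""
    return np.vstack([np.asarray(V1, float), to_common_frame(V2, R, T)])
```

Repeating this for every pair of adjacent views, as step (6) describes, accumulates the full model in one coordinate system.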
(6) Steps (1)–(5) are repeated to obtain the global information of the measured object and complete the three-dimensional reconstruction of the whole object.
Practical tests show that the method also achieves accurate depth image matching for objects with weak image texture, such as a plaster statue, and thus yields complete three-dimensional reconstruction data. The invention achieves marker-free depth image matching and has the advantages of wide applicability, strong noise robustness, and fast convergence.
The above describes preferred implementations of the invention, but the invention is not limited to these embodiments. Those of ordinary skill in the art may make equivalent variations or substitutions without departing from the spirit of the invention, and such equivalent variations or substitutions are all included within the scope defined by the claims of this application.

Claims (5)

1. An ASIFT-based depth image matching method, characterized by comprising:
S1. acquiring the depth image and the grayscale image of the measured object in each of two fields of view, and obtaining the correspondence between the depth image and the grayscale image within each field of view, the two fields of view having different viewing angles but an overlapping region;
S2. extracting the feature-point pair sets of the grayscale images in the two fields of view with the ASIFT algorithm;
S3. obtaining, from the correspondences between the depth images and the grayscale images in the two fields of view, the depth-image point pair sets corresponding to the grayscale feature-point pair sets, then screening the depth-image point pair sets according to the principle that spatial features are invariant under rigid transformation, rejecting invalid depth-image point pairs to obtain the effective depth-image point pair sets;
S4. computing the initial rotation matrix and translation matrix of the two fields of view from the effective depth-image point pair sets by the least-squares method;
S5. iterating the ICP algorithm with the initial rotation matrix and translation matrix as its starting estimate, thereby achieving fine registration of the depth images in the two fields of view.
2. The ASIFT-based depth image matching method according to claim 1, characterized in that step S2 is specifically:
given the grayscale images (I1, I2) of the measured object in the two fields of view, sampling the absolute tilt parameter t and the longitude angle parameter φ of the ASIFT affine camera model, then performing SIFT feature extraction and matching to obtain the feature-point pair sets (S1, S2) of the grayscale images in the two fields of view, wherein the tilt parameter is sampled along the geometric series t = 1, √2, 2, 2√2, …, the longitude angle is sampled as φ = k·b/t with k·b/t < 180°, b = 72°, and k an integer, S1 = {s_1^1, s_2^1, …, s_n^1 | s_i^1 ∈ I1}, S2 = {s_1^2, s_2^2, …, s_n^2 | s_i^2 ∈ I2}, and feature point s_i^1 corresponds to feature point s_i^2.
3. The ASIFT-based depth image matching method according to claim 1, characterized in that step S3 comprises:
S31. from the correspondences Γ1 and Γ2 between the depth images (P1, P2) and the grayscale images (I1, I2) in the two fields of view, computing the depth-image point pair sets (V1, V2) corresponding to the grayscale feature-point pair sets (S1, S2), wherein
V1 = {v_1^1, v_2^1, …, v_n^1 | v_i^1 ∈ P1} = {Γ1⁻¹(s_1^1, s_2^1, …, s_n^1) | s_i^1 ∈ I1} = Γ1⁻¹(S1),
V2 = {v_1^2, v_2^2, …, v_n^2 | v_i^2 ∈ P2} = Γ2⁻¹(S2);
S32. screening the point pair sets (V1, V2) according to the principle that spatial features are invariant under rigid transformation, rejecting invalid point pairs to obtain the effective depth-image point pair sets (V_T^1, V_T^2).
4. The ASIFT-based depth image matching method according to claim 3, characterized in that step S32 comprises:
S321. for each point pair (v_i^1, v_i^2) in (V1, V2), computing the summed distance D1(i) between v_i^1 and the other points of its set and the summed distance D2(i) between v_i^2 and the other points of its set, where
D1(i) = Σ_{j=1, j≠i}^{n} ΔV1(i, j) = Σ_{j=1, j≠i}^{n} ||v_i^1 − v_j^1||,
D2(i) = Σ_{j=1, j≠i}^{n} ΔV2(i, j) = Σ_{j=1, j≠i}^{n} ||v_i^2 − v_j^2||;
S322. computing the distance difference Dis = |D1(i) − D2(i)| and judging whether it exceeds a preset threshold; if so, marking the pair (v_i^1, v_i^2) as invalid and removing it from (V1, V2); otherwise retaining it; the retained pairs form the effective depth-image point pair sets (V_T^1, V_T^2).
5. The ASIFT-based depth image matching method according to claim 3, characterized in that step S4 is specifically:
from the effective depth-image point pair sets (V_T^1, V_T^2), computing the initial rotation matrix R and translation matrix T of the two fields of view by the least-squares method, the solution formula being
min f(R, T) = Σ_{i=1}^{m} ||v_{T,i}^1 − R·v_{T,i}^2 − T||²,
where m is the number of point pairs in (V_T^1, V_T^2), and v_{T,i}^1, v_{T,i}^2 are the corresponding elements of V_T^1 and V_T^2.
CN201410369761.4A 2014-07-30 2014-07-30 Depth image matching method based on ASIFT (Affine Scale-invariant Feature Transform) Pending CN104157008A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410369761.4A CN104157008A (en) 2014-07-30 2014-07-30 Depth image matching method based on ASIFT (Affine Scale-invariant Feature Transform)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410369761.4A CN104157008A (en) 2014-07-30 2014-07-30 Depth image matching method based on ASIFT (Affine Scale-invariant Feature Transform)

Publications (1)

Publication Number Publication Date
CN104157008A true CN104157008A (en) 2014-11-19

Family

ID=51882496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410369761.4A Pending CN104157008A (en) 2014-07-30 2014-07-30 Depth image matching method based on ASIFT (Affine Scale-invariant Feature Transform)

Country Status (1)

Country Link
CN (1) CN104157008A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107564020A (en) * 2017-08-31 2018-01-09 北京奇艺世纪科技有限公司 A kind of image-region determines method and device
WO2020199563A1 (en) * 2019-04-01 2020-10-08 四川深瑞视科技有限公司 Method, device, and system for detecting depth information

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MOREL J M ET AL.: "ASIFT: A new framework for fully affine invariant image comparison", SIAM Journal on Imaging Sciences *
刘晓利 et al.: "Depth image registration combined with texture information", Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报) *
刘晓利: "Several key technologies in multi-view depth image modeling", China Doctoral Dissertations Full-text Database, Information Science and Technology *
陶青松: "Research on image matching technology based on ASIFT features", China Master's Theses Full-text Database, Information Science and Technology *


Similar Documents

Publication Publication Date Title
Fathi et al. Automated sparse 3D point cloud generation of infrastructure using its distinctive visual features
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN107767456A (en) A kind of object dimensional method for reconstructing based on RGB D cameras
US9761008B2 (en) Methods, systems, and computer readable media for visual odometry using rigid structures identified by antipodal transform
Remondino 3-D reconstruction of static human body shape from image sequence
CN104021547A (en) Three dimensional matching method for lung CT
Wang et al. Single view metrology from scene constraints
JP6174104B2 (en) Method, apparatus and system for generating indoor 2D plan view
CN102982551B (en) Method for solving intrinsic parameters of parabolic catadioptric camera linearly by utilizing three unparallel straight lines in space
CN109373912A (en) A kind of non-contact six-freedom displacement measurement method based on binocular vision
CN108362205A (en) Space ranging method based on fringe projection
CN116129037A (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
Kannala et al. Measuring and modelling sewer pipes from video
He et al. Three-point-based solution for automated motion parameter estimation of a multi-camera indoor mapping system with planar motion constraint
CN104157008A (en) Depth image matching method based on ASIFT (Affine Scale-invariant Feature Transform)
KR101673144B1 (en) Stereoscopic image registration method based on a partial linear method
CN103489165A (en) Decimal lookup table generation method for video stitching
Hinz et al. An image engineering system for the inspection of transparent construction materials
CN109741389A (en) One kind being based on the matched sectional perspective matching process of region base
Remondino 3D reconstruction of static human body with a digital camera
CN103810697A (en) Calibration of parabolic refraction and reflection vidicon internal parameters by utilizing four unparallel straight lines in space
Paudel et al. Localization of 2D cameras in a known environment using direct 2D-3D registration
CN109308706B (en) Method for obtaining three-dimensional curved surface area through image processing
Aliakbarpour et al. Geometric exploration of virtual planes in a fusion-based 3D data registration framework
Brunken et al. Incorporating Plane-Sweep in Convolutional Neural Network Stereo Imaging for Road Surface Reconstruction.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20141119