CN109961506A - Local scene three-dimensional reconstruction method fusing an improved Census map - Google Patents

Local scene three-dimensional reconstruction method fusing an improved Census map

Info

Publication number
CN109961506A
Authority
CN
China
Prior art keywords
census
key
map
frame
pose
Prior art date
Legal status
Granted
Application number
CN201910190191.5A
Other languages
Chinese (zh)
Other versions
CN109961506B (en)
Inventor
王慧青
杨哲
焦越
吴煜豪
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201910190191.5A priority Critical patent/CN109961506B/en
Publication of CN109961506A publication Critical patent/CN109961506A/en
Application granted granted Critical
Publication of CN109961506B publication Critical patent/CN109961506B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention discloses a local scene three-dimensional reconstruction method fusing an improved Census map, comprising the following steps: acquiring a color image and a depth map of the environment; graying the color image and computing an improved Census map containing key points and their neighborhood pixel blocks; estimating the pose of the current frame based on the improved Census map and the gray values of these key pixel blocks, and judging whether the current frame is a key frame; applying joint bilateral filtering and voxel grid filtering, respectively, to the depth map of the latest screened key frame, so as to obtain a denoised and smoothed depth map and a down-sampled point cloud depth map; optimizing the matched pose between the voxel-grid-filtered depth maps of the current key frame and the previous key frame using an ICP algorithm with a scale factor s; and fusing the jointly bilateral-filtered depth map of the current frame with the local map, realizing the growth of the point cloud map and finally reconstructing the local three-dimensional scene. The algorithm is concise, efficient and robust.

Description

Local scene three-dimensional reconstruction method fusing an improved Census map
Field of the invention
The invention belongs to the technical field of computer vision, and particularly relates to a local scene three-dimensional reconstruction method fusing an improved Census map.
Background
Three-dimensional reconstruction has always been a hot topic in the fields of computer graphics and computer vision, and is widely applied in virtual reality, artificial intelligence, industrial inspection, cultural relic protection and other fields. Specifically, three-dimensional reconstruction, particularly vision-based three-dimensional reconstruction, refers to acquiring images of a scene with a camera, analyzing and processing those images, deriving the three-dimensional information of objects in the real environment by combining computer vision knowledge, and finally realizing realistic rendering in a computer.
In traditional three-dimensional reconstruction, the three-dimensional coordinates of points on objects in the environment are estimated from a monocular or binocular camera using feature-point registration and triangulation; the algorithms are difficult to implement, consume a large amount of hardware resources, have poor robustness, and the estimated depth data often contain large errors. With the advent of RGB-D cameras and the continuous development of computer graphics hardware, ToF ranging technology can measure the depth of environmental objects in real time; even so, it remains difficult to quickly recover the camera pose from point cloud data of large volume, and fusing and rendering the point clouds in real time still poses challenges in both real-time performance and refinement.
In the prior art, color-image features are generally matched between frames, the pose transformation is solved, point cloud registration is performed with the ICP (Iterative Closest Point) algorithm, and the three-dimensional scene is reconstructed; feature-point computation is relatively time-consuming, large-scale point cloud registration brings huge memory overhead to the computer, reconstruction is slow, and reconstruction may even fail. Compared with the feature-point method, the direct method obtains robust and accurate pose estimates when the camera motion is small and is less affected by motion blur and loss of texture, but it is generally based on the photometric-invariance assumption and is strongly affected by illumination. The traditional Census transform overcomes this defect to a certain extent, but the effect is still not ideal; meanwhile, to improve the speed of point cloud registration, simplifying the point cloud before registration is very important. Designing a three-dimensional reconstruction algorithm that is concise, efficient and robust therefore becomes necessary.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a local scene three-dimensional reconstruction method fusing an improved Census map, which comprises the following steps: acquiring a color image and a depth map of the environment; graying the color image and computing an improved Census map containing key points and their neighborhood pixel blocks; estimating the pose of the current frame based on the improved Census map and the gray values of the key pixel blocks, and judging whether the current frame is a key frame; applying joint bilateral filtering and voxel grid filtering, respectively, to the depth map of the latest screened key frame, so as to obtain a denoised and smoothed depth map and a down-sampled point cloud depth map; optimizing the matched pose between the voxel-grid-filtered depth maps of the current key frame and the previous key frame using an ICP algorithm with a scale factor s; and fusing the jointly bilateral-filtered depth map of the current frame with the local map, so as to realize the growth of the point cloud map and finally reconstruct the local three-dimensional scene.
In order to achieve this purpose, the invention adopts the following technical scheme: a local scene three-dimensional reconstruction method fusing an improved Census map, comprising the following steps:
s1, acquiring a color image and a depth image of the environment, graying the color image, and calculating an improved Census image containing key points and pixel blocks in the neighborhood of the key points;
s2, pre-judging the pose of the current frame based on the improved Census picture and the gray value of the key pixel block obtained in the step S1, judging whether the current frame is a key frame, and if so, continuing the step; if not, returning to the step S1, and repeating the steps S1-S2;
s3, respectively carrying out joint bilateral filtering and voxel grid filtering on the depth map of the latest key frame screened in the step S2;
s4, optimizing the matched pose between the depth maps of the current key frame and the previous key frame obtained by voxel grid filtering in step S3, using an ICP (Iterative Closest Point) algorithm with an added scale factor s;
and S5, fusing the depth map of the current frame after the joint bilateral filtering with the local map, and repeating S1 to S4 until the three-dimensional reconstruction of the local scene is completed.
As a refinement of the present invention, the step S1 further includes:
s11, acquiring a color image and a depth image of the environment through the RGB-D camera, and reading color image data and depth image data;
s12, graying the color image data read in step S11 and calculating a key point and a new Census descriptor and binary code around the key point based on the following modified Census transformation algorithm:
for each pixel X around the key point, its pixel value is compared with all pixel values in the local 8-neighborhood Ne(X), traversing from top to bottom in a zigzag manner, to obtain a description vector d(I(X_i), I(X_{i+1})),
S13, the 8-bit description vectors d(I(X_i), I(X_{i+1})) form the 8-channel Census transform C(X), and a local Census map is obtained.
As an improvement of the present invention, in step S2 the local Census maps and the gray-scale maps of the key pixel blocks of the previous frame and the current frame are combined, and pose estimation is performed by the direct method, with the g2o graph optimization algorithm used for iterative optimization; the direct method optimizes an objective function in which:
T is the pose transformation from the previous frame to the current frame; the error terms are the photometric error and the Hamming distance of the Census binary codes; ω_x is a relative weight; and Ω is the set of the first N effective gradient points obtained by non-maximum suppression and sorting of the pixels in the current frame that have significant gradient values and non-zero depth.
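As a sketch of one form such an objective could take (the symbols e_C for the Census Hamming-distance error, e_I for the photometric error, and their combination through the weight ω_x and squared terms are assumptions, not reproduced from the original formula):

    E(T) = Σ_{x∈Ω} [ ω_x · e_C(x, T)² + (1 − ω_x) · e_I(x, T)² ]

where the sum runs over the effective gradient points x in Ω and T is the pose transformation being optimized.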
As another improvement of the present invention, the judgment of whether the current frame is a key frame in step S2 is based on the following criteria: (1) the coincidence rate of the key points of the color map with those of the previous key frame is less than 70 percent; (2) the gray-scale or Census-map error between the two frames is greater than a threshold t_E, or the pose estimate is large; (3) the camera encounters sudden violent motion or a featureless area, trajectory tracking is lost, and the camera needs to be relocalized. If any one of the above criteria is satisfied, the result is yes.
As still another improvement of the present invention, the step S4 further includes:
s41, in step S3, obtaining the depth map of the previous key frame filtered by the voxel grid filter, and selecting one point in the depth map;
s42, calculating the matching point of the selected point in the step S41;
and S43, updating the pose of the current key frame and optimizing the matched pose by minimizing the distance from the matching points to the current frame plane.
As another improvement of the present invention, in step S42 a projection algorithm is used to calculate the matching points, and the true projected pixel coordinates (u_i, v_i) in the color map of the current frame satisfy the following expression:
wherein the projected quantity is the three-dimensional coordinate of a point in the key-frame depth map; T_rc is the pose transformation matrix from the key frame to the current frame; and K is the camera intrinsic matrix.
As a further improvement of the present invention, the objective function for minimizing the distance from the matching point to the current frame plane in step S43 is:
wherein the normal vector is that of the matching point in the current key frame; s_i is a constant; and ΔR has only three degrees of freedom;
after the objective function is linearized, it is differentiated with respect to the pose parameters ΔR and Δt, the derivative is set to 0, and the pose parameters are obtained by solving.
Compared with the prior art, the method abandons traditional computation based on multi-dimensional feature points (such as SIFT), improves the traditional computation of the binocular matching operator Census and integrates it into a direct method for computing the pose between two frames, which accelerates registration; in the fine registration it accounts for the influence of camera translation on the scaling of the point cloud of the actual object by adding a scale factor s to accelerate the convergence of the ICP algorithm, and it further improves the reconstruction speed through parallel computation.
Drawings
FIG. 1 is a flow chart of the steps of the local scene three-dimensional reconstruction method fusing an improved Census map according to the present invention;
FIG. 2 is a schematic diagram showing the calculation of a modified Census map in example 1 of the present invention;
fig. 3 is a schematic diagram of an RGBD camera pose estimation step in embodiment 2 of the present invention.
Detailed Description
The invention will be explained in more detail below with reference to the drawings and examples.
Example 1
A three-dimensional reconstruction method for a local scene by fusing an improved Census diagram is disclosed, as shown in fig. 1, and comprises the following steps:
s1, acquiring a color image and a depth image of an environment through an RGB-D camera, reading color image data and depth image data, graying the color image, and calculating key points and a new Census description image and binary code around the key points based on the following improved Census transformation algorithm, wherein the improved Census transformation algorithm specifically comprises the following steps:
for each pixel X around the key point, its pixel value is compared with all pixel values in the local 8-neighborhood Ne(X), traversing from top to bottom in a zigzag manner as shown in Fig. 2, to obtain a description vector d(I(X_i), I(X_{i+1})),
The 8-bit description vectors d(I(X_i), I(X_{i+1})) form the 8-channel Census transform C(X), from which a local Census map is obtained; in essence it captures the relations and differences of photometric information between the pixels inside each key pixel block. The traditional Census transform depends only on the relative order of the central pixel's gray value and the gray values of the surrounding pixels; its description is coarse, and different pixel blocks may be confused because they share the same Census transform map. Compared with the traditional Census transform, the improved Census transform does not depend on the gray value of the central pixel, which reduces the influence of gray-level noise at the central pixel and improves matching accuracy.
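A minimal sketch of one way such an 8-channel descriptor could be computed for the pixels of a key pixel block is given below; the zigzag ordering of the 8 neighbors, the comparison of consecutive neighbors following the d(I(X_i), I(X_{i+1})) notation, the wrap-around for the eighth bit, and the block radius are assumptions, since the exact traversal is defined only by Fig. 2.

    import numpy as np

    # Assumed zigzag (snake) ordering of the 8 neighbors (dy, dx); the actual
    # order used by the invention is the one shown in Fig. 2.
    ZIGZAG_8 = [(-1, -1), (-1, 0), (-1, 1),
                (0, 1), (0, -1),
                (1, -1), (1, 0), (1, 1)]

    def improved_census_8(gray, y, x):
        """8-bit improved Census code at (y, x): each bit compares consecutive
        neighbors I(X_i) and I(X_{i+1}) along the zigzag path (wrapping around),
        so the code does not depend on the gray value of the central pixel."""
        vals = [int(gray[y + dy, x + dx]) for dy, dx in ZIGZAG_8]
        code = 0
        for i in range(8):
            bit = 1 if vals[i] >= vals[(i + 1) % 8] else 0
            code = (code << 1) | bit
        return code

    def local_census_map(gray, keypoints, block_radius=2):
        """Local Census map: the 8-bit code of every pixel in a small block
        around each key point (the block radius is an assumed parameter)."""
        h, w = gray.shape
        cmap = {}
        for ky, kx in keypoints:
            for y in range(max(1, ky - block_radius), min(h - 1, ky + block_radius + 1)):
                for x in range(max(1, kx - block_radius), min(w - 1, kx + block_radius + 1)):
                    cmap[(y, x)] = improved_census_8(gray, y, x)
        return cmap

Codes of corresponding pixels in two frames can then be compared through the Hamming distance of the two 8-bit values, e.g. bin(a ^ b).count('1') in Python.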
S2, pre-judging the pose of the current frame based on the improved Census map and the gray values of the key pixel blocks obtained in step S1, and judging whether the current frame is a key frame according to the following criteria: (1) the coincidence rate of the key points of the color map with those of the previous key frame is less than 70 percent; (2) the gray-scale or Census-map error between the two frames is greater than a threshold t_E, or the pose estimate is large; (3) the camera encounters sudden violent motion or a featureless area, trajectory tracking is lost, and the camera needs to be relocalized. If any one of the above criteria is satisfied, the result is yes and the process continues; otherwise the result is no, the process returns to step S1, and steps S1-S2 are repeated.
And S3, respectively carrying out joint bilateral filtering and voxel grid filtering on the depth map of the latest key frame screened in the step S2.
And S4, optimizing the matched pose between the depth maps of the current key frame and the previous key frame obtained by voxel grid filtering in step S3, using an ICP (Iterative Closest Point) algorithm with an added scale factor s.
And S5, fusing the depth map of the current frame after the joint bilateral filtering with the local map, and repeating S1 to S4 until the three-dimensional reconstruction of the local scene is completed.
The method improves the traditional computation of the binocular matching operator Census, extracts a descriptor map that is invariant to illumination change within each key pixel block area, combines it with a weighted gray image, and estimates the pose transformation between two key frames by the direct method, which accelerates registration; in the fine registration it accounts for the influence of camera translation on the scaling of the point cloud of the actual object by adding a scale factor s to accelerate the convergence of the ICP algorithm, and parallel computation further improves the reconstruction speed, better balancing reconstruction quality and efficiency.
Example 2
The difference between this embodiment and embodiment 1 is that, in step S2, the local Census maps and the gray-scale maps of the key pixel blocks of the previous frame and the current frame are combined and pose estimation is performed by the direct method, with the g2o graph optimization algorithm used for iterative optimization; as shown in Fig. 3, the direct method optimizes an objective function in which:
T is the pose transformation from the previous frame to the current frame, the error terms are the photometric error and the Hamming distance of the Census binary codes, and ω_x is the relative weight computed by iteratively re-weighted least squares (IRLS), normalized between 0 and 1 in steps of 0.1; to reduce algorithm complexity, ω_x can also be taken directly as 0.3. The error of the Census transform is defined by the following calculation formula:
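A sketch of the form this Census error could take, assuming the Hamming distance of the two 8-bit codes is simply rescaled to the 0-255 range mentioned below (the symbol e_C and the 255/8 normalization are assumptions):

    e_C(x, T) = (255/8) · H( C(P_x), C(w(P_x, T)) )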
The error computes the difference between two pixel points (blocks), normalized to 0-255 after taking the Hamming distance H(·); here T is the pose transformation from the previous frame to the current frame, C(·) is the binary code produced by the Census transform, P_x is the pixel block consisting of the key point x and its surrounding pixels, and w(P_x, T) is a projective transformation function of the form:
w(P_x, T) = π(T(π⁻¹(P_x, Z(X))))
This function represents the projective transformation of a pixel point (block) from a reference frame of the local map to the current frame; π(·) is the camera projection function, mapping a spatial point from the camera coordinate system onto the image plane, and π⁻¹(·) is the camera back-projection function, mapping a pixel of known depth from the image plane back to the camera coordinate system.
Similarly, the photometric error can be calculated according to the following expression
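A sketch of what this photometric error might look like, assuming it is the gray-level difference of the same pixel block before and after the projective transformation w (the symbols e_I, I_k for the key-frame gray image and I_c for the current-frame gray image are assumptions):

    e_I(x, T) = I_k(P_x) − I_c( w(P_x, T) )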
In this embodiment, the non-maximum suppression and sorting are performed on the pixel points having an obvious gradient value and a depth of not 0 in the current frame, and the set of the first 200 effective gradient points forms Ω.
In this embodiment, the depth map is further processed in two ways, and the voxel-grid-filtered depth map ensures that no spurious high-frequency information appears in the down-sampled image; this is mainly performed in step S3 as follows:
S31, performing joint bilateral filtering of the depth map P_c of the current frame with the gray image of the current frame, repairing holes and smoothing/denoising to obtain the depth map P_c^1, where the standard deviation σ_s of the spatial-domain Gaussian and the standard deviation σ_r of the gray-level range are both taken as 4 and the neighborhood is 11 × 11;
S32, the voxel grid filter performs directed down-sampling without destroying the geometric structure of the point cloud; in order to improve the matching speed of step S4, the voxel filter is applied to the depth map P_c for down-sampling, noise reduction and smoothing, obtaining a down-sampled depth map with a down-sampling rate of 0.7.
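A minimal sketch of these two filtering steps, assuming OpenCV's contrib module for the joint bilateral filter and a simple NumPy voxel grid that averages the points falling into each cell (the voxel edge length is an assumed parameter; the embodiment states a down-sampling rate of 0.7 rather than a voxel size):

    import cv2
    import numpy as np

    def joint_bilateral_depth(depth, gray, d=11, sigma_color=4.0, sigma_space=4.0):
        """S31: denoise and smooth the depth map with a joint bilateral filter guided
        by the gray image (requires opencv-contrib-python); sigma values of 4 and an
        11x11 neighborhood follow the embodiment."""
        return cv2.ximgproc.jointBilateralFilter(
            gray.astype(np.float32), depth.astype(np.float32),
            d, sigma_color, sigma_space)

    def voxel_grid_downsample(points, voxel_size=0.02):
        """S32: down-sample an (N, 3) point cloud by averaging the points that fall
        into each voxel cell (voxel_size is an assumed parameter)."""
        keys = np.floor(points / voxel_size).astype(np.int64)
        _, inverse = np.unique(keys, axis=0, return_inverse=True)
        sums = np.zeros((inverse.max() + 1, 3))
        counts = np.zeros(inverse.max() + 1)
        np.add.at(sums, inverse, points)     # accumulate point coordinates per voxel
        np.add.at(counts, inverse, 1.0)      # count points per voxel
        return sums / counts[:, None]        # centroid of each occupied voxel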
This embodiment applies joint bilateral filtering and voxel grid filtering to the original depth map, realizing point cloud down-sampling while denoising, so that the efficiency of the scale-factor ICP algorithm for fusing point clouds is improved.
Example 3
This embodiment differs from embodiments 1 and 2 in that the two frames of point clouds are finely registered: principal component analysis (PCA) is used to compute the normal vectors of the point cloud, a scale factor s is added to the classical ICP algorithm, and step S4 mainly includes:
S41, denoting the original depth map of a key frame (the reference key frame) as P_r, obtaining its depth map filtered by the voxel grid filter, and selecting a point in this filtered depth map;
S42, calculating a depth map according to a projection algorithmThe matching point inSuppose the true projected pixel coordinates (u) in the current frame color mapi,vi) For (u, v) ∈ Ne ((u)i,vi) Then (u, v) approximately satisfies the following expression:
wherein For reference key frame depth mapThree-dimensional coordinate of a certain point in (1), TrcAnd K is an internal parameter of the camera. It is also easy to know that (u)i,vi) e.Ne ((u, v)), (u, v)) and the three-dimensional coordinate points with the nearest three-dimensional coordinate values, normal vectors and gray labels around the (u, v) can be used as the depth mapThe matching point in
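A sketch of the pinhole projection relation this step appears to rely on, with an assumed symbol q for the three-dimensional point in the reference key frame and R_rc, t_rc for the rotation and translation parts of T_rc:

    z · (u_i, v_i, 1)^T = K · (R_rc · q + t_rc),   with z the third component of R_rc · q + t_rc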
S43, adopting a classical ICP algorithm, minimizing the distance from the matching point to the plane as an objective function, and updating the pose of the current keyframe, wherein the objective function for minimizing the distance from the matching point to the plane is in the following specific form:
The objective function E measures the distance, with the scale factor s applied, from the matched points to the plane of the current frame, where the normal vector of the matching point in the current key frame is computed from the three-dimensional points by principal component analysis (PCA).
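A sketch of the form such a scale-augmented point-to-plane objective could take, with assumed symbols p_i for the reference point expressed in the current frame, q_i for its matching point, n_i for the matching point's normal and s_i for the per-point scale factor (the placement of s_i is an assumption):

    E = Σ_i [ n_i · ( s_i · (ΔR · p_i + Δt) − q_i ) ]²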
Suppose the point cloud data is P = (p_1, p_2, ..., p_n). For a point p_i, take h neighboring points in its neighborhood; the covariance matrix is
C = (1/h) · Σ_{j=1..h} (p_j − p̄)(p_j − p̄)^T
where p̄ represents the center of gravity of this point set, and the eigenvalues and eigenvectors of the matrix C satisfy:
C·V_j = λ_j·V_j   (j = 0, 1, 2)
where λ_j represents the j-th eigenvalue and V_j the j-th eigenvector; the eigenvector corresponding to the smallest eigenvalue is the normal vector, at the point p_i, of the surface fitted by least squares.
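A minimal sketch of this PCA normal estimation for a single point, assuming its h nearest neighbors have already been gathered into an (h, 3) array:

    import numpy as np

    def pca_normal(neighbors):
        """Estimate the surface normal at a point from its h nearest neighbors
        (an (h, 3) array) as the eigenvector of the neighborhood covariance
        matrix C associated with the smallest eigenvalue."""
        centroid = neighbors.mean(axis=0)        # gravity center of the point set
        diffs = neighbors - centroid
        cov = diffs.T @ diffs / len(neighbors)   # covariance matrix C
        eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
        return eigvecs[:, 0]                     # eigenvector of the smallest eigenvalue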
The scale factor s_i of each point is calculated from similar triangles. At this point ΔR has only three degrees of freedom and is represented by the rotation vector Δr = (Δr_x, Δr_y, Δr_z); the translation vector is written as Δt = (Δt_x, Δt_y, Δt_z), and the parameter vector is x = [Δr, Δt].
By linearizing ΔR, the objective function can be linearized to E_l.
Differentiating with respect to the pose parameters to be solved and setting the derivative to 0 yields Ax + b = 0, where A and b are accumulated over the matching points.
For the full set of matching points, the least squares problem is solved for the pose parameters ΔR and Δt in parallel, so that the objective function E_l attains its minimum; the pose parameters ΔR and Δt are thus obtained and the matched pose is optimized.
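A minimal sketch of one such linearized solve, under the assumptions already noted (points p_i expressed in the current frame under the current pose estimate, matching points q_i with normals n_i, per-point scales s_i, and the small-angle approximation ΔR·p ≈ p + Δr × p):

    import numpy as np

    def solve_increment(p, q, n, s):
        """One linearized point-to-plane step with per-point scale factors.
        p, q, n: (N, 3) source points, matched target points and target normals;
        s: (N,) per-point scale factors. Returns (delta_r, delta_t)."""
        A = np.zeros((6, 6))
        b = np.zeros(6)
        for p_i, q_i, n_i, s_i in zip(p, q, n, s):
            r_i = n_i @ (s_i * p_i - q_i)                    # point-to-plane residual
            # Jacobian of n_i · (s_i·(p_i + Δr × p_i) + Δt) w.r.t. x = [Δr, Δt]
            J = np.hstack([s_i * np.cross(p_i, n_i), n_i])
            A += np.outer(J, J)
            b += J * r_i
        x = np.linalg.solve(A, -b)                           # normal equations A·x = −b
        return x[:3], x[3:]

The increment (Δr, Δt) is then composed with the current pose estimate and the procedure is iterated until convergence.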
Finally, the jointly bilateral-filtered depth map P_c^1 of the current frame is fused with the local map, and S1 to S4 are repeated until the three-dimensional reconstruction of the local scene is completed. The method accounts for the influence of camera translation on the scaling of the point cloud of the actual object and adds the scale factor s to accelerate the convergence of the ICP algorithm, quickly obtaining the camera pose, fusing the point clouds and rendering them in real time; it offers real-time performance, refinement and higher efficiency.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited by the foregoing examples, which are provided to illustrate the principles of the invention, and that various changes and modifications may be made without departing from the spirit and scope of the invention, which is also intended to be covered by the appended claims. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (7)

1. A local scene three-dimensional reconstruction method fusing an improved Census map, characterized by comprising the following steps:
s1, acquiring a color image and a depth image of the environment, graying the color image, and calculating an improved Census image containing key points and pixel blocks in the neighborhood of the key points;
s2, pre-judging the pose of the current frame based on the improved Census picture and the gray value of the key pixel block obtained in the step S1, judging whether the current frame is a key frame, and if so, continuing the step; if not, returning to the step S1, and repeating the steps S1-S2;
s3, respectively carrying out joint bilateral filtering and voxel grid filtering on the depth map of the latest key frame screened in the step S2;
s4, optimizing the matched pose between the depth maps of the current key frame and the previous key frame obtained by voxel grid filtering in step S3, using an ICP (Iterative Closest Point) algorithm with an added scale factor s;
and S5, fusing the depth map of the current frame after the joint bilateral filtering with the local map, and repeating S1 to S4 until the three-dimensional reconstruction of the local scene is completed.
2. The method according to claim 1, wherein said step S1 further comprises:
s11, acquiring a color image and a depth image of the environment through the RGB-D camera, and reading color image data and depth image data;
s12, graying the color image data read in step S11 and calculating a key point and a new Census descriptor and binary code around the key point based on the following modified Census transformation algorithm:
for each pixel X around the key point, its pixel value is compared with all pixel values in the local 8-neighborhood Ne(X), traversing from top to bottom in a zigzag manner, to obtain a description vector d(I(X_i), I(X_{i+1})),
S13, the 8-bit description vectors d(I(X_i), I(X_{i+1})) form the 8-channel Census transform C(X), and a local Census map is obtained.
3. The method as claimed in claim 2, wherein in step S2 the local Census maps and the gray-scale maps of the key pixel blocks of the previous frame and the current frame are combined, and pose estimation is performed by the direct method, with the g2o graph optimization algorithm used for iterative optimization; the direct method optimizes an objective function in which:
T is the pose transformation from the previous frame to the current frame; the error terms are the photometric error and the Hamming distance of the Census binary codes; ω_x is a relative weight; and Ω is the set of the first N effective gradient points obtained by non-maximum suppression and sorting of the pixels in the current frame that have significant gradient values and non-zero depth.
4. The method according to claim 1, 2 or 3, wherein the judgment of whether the current frame is a key frame in step S2 is based on the following criteria: (1) the coincidence rate of the key points of the color map with those of the previous key frame is less than 70 percent; (2) the gray-scale or Census-map error between the two frames is greater than a threshold t_E, or the pose estimate is large; (3) the camera encounters sudden violent motion or a featureless area, trajectory tracking is lost, and the camera needs to be relocalized. If any one of the above criteria is satisfied, the result is yes.
5. The method according to claim 4, wherein said step S4 further comprises:
s41, in step S3, obtaining the depth map of the previous key frame filtered by the voxel grid filter, and selecting one point in the depth map;
s42, calculating the matching point of the selected point in the step S41;
and S43, updating the pose of the current key frame and optimizing the matched pose by minimizing the distance from the matching points to the current frame plane.
6. The method as claimed in claim 5, wherein in step S42 the matching points are calculated by a projection algorithm, and the true projected pixel coordinates (u_i, v_i) in the color map of the current frame satisfy the following expression:
wherein the projected quantity is the three-dimensional coordinate of a point in the key-frame depth map; T_rc is the pose transformation matrix from the key frame to the current frame; and K is the camera intrinsic matrix.
7. The method for three-dimensional reconstruction of local scene with fusion of improved Census map as claimed in claim 5 or 6, wherein said objective function of minimizing the distance from the matching point to the current frame plane in step S43 is:
wherein the normal vector is that of the matching point in the current key frame; s_i is a constant; and ΔR has only three degrees of freedom;
after the objective function is linearized, it is differentiated with respect to the pose parameters ΔR and Δt, the derivative is set to 0, and the pose parameters are obtained by solving.
CN201910190191.5A 2019-03-13 2019-03-13 Local scene three-dimensional reconstruction method for fusion improved Census diagram Active CN109961506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910190191.5A CN109961506B (en) 2019-03-13 2019-03-13 Local scene three-dimensional reconstruction method for fusion improved Census diagram

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910190191.5A CN109961506B (en) 2019-03-13 2019-03-13 Local scene three-dimensional reconstruction method for fusion improved Census diagram

Publications (2)

Publication Number Publication Date
CN109961506A true CN109961506A (en) 2019-07-02
CN109961506B CN109961506B (en) 2023-05-02

Family

ID=67024412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910190191.5A Active CN109961506B (en) 2019-03-13 2019-03-13 Local scene three-dimensional reconstruction method for fusion improved Census diagram

Country Status (1)

Country Link
CN (1) CN109961506B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110704562A (en) * 2019-09-27 2020-01-17 Oppo广东移动通信有限公司 Map fusion method and device, equipment and storage medium
CN110793441A (en) * 2019-11-05 2020-02-14 北京华捷艾米科技有限公司 High-precision object geometric dimension measuring method and device
CN111105460A (en) * 2019-12-26 2020-05-05 电子科技大学 RGB-D camera pose estimation method for indoor scene three-dimensional reconstruction
CN111145331A (en) * 2020-01-09 2020-05-12 深圳市数字城市工程研究中心 Cloud rendering image fusion method and system for massive urban space three-dimensional data
CN111260713A (en) * 2020-02-13 2020-06-09 青岛联合创智科技有限公司 Depth calculation method based on image
CN111476907A (en) * 2020-04-14 2020-07-31 青岛小鸟看看科技有限公司 Positioning and three-dimensional scene reconstruction device and method based on virtual reality technology
CN111652933A (en) * 2020-05-06 2020-09-11 Oppo广东移动通信有限公司 Monocular camera-based repositioning method and device, storage medium and electronic equipment
CN111798505A (en) * 2020-05-27 2020-10-20 大连理工大学 Monocular vision-based dense point cloud reconstruction method and system for triangularized measurement depth
CN111899345A (en) * 2020-08-03 2020-11-06 成都圭目机器人有限公司 Three-dimensional reconstruction method based on 2D visual image
CN112258658A (en) * 2020-10-21 2021-01-22 河北工业大学 Augmented reality visualization method based on depth camera and application
CN112422848A (en) * 2020-11-17 2021-02-26 深圳市歌华智能科技有限公司 Video splicing method based on depth map and color map
CN112446836A (en) * 2019-09-05 2021-03-05 浙江舜宇智能光学技术有限公司 Data processing method and system for TOF depth camera
CN112767481A (en) * 2021-01-21 2021-05-07 山东大学 High-precision positioning and mapping method based on visual edge features
CN113012212A (en) * 2021-04-02 2021-06-22 西北农林科技大学 Depth information fusion-based indoor scene three-dimensional point cloud reconstruction method and system
CN113920254A (en) * 2021-12-15 2022-01-11 深圳市其域创新科技有限公司 Monocular RGB (Red Green blue) -based indoor three-dimensional reconstruction method and system thereof
CN113702941B (en) * 2021-08-09 2023-10-13 哈尔滨工程大学 Point cloud speed measuring method based on improved ICP
CN113012212B (en) * 2021-04-02 2024-04-16 西北农林科技大学 Depth information fusion-based indoor scene three-dimensional point cloud reconstruction method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833270A (en) * 2017-09-28 2018-03-23 浙江大学 Real-time object dimensional method for reconstructing based on depth camera
CN108010081A (en) * 2017-12-01 2018-05-08 中山大学 A kind of RGB-D visual odometry methods based on Census conversion and Local map optimization

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833270A (en) * 2017-09-28 2018-03-23 浙江大学 Real-time object dimensional method for reconstructing based on depth camera
CN108010081A (en) * 2017-12-01 2018-05-08 中山大学 A kind of RGB-D visual odometry methods based on Census conversion and Local map optimization

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446836A (en) * 2019-09-05 2021-03-05 浙江舜宇智能光学技术有限公司 Data processing method and system for TOF depth camera
CN112446836B (en) * 2019-09-05 2023-11-03 浙江舜宇智能光学技术有限公司 Data processing method and system for TOF depth camera
CN110704562A (en) * 2019-09-27 2020-01-17 Oppo广东移动通信有限公司 Map fusion method and device, equipment and storage medium
CN110704562B (en) * 2019-09-27 2022-07-19 Oppo广东移动通信有限公司 Map fusion method and device, equipment and storage medium
CN110793441A (en) * 2019-11-05 2020-02-14 北京华捷艾米科技有限公司 High-precision object geometric dimension measuring method and device
CN110793441B (en) * 2019-11-05 2021-07-27 北京华捷艾米科技有限公司 High-precision object geometric dimension measuring method and device
CN111105460A (en) * 2019-12-26 2020-05-05 电子科技大学 RGB-D camera pose estimation method for indoor scene three-dimensional reconstruction
CN111145331A (en) * 2020-01-09 2020-05-12 深圳市数字城市工程研究中心 Cloud rendering image fusion method and system for massive urban space three-dimensional data
CN111145331B (en) * 2020-01-09 2023-04-07 深圳市数字城市工程研究中心 Cloud rendering image fusion method and system for massive urban space three-dimensional data
CN111260713A (en) * 2020-02-13 2020-06-09 青岛联合创智科技有限公司 Depth calculation method based on image
CN111476907A (en) * 2020-04-14 2020-07-31 青岛小鸟看看科技有限公司 Positioning and three-dimensional scene reconstruction device and method based on virtual reality technology
CN111652933B (en) * 2020-05-06 2023-08-04 Oppo广东移动通信有限公司 Repositioning method and device based on monocular camera, storage medium and electronic equipment
CN111652933A (en) * 2020-05-06 2020-09-11 Oppo广东移动通信有限公司 Monocular camera-based repositioning method and device, storage medium and electronic equipment
CN111798505A (en) * 2020-05-27 2020-10-20 大连理工大学 Monocular vision-based dense point cloud reconstruction method and system for triangularized measurement depth
CN111899345A (en) * 2020-08-03 2020-11-06 成都圭目机器人有限公司 Three-dimensional reconstruction method based on 2D visual image
CN111899345B (en) * 2020-08-03 2023-09-01 成都圭目机器人有限公司 Three-dimensional reconstruction method based on 2D visual image
CN112258658A (en) * 2020-10-21 2021-01-22 河北工业大学 Augmented reality visualization method based on depth camera and application
CN112422848A (en) * 2020-11-17 2021-02-26 深圳市歌华智能科技有限公司 Video splicing method based on depth map and color map
CN112422848B (en) * 2020-11-17 2024-03-29 深圳市歌华智能科技有限公司 Video stitching method based on depth map and color map
CN112767481A (en) * 2021-01-21 2021-05-07 山东大学 High-precision positioning and mapping method based on visual edge features
CN113012212A (en) * 2021-04-02 2021-06-22 西北农林科技大学 Depth information fusion-based indoor scene three-dimensional point cloud reconstruction method and system
CN113012212B (en) * 2021-04-02 2024-04-16 西北农林科技大学 Depth information fusion-based indoor scene three-dimensional point cloud reconstruction method and system
CN113702941B (en) * 2021-08-09 2023-10-13 哈尔滨工程大学 Point cloud speed measuring method based on improved ICP
CN113920254A (en) * 2021-12-15 2022-01-11 深圳市其域创新科技有限公司 Monocular RGB (Red Green blue) -based indoor three-dimensional reconstruction method and system thereof

Also Published As

Publication number Publication date
CN109961506B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN109961506A (en) A kind of fusion improves the local scene three-dimensional reconstruction method of Census figure
Menze et al. Object scene flow
CN106910242B (en) Method and system for carrying out indoor complete scene three-dimensional reconstruction based on depth camera
CN108564616B (en) Fast robust RGB-D indoor three-dimensional scene reconstruction method
WO2022088982A1 (en) Three-dimensional scene constructing method, apparatus and system, and storage medium
CN106204572B (en) Road target depth estimation method based on scene depth mapping
US9426444B2 (en) Depth measurement quality enhancement
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
CN108225319B (en) Monocular vision rapid relative pose estimation system and method based on target characteristics
CN109472820B (en) Monocular RGB-D camera real-time face reconstruction method and device
JP2016508652A (en) Determining object occlusion in image sequences
Lo et al. Joint trilateral filtering for depth map super-resolution
Hua et al. Extended guided filtering for depth map upsampling
Gedik et al. 3-D rigid body tracking using vision and depth sensors
WO2018133119A1 (en) Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera
CN113744315B (en) Semi-direct vision odometer based on binocular vision
Lo et al. Depth map super-resolution via Markov random fields without texture-copying artifacts
O'Byrne et al. A stereo‐matching technique for recovering 3D information from underwater inspection imagery
Chen et al. A color-guided, region-adaptive and depth-selective unified framework for Kinect depth recovery
CN110717934A (en) Anti-occlusion target tracking method based on STRCF
CN114331879A (en) Visible light and infrared image registration method for equalized second-order gradient histogram descriptor
CN114782628A (en) Indoor real-time three-dimensional reconstruction method based on depth camera
Pan et al. Depth map completion by jointly exploiting blurry color images and sparse depth maps
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
CN112927251A (en) Morphology-based scene dense depth map acquisition method, system and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant