CN103607584B - Real-time registration method for depth maps shot by kinect and video shot by color camera


Info

Publication number: CN103607584B
Application number: CN201310609865.3A
Authority: CN (China)
Other versions: CN103607584A (Chinese, zh)
Inventors: 童若锋, 琚旋, 成可立
Assignee: Zhejiang University (ZJU)
Priority/filing date: 2013-11-27
Legal status: Active (application filed by Zhejiang University ZJU; granted and published as CN103607584B)

Abstract

The invention provides a stable and fast method for registering, in real time, depth maps captured by a kinect with video captured by a color camera. The method removes the step of estimating the intrinsic parameters of the depth camera, which reduces the number of parameters to be solved while enhancing the stability of the algorithm. A linear optimization framework is used for the solution, so the globally optimal solution is obtained in a single step and the computational efficiency of the algorithm is greatly improved. Although the step of estimating the depth camera intrinsics found in traditional algorithms is removed, the efficiency of mapping depth information onto the video is not affected, and the depth information captured by the kinect can still be mapped onto the video in real time. Moreover, because the defined hybrid parameter has good mathematical properties, the depth camera intrinsics can still be recovered by matrix QR decomposition for use by other algorithms.

Description

Real-time registration method for depth maps shot by a kinect and video shot by a color camera
Technical field
The present invention relates to a real-time registration method for depth maps shot by a kinect and video shot by a color camera.
Background technology
With advances in data storage, transmission technology, and lens manufacturing, high-definition cameras and HD webcams have entered everyday use. With these devices, people can easily obtain high-quality video material and communicate with others over HD video. In recent years, however, with the development of augmented reality and stereoscopic display technology, traditional two-dimensional HD video can no longer meet people's needs. Users want to edit video material more simply and with higher quality, display it stereoscopically, or interact with high realism with computer-synthesized virtual objects during live video communication, and all of this depends on depth generation for video.
Depth generation for video is a classic problem in computer vision. To recover the depth information lost during video capture, countless researchers have devoted themselves to this field and proposed a series of classic algorithms. Up to now, however, no algorithm can guarantee both correct depth generation for complex scenes and real-time performance, and current evidence suggests that video depth generation algorithms satisfying both "real time" and "correctness" are still a long way off.
People therefore tend to use special equipment that captures video and depth simultaneously. Although this satisfies both "real time" and "correctness" for the depth information, such special equipment is usually bulky and too expensive for ordinary users.
In recent years, Microsoft introduced a lightweight depth capture device, the kinect, which is small, inexpensive, and able to capture scene depth in real time; this device has rekindled hope of generating video depth. Unfortunately, although the kinect itself provides a color camera for capturing video, the resolution of that camera is very low and cannot meet the demand for HD video. When an additional HD camera and a kinect are used to capture HD video and depth information separately, the difference in viewpoint between the two devices means that the captured video and depth data are not spatially aligned.
In recent years, many researchers have proposed algorithms to overcome this spatial difference, but they are all easily affected by noise in the depth data and suffer from low computational efficiency.
Summary of the invention
The technical problem to be solved by the present invention is to provide a stable and fast method for registering, in real time, depth maps captured by a kinect with video captured by a high-definition color camera.
To this end, the present invention adopts the following technical solution, which comprises the following steps:
(1) Fix the high-definition color camera and the kinect depth camera respectively. The mounting positions are arbitrary; the real-time registration method of the present invention can handle accurate registration under any large viewing-angle difference between the kinect and the color camera. A synchronized-capture signal is also set up.
(2) Randomly place a calibration checkerboard in front of the color camera and the kinect depth camera, capture synchronously, and obtain a color image and a depth image of the checkerboard at the same moment. Repeat this operation until a group of synchronized captures at different angles and different distances is obtained.
(3) In each depth map taken by the kinect, manually mark one or several arbitrarily shaped regions inside the area corresponding to the checkerboard. The user only needs to mark one or more arbitrarily shaped regions inside the flat area of the depth map, rather than precisely marking the corners of the board as in conventional methods, which greatly reduces the influence of depth camera noise on the algorithm; the total number of pixels in the marked regions of each depth map must be no less than 10. For the depth map of the i-th kinect capture, compute, for the j-th pixel in the manually marked regions, the product of the homogeneous representation of its two-dimensional coordinates and the depth of that point, denoted x_ij = d_ij [u_ij, v_ij, 1]^T; the set of all such coordinates is denoted P_i.
(4) For the coordinate set P_i obtained in the i-th capture, substitute all of its elements into the space plane equation a X + b Y + c Z + e = 0 and perform plane fitting, and compute the normal vector of the fitted plane. For every element x_ij of P_i, compute its deviation from the fitted plane and the average deviation, filter out the points whose deviation exceeds the filtering threshold, and take the remaining points as the new coordinate set P_i.
(5) Use Zhang Zhengyou's classical color camera calibration algorithm to compute the intrinsic parameters K_c of the high-definition color camera and the relative pose between the color camera and the checkerboard in each capture; the relative pose of the i-th capture is represented by a rotation matrix R_i and a translation vector t_i.
(6) Take a 3*3 hybrid parameter H and a 3-dimensional vector T as the unknowns. Using the noise-filtered point coordinates obtained in step (4) and the rotation and translation parameters of the i-th capture obtained in step (5) as known quantities, set up the plane constraint linear equation: r_i3^T (H x_ij + T) = r_i3^T t_i, where r_i3 is the third column of R_i (the checkerboard normal in camera coordinates).
(7) According to the depth corresponding to the point coordinates x_ij, apply a penalty coefficient w_ij to the equation in (6); the smaller the depth value, the larger the penalty coefficient, so that the constraint equations of points with smaller depth carry larger weight in the equation system.
(8) Set up the constrained equation system AX = b and solve it with a linear optimization framework to obtain: 1) the translation parameter T between the color camera and the kinect; 2) the hybrid parameter H formed from the depth camera intrinsics and the relative rotation between the color camera and the kinect.
(9) Use the two parameters H and T obtained by solving in step (8) to map, in real time, the depth signal captured by the kinect onto the video captured by the high-definition camera. Specifically: continuously capture depth maps with the kinect; for each pixel of every image, compute the product x = d [u, v, 1]^T of the homogeneous representation of its two-dimensional coordinates and its depth d; then compute the three-dimensional coordinates of this point in the camera coordinate system: X_c = H x + T; project this point onto the color image to obtain its projected position (u', v') and the corresponding depth d', where z' [u', v', 1]^T = K_c X_c and d' = z'. This completes the real-time mapping of the kinect depth signal onto the video captured by the high-definition camera. Since different pixels of the depth map may be mapped to the same coordinates on the color image, in this case the minimum depth is kept as the depth of that point on the color image.
On the basis of the above technical solution, the present invention can further adopt the following schemes:
In step (7), the weight of said plane constraint linear equation in the whole equation system is decided according to the depth corresponding to the point coordinates x_ij; the penalty coefficient w_ij is a decreasing function of the depth (depth expressed in meters), where depth refers to the depth corresponding to this pixel.
In step (8), the concrete way of setting up the constraint equation AX = b is: combine the 3*3 hybrid parameter H and the 3-dimensional vector T into a 12-dimensional vector X, set up the constrained equation system AX = b in the least-squares sense, and solve it with a linear optimization framework. Writing the checkerboard normal of the i-th capture as n_i = r_i3 = (n_i1, n_i2, n_i3)^T, the point coordinates as x_ij = (x_ij1, x_ij2, x_ij3)^T and the translation as t_i, and vectorizing H row by row so that X = (H_11, H_12, H_13, H_21, ..., H_33, T_1, T_2, T_3)^T, the row of the matrix A corresponding to the point x_ij is configured as:
w_ij (n_i1 x_ij1, n_i1 x_ij2, n_i1 x_ij3, n_i2 x_ij1, n_i2 x_ij2, n_i2 x_ij3, n_i3 x_ij1, n_i3 x_ij2, n_i3 x_ij3, n_i1, n_i2, n_i3)
Correspondingly, the entry of the column vector b is:
w_ij n_i^T t_i
In step (9), the method of mapping the depth signal captured by the kinect in real time onto the video captured by the high-definition camera using the two parameters H and T is: continuously capture depth maps with the kinect; for each pixel of every image, compute the product x = d [u, v, 1]^T of the homogeneous representation of its two-dimensional coordinates and its depth d; then compute the three-dimensional coordinates of this point in the camera coordinate system: X_c = H x + T; project this point onto the color image to obtain its projected position (u', v') and the corresponding depth d' on the color image, where z' [u', v', 1]^T = K_c X_c and d' = z'. This completes the real-time mapping of the kinect depth signal onto the video captured by the high-definition camera. Since different pixels of the depth map may be mapped to the same coordinates on the color image, in this case the minimum depth is kept as the depth of that point on the color image.
Owing to the adoption of the technical solution of the present invention, the present invention has the following beneficial effects:
(1) Traditional algorithms need to estimate the depth camera intrinsics; because the depth signal captured by the kinect contains a great deal of noise, this step is affected by the noise and makes the whole algorithm unstable. The present invention removes the step of estimating the depth camera intrinsics, which reduces the number of parameters to be solved while enhancing the stability of the algorithm.
(2) To reduce the influence of errors in the estimated depth camera intrinsics, traditional algorithms adopt a nonlinear optimization framework and iteratively refine the depth camera intrinsics and other parameters; such nonlinear optimization is slow and often gets trapped in local optima. The present invention solves with a linear optimization framework and obtains the globally optimal solution in a single step, so the computational efficiency of the algorithm is greatly improved.
(3) Although the present invention removes the step of estimating the depth camera intrinsics found in traditional algorithms, the efficiency of mapping depth information onto the video is not affected, and the depth information captured by the kinect can still be mapped onto the video in real time.
(4) Although the present invention removes the step of estimating the depth camera intrinsics found in traditional algorithms, the defined hybrid parameter has good mathematical properties, so the depth camera intrinsics can still be recovered by matrix QR decomposition for use by other algorithms.
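As an illustration of beneficial effect (4), the sketch below shows how the depth camera intrinsics could be recovered from the hybrid parameter, assuming H = R K_d^{-1} with R a rotation matrix and K_d upper triangular with a positive diagonal, so that a QR factorization of H separates the two factors. The function and its numpy-based implementation are illustrative, not part of the patent.

```python
import numpy as np

def recover_depth_intrinsics(H):
    """Split the hybrid parameter H = R @ inv(K_d) into a rotation R and the
    depth camera intrinsic matrix K_d via QR decomposition (illustrative)."""
    Q, U = np.linalg.qr(H)              # Q orthogonal, U upper triangular
    S = np.diag(np.sign(np.diag(U)))    # sign fix so inv(K_d) has a positive diagonal
    R = Q @ S                           # recovered rotation
    Kd_inv = S @ U                      # recovered inv(K_d)
    K_d = np.linalg.inv(Kd_inv)
    K_d /= K_d[2, 2]                    # normalize the (3,3) entry to 1
    return R, K_d
```

For example, after the linear system has been solved one would call R, K_d = recover_depth_intrinsics(H).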
Brief description of the drawings
Fig. 1 is the overall flow chart of the method provided by the present invention.
Fig. 2 is an example of the effect of the present invention: the synchronously captured depth map (upper left) is mapped onto the color image (lower left). For ease of observation, the depth map is shown colorized. The mapping result is shown on the right; the color image and the depth map can clearly be seen to fit closely at the edges. A pixel is taken from the person's ear in the depth map, and the algorithm of the present invention maps this pixel exactly onto the ear of the person in the color image.
Embodiment
First, define the notation used in the following description:
K_c: the color camera intrinsic parameters;
R_i: the 3-dimensional rotation matrix of the checkerboard coordinate system relative to the camera coordinate system in the i-th sample;
r_i3: the third column vector of R_i, i.e. the checkerboard normal expressed in camera coordinates;
t_i: the translation of the checkerboard coordinate system relative to the camera coordinate system;
x_ij: in the depth map captured by the kinect in the i-th sample, the product of the homogeneous representation of the two-dimensional coordinates of the j-th point lying on the checkerboard plane and the depth of that point;
H: the hybrid parameter formed from the rotation R between the camera coordinate system and the kinect coordinate system and the kinect intrinsics K_d, H = R K_d^{-1};
T: the translation between the camera coordinate system and the kinect coordinate system.
The technical solution of the present invention is described further below in conjunction with the accompanying drawings.
Fig. 1 is the basic flow chart of the present invention: by setting up a system of plane constraint linear equations and solving for the hybrid parameter H and the translation parameter T, the depth maps captured in real time by the kinect are mapped onto the video captured by the high-definition camera. Each stage of the method is described in detail below:
(1) Fix the high-definition color camera and the kinect depth camera respectively, and set up the synchronized-capture signal:
The present invention requires the relative position of the camera and the kinect to remain fixed. Meanwhile, in order to map the depth map taken by the kinect onto the image taken by the camera at the same moment, a synchronized-capture signal needs to be set up between the two capture devices. Software synchronization is used: the host sends a 60 Hz square-wave signal, and capture on both devices is triggered synchronously on the rising edge of the clock.
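A minimal sketch of such software-triggered synchronized capture is shown below. The grab_color() and grab_depth() callables stand in for the actual device APIs, which the patent does not specify, so the whole routine is illustrative only.

```python
import time

def synchronized_capture(grab_color, grab_depth, rate_hz=60.0, n_frames=1):
    """Grab color/depth frame pairs on a shared 60 Hz software clock.

    grab_color and grab_depth are hypothetical device-grabbing callables;
    each tick of the software clock (the 'rising edge') triggers both captures.
    """
    period = 1.0 / rate_hz
    pairs = []
    next_tick = time.monotonic()
    for _ in range(n_frames):
        delay = next_tick - time.monotonic()
        if delay > 0:
            time.sleep(delay)                       # wait for the next rising edge
        pairs.append((grab_color(), grab_depth()))  # trigger both devices together
        next_tick += period
    return pairs
```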
(2) Synchronously capture the checkerboard:
Randomly place the calibration checkerboard in front of the cameras, capture synchronously, and obtain a color image and a depth image of the checkerboard at the same moment. Repeat this operation until a group of synchronized captures at different angles and different distances is obtained. In the experiments, a 13*9 checkerboard was used, 15 distances were chosen, and 3 to 5 angles were captured at each distance, yielding 48 samples.
(3) Manually mark the board:
Since the plane constraint is used when computing the parameters, the region where the checkerboard plane lies in the depth data collected by the kinect in step (2) needs to be marked manually. It should be understood that the whole flat area does not need to be marked completely; it suffices to mark a region inside the board with a closed curve of arbitrary shape. This arbitrariness of the marking effectively reduces the difficulty of the human-computer interaction. When marking, the total number of pixels in the marked regions of each depth map must be no less than 10. For the depth map of the i-th kinect capture, compute, for the j-th pixel in the manually marked regions, the product of the homogeneous representation of its two-dimensional coordinates and the depth of that point, denoted x_ij; the set of all such coordinates is denoted P_i.
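Assuming the manual marking is available as a boolean mask over the depth image, the coordinate set P_i could be built as in the sketch below; the mask representation and the millimeter depth unit are assumptions, not specified by the patent.

```python
import numpy as np

def marked_points(depth_map, mask, depth_scale=0.001):
    """Build P_i = { x_ij = d * [u, v, 1]^T } for every marked pixel.

    depth_map   : (H, W) array of raw kinect depth values
    mask        : (H, W) boolean array, True inside the manually marked regions
    depth_scale : factor converting raw depth to meters (assumed: 1 unit = 1 mm)
    """
    vs, us = np.nonzero(mask)                         # pixel coordinates of marked points
    d = depth_map[vs, us].astype(np.float64) * depth_scale
    valid = d > 0                                     # drop pixels with no depth reading
    us, vs, d = us[valid], vs[valid], d[valid]
    P = np.stack([d * us, d * vs, d], axis=0)         # each column is one x_ij, shape (3, N)
    return P
```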
(4) Filter noise points:
The depth maps captured by the kinect contain a large amount of noise. Although the method of the present invention does not require precisely marking the checkerboard corners in the image as conventional methods do, but only an arbitrarily shaped region inside the checkerboard area, the kinect noise can still affect the correctness of the mapping produced by the algorithm. Therefore, before the points in the marked regions are used to set up constraints, they need to be filtered for noise. Specifically, for the coordinate set P_i obtained in the i-th capture, substitute all of its elements into the space plane equation and perform plane fitting, and compute the normal vector of the fitted plane. For every element x_ij of P_i, compute its deviation from the fitted plane and the average deviation, filter out the points whose deviation exceeds the filtering threshold, and take the remaining points as the new coordinate set P_i.
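A sketch of this plane fitting and outlier filtering is given below. It uses a least-squares plane fit and removes points whose residual exceeds twice the mean residual; the exact filtering threshold is not reproduced in the text, so that factor is an assumption.

```python
import numpy as np

def filter_plane_outliers(P, thresh_factor=2.0):
    """Fit a plane to the columns of P and remove points far from it.

    P : (3, N) array whose columns are the x_ij = d * [u, v, 1]^T vectors.
        Since x_ij is a linear image of the true 3D point, the marked
        checkerboard points still lie on a plane in this space.
    Returns the inlier subset of P and the fitted unit normal.
    """
    centroid = P.mean(axis=1, keepdims=True)
    # The plane normal is the direction of least variance of the centered points.
    _, _, Vt = np.linalg.svd((P - centroid).T, full_matrices=False)
    normal = Vt[-1]
    residual = np.abs(normal @ (P - centroid))           # distance of each point to the plane
    keep = residual <= thresh_factor * residual.mean()   # assumed threshold rule
    return P[:, keep], normal
```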
(5) Calibrate the camera:
Use Zhang Zhengyou's classical camera calibration algorithm to compute the camera intrinsics K_c, and compute, for the checkerboard coordinate system in each sample recorded in step (2), the 3-dimensional rotation matrix R_i relative to the camera coordinate system; record its third column vector r_i3 as the representation of the checkerboard normal in camera coordinates for the i-th sample. At the same time, record the translation t_i of the checkerboard coordinate system in each sample relative to the camera coordinate system.
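For illustration, Zhang's method as implemented in OpenCV could produce K_c, R_i, r_i3 and t_i from the synchronized color images. The inner-corner count (a 13*9-square board has 12*8 inner corners), the square size, and the omission of lens distortion are assumptions made here for brevity.

```python
import cv2
import numpy as np

def calibrate_color_camera(color_images, pattern_size=(12, 8), square_size=0.03):
    """Zhang-style calibration of the HD color camera from checkerboard images.

    Returns K_c plus, per sample, the rotation R_i, the checkerboard normal r_i3
    (third column of R_i) and the translation t_i relative to the camera.
    """
    # 3D corner positions of the board in its own coordinate system (Z = 0 plane).
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objp *= square_size

    obj_points, img_points = [], []
    for img in color_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    h, w = color_images[0].shape[:2]
    _, K_c, _, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, (w, h), None, None)

    R_list = [cv2.Rodrigues(rvec)[0] for rvec in rvecs]   # R_i
    n_list = [R[:, 2] for R in R_list]                    # r_i3, checkerboard normal
    t_list = [tvec.reshape(3) for tvec in tvecs]          # t_i
    return K_c, R_list, n_list, t_list
```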
(6) Set up the plane constraint equations:
For each point x_ij lying on the checkerboard plane in the kinect depth map, the following equation can be set up:
r_i3^T (H x_ij + T) = r_i3^T t_i
where r_i3 is the representation of the checkerboard normal in camera coordinates in the i-th sample; t_i is the translation of the checkerboard coordinate system in the i-th sample relative to the camera coordinate system; x_ij is, in the depth map captured by the kinect in the i-th sample, the product of the homogeneous representation of the two-dimensional coordinates of the j-th point lying on the checkerboard plane and the depth of that point, the flat region having been determined in step (3); H is the hybrid parameter formed from the rotation R between the camera coordinate system and the kinect coordinate system and the kinect intrinsics K_d, H = R K_d^{-1}; T is the translation between the camera coordinate system and the kinect coordinate system. H and T are the parameters to be solved, where H is a 3*3 matrix and T is a 3-dimensional vector, 12 unknowns in total.
Two points should be noted when setting up the equation system. First, since every point on each plane yields one equation, the number of equations is obviously larger than the number of unknowns, and the system could even be solved from a single sample; but in order for the plane constraint to hold throughout the whole space rather than only at certain positions, all samples are still required to participate in the computation. Second, for points on boards that are farther away, the depth measured by the kinect is less accurate; when equations are set up for these points, the penalty coefficient gives them a smaller weight.
The concrete way of setting up the constraint equation AX = b in step (8) is: combine the 3*3 hybrid parameter H and the 3-dimensional vector T into a 12-dimensional vector X and set up the constrained equation system AX = b in the least-squares sense, solved with a linear optimization framework. Writing n_i = r_i3 = (n_i1, n_i2, n_i3)^T, x_ij = (x_ij1, x_ij2, x_ij3)^T and t_i as above, and vectorizing H row by row so that X = (H_11, H_12, H_13, H_21, ..., H_33, T_1, T_2, T_3)^T, the row of the matrix A corresponding to the point x_ij is configured as:
w_ij (n_i1 x_ij1, n_i1 x_ij2, n_i1 x_ij3, n_i2 x_ij1, n_i2 x_ij2, n_i2 x_ij3, n_i3 x_ij1, n_i3 x_ij2, n_i3 x_ij3, n_i1, n_i2, n_i3)
Correspondingly, the entry of the column vector b is:
w_ij n_i^T t_i
The equation system is solved with a least-squares algorithm to obtain H and T.
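A sketch of assembling and solving this weighted linear system with numpy follows. The weight function is left as a parameter because the exact penalty formula is not reproduced in the text; the 1/depth default below is a placeholder only.

```python
import numpy as np

def solve_hybrid_parameters(samples, weight=lambda depth: 1.0 / depth):
    """Solve A X = b for X = (vec(H) row-major, T), a 12-vector.

    samples : list of (n_i, t_i, P_i) per synchronized capture, where n_i = r_i3
              is the board normal in color camera coordinates, t_i the board
              translation, and P_i a (3, N_i) array of filtered points x_ij.
    weight  : penalty coefficient as a function of depth (placeholder form).
    """
    rows, rhs = [], []
    for n, t, P in samples:
        for x in P.T:                                 # x = x_ij, with x[2] equal to the depth
            w = weight(x[2])
            # n^T H x: the coefficient of H[k, l] is n[k] * x[l] (row-major vec(H)).
            rows.append(w * np.concatenate([np.outer(n, x).ravel(), n]))
            rhs.append(w * float(n @ t))              # right-hand side n_i^T t_i
    A = np.asarray(rows)                              # shape (total points, 12)
    b = np.asarray(rhs)
    X, *_ = np.linalg.lstsq(A, b, rcond=None)         # least-squares solution
    return X[:9].reshape(3, 3), X[9:]                 # H, T
```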
(7) Generate the registered depth map in real time:
Continuously capture depth maps with the kinect. For each pixel of every image, compute the product x = d [u, v, 1]^T of the homogeneous representation of its two-dimensional coordinates and its depth d. The three-dimensional coordinates of this point in the camera coordinate system are then obtained as X_c = H x + T. Project this point onto the color image to obtain its projected position (u', v') and the corresponding depth d', where z' [u', v', 1]^T = K_c X_c and d' = z'. Since different pixels of the depth map may be mapped to the same coordinates on the color image, in this case the minimum depth is kept as the depth of that point on the color image.
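A vectorized sketch of this per-frame mapping is given below, using the solved H and T together with the color intrinsics K_c from the calibration step and a minimum-depth buffer for colliding pixels; the raw-depth-to-meters factor is an assumption.

```python
import numpy as np

def map_depth_to_color(depth_map, H, T, K_c, color_size, depth_scale=0.001):
    """Warp one kinect depth frame onto the color image plane.

    color_size : (height, width) of the color image.
    Returns a depth image registered to the color camera; pixels that receive
    no depth are left at infinity. depth_scale converts raw depth to meters.
    """
    h, w = depth_map.shape
    H_c, W_c = color_size
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    d = depth_map.astype(np.float64).ravel() * depth_scale
    x = np.stack([d * us.ravel(), d * vs.ravel(), d], axis=0)   # x = d * [u, v, 1]^T
    valid = d > 0
    X_c = H @ x[:, valid] + T.reshape(3, 1)        # 3D points in color camera coordinates
    p = K_c @ X_c                                  # perspective projection
    u2 = np.round(p[0] / p[2]).astype(int)
    v2 = np.round(p[1] / p[2]).astype(int)
    z = X_c[2]
    inside = (u2 >= 0) & (u2 < W_c) & (v2 >= 0) & (v2 < H_c) & (z > 0)
    out = np.full((H_c, W_c), np.inf)
    # Keep the minimum depth when several depth pixels land on the same color pixel.
    np.minimum.at(out, (v2[inside], u2[inside]), z[inside])
    return out
```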
As shown in Fig. 2, the synchronously captured depth map (upper left) is mapped onto the color image (lower left) by the present invention. The mapping result is shown on the right; the color image and the depth map can clearly be seen to fit closely at the edges. A pixel 1 is taken from the person's ear in the depth map, and the algorithm of the present invention maps this pixel 1 exactly onto the ear 2 of the person in the color image; through this step, the depth of the person's ear in the color camera is obtained. Reference numeral 3 denotes the depth of pixel 1 in the depth camera, and reference numeral 4 denotes the depth of this pixel in the color camera.

Claims (4)

1. A real-time registration method for a depth map shot by a kinect and video shot by a color camera, characterized in that it comprises the following steps:
(1) fixing the high-definition color camera and the kinect depth camera respectively, and setting up a synchronized-capture signal;
(2) randomly placing a calibration checkerboard in front of the color camera and the kinect depth camera, capturing synchronously, and obtaining a color image and a depth image of the checkerboard at the same moment; repeating this operation until a group of synchronized captures at different angles and different distances is obtained;
(3) manually marking, in the area corresponding to the checkerboard in each depth map taken by the kinect, one or several arbitrarily shaped regions, and ensuring that the total number of pixels in the marked regions of each depth map is no less than 10; for the depth map of the i-th kinect capture, computing, for the j-th pixel in the manually marked regions, the product of the homogeneous representation of its two-dimensional coordinates and the depth of that point, denoted x_ij, the set of all such coordinates being denoted P_i;
(4) for the coordinate set P_i obtained in the i-th capture, substituting all of its elements into the space plane equation to perform plane fitting and computing the normal vector of the fitted plane; for every element x_ij of P_i, computing its deviation from the fitted plane and the average deviation, filtering out the points whose deviation exceeds the filtering threshold, and taking the remaining points as the new coordinate set P_i;
(5) using Zhang Zhengyou's classical color camera calibration algorithm to compute the intrinsic parameters K_c of the high-definition color camera and the relative position relationship between the color camera and the checkerboard in each capture, the relative position relationship of the i-th capture being represented by a rotation matrix R_i and a translation vector t_i;
(6) taking a 3*3 hybrid parameter H and a 3-dimensional vector T as unknowns; using the noise-filtered point coordinates obtained in step (4) and the rotation parameters of the i-th capture obtained in step (5) as known quantities, setting up the plane constraint linear equation: r_i3^T (H x_ij + T) = r_i3^T t_i, where r_i3 is the third column of R_i;
(7) applying a penalty coefficient w_ij to the equation in (6) according to the depth corresponding to the point coordinates x_ij, the penalty coefficient being larger for smaller depth values, so that the constraint equations of points with smaller depth carry larger weight in the equation system;
(8) setting up the constrained equation system AX = b and solving it with a linear optimization framework to obtain: 1) the translation parameter T between the color camera and the kinect; 2) the hybrid parameter H formed from the depth camera intrinsics and the relative rotation between the color camera and the kinect;
(9) using the two parameters H and T solved for in step (8) to map, in real time, the depth signal captured by the kinect onto the video captured by the high-definition camera.
2. The real-time registration method for a depth map shot by a kinect and video shot by a color camera according to claim 1, characterized in that: in step (7), the weight of said plane constraint linear equation in the whole equation system is decided according to the depth corresponding to the point coordinates x_ij, the penalty coefficient w_ij being a decreasing function of the depth (depth expressed in meters), wherein depth refers to the depth corresponding to this pixel.
3. The real-time registration method for a depth map shot by a kinect and video shot by a color camera according to claim 1, characterized in that: in step (8), the concrete way of setting up the constraint equation AX = b is: combining the 3*3 hybrid parameter H and the 3-dimensional vector T into a 12-dimensional vector X, setting up the constrained equation system AX = b in the least-squares sense, and solving it with a linear optimization framework, wherein, writing n_i = r_i3 = (n_i1, n_i2, n_i3)^T, x_ij = (x_ij1, x_ij2, x_ij3)^T and t_i, and vectorizing H row by row so that X = (H_11, H_12, H_13, H_21, ..., H_33, T_1, T_2, T_3)^T, the row of the matrix A corresponding to the point x_ij is configured as:
w_ij (n_i1 x_ij1, n_i1 x_ij2, n_i1 x_ij3, n_i2 x_ij1, n_i2 x_ij2, n_i2 x_ij3, n_i3 x_ij1, n_i3 x_ij2, n_i3 x_ij3, n_i1, n_i2, n_i3)
and, correspondingly, the entry of the column vector b is:
w_ij n_i^T t_i.
4. The real-time registration method for a depth map shot by a kinect and video shot by a color camera according to claim 1, characterized in that: in step (9), the method of mapping the depth signal captured by the kinect in real time onto the video captured by the high-definition camera using the two parameters H and T is: continuously capturing depth maps with the kinect; for each pixel of every image, computing the product x = d [u, v, 1]^T of the homogeneous representation of its two-dimensional coordinates and its depth d; then computing the three-dimensional coordinates of this point in the camera coordinate system: X_c = H x + T; projecting this point onto the color image to obtain its projected position (u', v') and the corresponding depth d' on the color image, where z' [u', v', 1]^T = K_c X_c and d' = z', thereby completing the real-time mapping of the kinect depth signal onto the video captured by the high-definition camera; since different pixels of the depth map may be mapped to the same coordinates on the color image, in this case the minimum depth is kept as the depth of that point on the color image.
CN201310609865.3A 2013-11-27 2013-11-27 Real-time registration method for depth maps shot by kinect and video shot by color camera Active CN103607584B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310609865.3A CN103607584B (en) 2013-11-27 2013-11-27 Real-time registration method for depth maps shot by kinect and video shot by color camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310609865.3A CN103607584B (en) 2013-11-27 2013-11-27 Real-time registration method for depth maps shot by kinect and video shot by color camera

Publications (2)

Publication Number Publication Date
CN103607584A CN103607584A (en) 2014-02-26
CN103607584B true CN103607584B (en) 2015-05-27

Family

ID=50125782

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310609865.3A Active CN103607584B (en) 2013-11-27 2013-11-27 Real-time registration method for depth maps shot by kinect and video shot by color camera

Country Status (1)

Country Link
CN (1) CN103607584B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616284B (en) * 2014-12-09 2017-08-25 中国科学院上海技术物理研究所 Pixel-level alignment methods of the coloured image of color depth camera to depth image
CN106254854B (en) * 2016-08-19 2018-12-25 深圳奥比中光科技有限公司 Preparation method, the apparatus and system of 3-D image
CN106548489B (en) * 2016-09-20 2019-05-10 深圳奥比中光科技有限公司 A kind of method for registering, the three-dimensional image acquisition apparatus of depth image and color image
CN106651794B (en) * 2016-12-01 2019-12-03 北京航空航天大学 A kind of projection speckle bearing calibration based on virtual camera
CN106780474B (en) * 2016-12-28 2020-01-10 浙江工业大学 Kinect-based real-time depth map and color map registration and optimization method
CN107016704A (en) * 2017-03-09 2017-08-04 杭州电子科技大学 A kind of virtual reality implementation method based on augmented reality
CN107440712A (en) * 2017-04-13 2017-12-08 浙江工业大学 A kind of EEG signals electrode acquisition method based on depth inductor
CN107564067A (en) * 2017-08-17 2018-01-09 上海大学 A kind of scaling method suitable for Kinect
CN109559349B (en) * 2017-09-27 2021-11-09 虹软科技股份有限公司 Method and device for calibration
CN109754427A (en) * 2017-11-01 2019-05-14 虹软科技股份有限公司 A kind of method and apparatus for calibration
CN109816731B (en) * 2017-11-21 2021-08-27 西安交通大学 Method for accurately registering RGB (Red Green blue) and depth information
CN109255819B (en) * 2018-08-14 2020-10-13 清华大学 Kinect calibration method and device based on plane mirror
CN109801333B (en) * 2019-03-19 2021-05-14 北京华捷艾米科技有限公司 Volume measurement method, device and system and computing equipment
CN110288657B (en) * 2019-05-23 2021-05-04 华中师范大学 Augmented reality three-dimensional registration method based on Kinect
CN112183378A (en) * 2020-09-29 2021-01-05 北京深睿博联科技有限责任公司 Road slope estimation method and device based on color and depth image
CN115375827B (en) * 2022-07-21 2023-09-15 荣耀终端有限公司 Illumination estimation method and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8134637B2 (en) * 2004-01-28 2012-03-13 Microsoft Corporation Method and system to increase X-Y resolution in a depth (Z) camera using red, blue, green (RGB) sensing
CN102163331A (en) * 2010-02-12 2011-08-24 王炳立 Image-assisting system using calibration method

Also Published As

Publication number Publication date
CN103607584A (en) 2014-02-26

Similar Documents

Publication Publication Date Title
CN103607584B (en) Real-time registration method for depth maps shot by kinect and video shot by color camera
CN103868460B (en) Binocular stereo vision method for automatic measurement based on parallax optimized algorithm
CN104599243B (en) A kind of virtual reality fusion method of multiple video strems and three-dimensional scenic
CN103810685B (en) A kind of super-resolution processing method of depth map
CN104463880B (en) A kind of RGB D image acquiring methods
CN107016704A (en) A kind of virtual reality implementation method based on augmented reality
CN105243637B (en) One kind carrying out full-view image joining method based on three-dimensional laser point cloud
US20110216160A1 (en) System and method for creating pseudo holographic displays on viewer position aware devices
CN104599317B (en) A kind of mobile terminal and method for realizing 3D scanning modeling functions
CN103337094A (en) Method for realizing three-dimensional reconstruction of movement by using binocular camera
CN104349155B (en) Method and equipment for displaying simulated three-dimensional image
CN106651794A (en) Projection speckle correction method based on virtual camera
CN110009672A (en) Promote ToF depth image processing method, 3D rendering imaging method and electronic equipment
CN107113416A (en) The method and system of multiple views high-speed motion collection
CN108053373A (en) One kind is based on deep learning model fisheye image correcting method
CN105654547B (en) Three-dimensional rebuilding method
CN109712232B (en) Object surface contour three-dimensional imaging method based on light field
WO2019085022A1 (en) Generation method and device for optical field 3d display unit image
CN206563985U (en) 3-D imaging system
CN106780629A (en) A kind of three-dimensional panorama data acquisition, modeling method
CN106033614B (en) A kind of mobile camera motion object detection method under strong parallax
CN107862718B (en) 4D holographic video capture method
US11146727B2 (en) Method and device for generating a panoramic image
CN103093426B (en) Method recovering texture and illumination of calibration plate sheltered area
CN111027415A (en) Vehicle detection method based on polarization image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant