CN103581653B - Method for non-interference depth extraction of optical coding depth camera system according to luminous intensity modulation - Google Patents

Method for non-interference depth extraction of optical coding depth camera system according to luminous intensity modulation

Info

Publication number
CN103581653B
CN103581653B CN201310535395.0A CN201310535395A CN103581653B CN 103581653 B CN103581653 B CN 103581653B CN 201310535395 A CN201310535395 A CN 201310535395A CN 103581653 B CN103581653 B CN 103581653B
Authority
CN
China
Prior art keywords
matrix
demodulation
light intensity
camera
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310535395.0A
Other languages
Chinese (zh)
Other versions
CN103581653A (en)
Inventor
刘荣科
葛帅
袁鑫
关博深
潘宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201310535395.0A priority Critical patent/CN103581653B/en
Publication of CN103581653A publication Critical patent/CN103581653A/en
Application granted granted Critical
Publication of CN103581653B publication Critical patent/CN103581653B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

The invention discloses a method for non-interference depth extraction in an optical coding depth camera system based on luminous intensity modulation. The method comprises the following steps: first, a modulation signal is selected; second, a light intensity amplitude vector is generated from the modulation signal; third, speckle patterns are collected; fourth, the collected speckle patterns are demodulated to obtain interference-free speckle patterns; finally, depth extraction is performed on the interference-free speckle patterns. The method achieves interference-free extraction of scene depth in an optical coding depth camera system based on luminous intensity modulation, and the light intensity modulation scheme it uses supports different numbers of cameras working simultaneously, suiting a variety of application conditions. The method is extensible: increasing or decreasing the number of depth cameras in the system does not affect the other cameras, so adaptability is good. All cameras in the system are coordinated by the scheduling device of the non-interference optical coding depth camera system, so no communication between cameras is needed, which reduces the cost of the cameras.

Description

A non-interference depth extraction method for an optical coding depth camera system based on light intensity modulation
Technical field
The invention belongs to the field of multi-view sampling and reconstruction of complex scenes using multiple depth cameras in multi-viewpoint three-dimensional video, and in particular relates to eliminating the interference between multiple optical coding depth cameras.
Background technology
Visual perception is an important way for humans to obtain information from the outside world; studies show that about 80% of the information a person learns comes from the visual system. Human spatial vision can reconstruct two-dimensional projections of the external world into a three-dimensional world, so that people perceive depth when observing things and obtain a good visual experience and sense of presence. With the development of digital television and display technology, viewers are no longer content with watching single-viewpoint flat high-definition television; watching stereoscopic video of a scene immersively from multiple angles is gradually becoming what viewers pursue. Stereoscopic images contain information about the three-dimensional world and give viewers an immersive sensation and a lifelike visual experience. At the same time, stereoscopic image technology is being applied ever more widely, and with great success, in fields such as scientific research, military affairs, education, industry and medicine. Compared with planar images, generating stereoscopic images requires taking human perception of stereoscopic information into account: the stereoscopic effect is closely related to a person's subjective feeling, and stereoscopic vision is the combined result of multiple human physiological and psychological factors. This makes the study of stereoscopic images more complicated than that of planar images.
In stereoscopic visual perception, stereoscopic imaging technology uses human binocular parallax to create a sense of depth. Binocular parallax arises from the difference in the spatial positions of a person's two eyes (the interpupillary distance of an adult is approximately 65 mm). Because of this positional difference, the two eyes observe an object from different angles, so on the planar images formed by the left and right eyes, points at different distances fall at slightly different positions and are seen at slightly different angles. The brain analyzes the different images obtained by the two eyes, derives the distances of objects in the scene, and produces a sense of depth; this is binocular parallax. Stereoscopic imaging technology applies this principle: images of the same scene from different angles are delivered to the two eyes, and the brain synthesizes these two two-dimensional images into a single three-dimensional view, giving the viewer a sense of depth.
A multi-view system with multiple depth cameras has many advantages:
(1) multiple depth cameras can reduce the number of texture cameras needed while still providing a detailed depth of field;
(2) compared with a single depth camera, multiple depth cameras can effectively avoid occlusion problems in the scene and improve scene reconstruction quality;
(3) depth maps are more compressible than texture maps, so after compression the bit rate is effectively reduced and the data are easier to store and transmit;
(4) multiple depth maps make view reconstruction easy, allowing users to switch viewpoints freely;
(5) the depth information obtained by multiple depth cameras can be used in complex applications such as augmented reality and natural user interfaces, which enhances the stereoscopic impression and interactivity of the system.
A current optical coding depth camera system is composed of a group of independent depth cameras. Fig. 1 is a schematic diagram of such a system: multiple optical coding cameras perform depth sampling of the scene simultaneously and obtain scene depth values.
However, the infrared speckle images projected by optical coding depth cameras, of which the Kinect is representative, are not separable. When multiple depth cameras work simultaneously, the speckles projected by the different cameras overlap in space, and the received signals cannot be distinguished when the captured image is used for depth matching; therefore the speckle pattern received by each camera is disturbed by the signals of the other depth cameras.
Summary of the invention
The object of the invention is to eliminate the interference between cameras in a system of multiple optical coding depth cameras (hereinafter referred to as cameras), so that when multiple cameras work simultaneously, each camera can obtain an interference-free speckle pattern and hence accurate depth information.
The present invention proposes a non-interference depth extraction method for an optical coding depth camera system based on light intensity modulation, comprising the following steps:
Step one: select a modulation signal;
Step two: generate a light intensity amplitude vector from the modulation signal;
Step three: collect speckle patterns;
Step four: demodulate the collected speckle patterns to obtain interference-free speckle patterns;
Step five: perform depth extraction on the interference-free speckle patterns.
The beneficial effects of the present invention are as follows:
(1) the method achieves interference-free extraction of scene depth in an optical coding depth camera system based on light intensity modulation, and the light intensity modulation scheme it adopts can support different numbers of cameras working simultaneously, adapting to a variety of application conditions;
(2) the method is extensible: increasing or decreasing the number of depth cameras in the system does not affect the other cameras, so adaptability is good;
(3) all cameras in the system are coordinated by the scheduling device of the non-interference optical coding depth camera system, so no communication between cameras is needed, which reduces the cost of the cameras.
Accompanying drawing explanation
Fig. 1 is a schematic diagram of the optical coding depth camera system of the present invention;
Fig. 2 is a block diagram of the non-interference depth extraction method for an optical coding depth camera system based on light intensity modulation of the present invention;
Fig. 3 is a schematic diagram of how the ratio amplitude vector is divided and of the relationship between vector elements and camera frame numbers in the non-interference depth extraction method of the present invention;
Fig. 4 is a schematic diagram of the demodulation window used in the demodulation process of the non-interference optical coding depth camera system based on light intensity modulation of the present invention;
Fig. 5A is a schematic diagram of the first frame of speckle pattern collected by a camera;
Fig. 5B is a schematic diagram of the second frame of speckle pattern collected by a camera;
Fig. 6 is the interference-free speckle pattern obtained after demodulating the frames of Fig. 5A and Fig. 5B.
Embodiment
The present invention is described in detail below with reference to the drawings and embodiments.
As shown in Fig. 1 and Fig. 2, the invention provides a non-interference depth extraction method for an optical coding depth camera system based on light intensity modulation. In this method, the amplitude of the infrared laser projector of each optical coding depth camera (hereinafter referred to as camera) is varied according to a ratio amplitude vector to achieve light intensity modulation. The interference-containing speckle patterns captured by the camera are then collected and demodulated to obtain interference-free speckle patterns. The ratio amplitude vectors are generated from a non-singular matrix, and demodulation is performed by taking a weighted sum of multiple captured speckle frames. The depth extraction method is implemented specifically through the following steps:
Step one: select a modulation signal.
Each optical coding depth camera in the system must first be connected to the scheduling device; the connection between the two may be wireless or wired. The scheduling device is responsible for coordinating all the optical coding depth cameras (hereinafter referred to as cameras) and provides each camera with the parameters needed for light intensity modulation, including intensity, period and phase.
Each camera comprises two parts, an infrared laser projector and an imaging camera, as shown in Fig. 2. The infrared laser projector projects onto the scene according to the modulation signal provided by the scheduling device; the imaging camera receives the speckle pattern of the scene frame by frame and demodulates the multi-frame speckle patterns, with the demodulation vector also provided by the scheduling device, finally obtaining an interference-free speckle pattern.
The scheduling device may adopt an FPGA+ROM structure: the ROM stores the light intensity amplitude vectors of the different cameras, and the FPGA sends the light intensity amplitude vectors to the projector of each camera and provides the projector synchronization signal. The synchronization signal may be triggered on a falling edge.
Choosing a suitable modulation signal comprises two steps: first, a non-singular matrix is generated according to the number of cameras; then the non-singular matrix is divided by rows to obtain the ratio amplitude vectors.
(1) Generate a non-singular matrix and determine the ratio amplitude matrix.
To guarantee that every optical coding depth camera can be demodulated, the matrix from which the weight vectors are generated must be non-singular. The generated non-singular matrix should be a square matrix of order n, where n equals the total number of cameras.
The non-singular matrix can be generated by applying elementary row (column) transformations to the identity matrix. The elementary row (column) transformations comprise the following three kinds:
Interchange: exchange two rows (columns).
Scaling: multiply all elements of a row (column) by a number k.
Addition: multiply all elements of a row (column) by a number k and add them to the corresponding elements of another row (column).
In particular, so that all light intensity values are positive, the multiplier k used in the scaling and addition transformations of the present invention is chosen to be a positive number. In the generation process, elementary row transformations or elementary column transformations may be used alone, or both kinds may be used together, as in formula (1).
In formula (1), matrix A is produced from the identity matrix E by elementary row (column) transformations and is therefore non-singular. For ease of calculation, matrix A is normalized by its greatest element to obtain matrix B, which is also non-singular.
(2) Divide the ratio amplitude matrix by rows.
The generated matrix B is called the "ratio amplitude matrix". The ratio amplitude matrix is divided by rows into n vectors, as shown in Fig. 3: each row corresponds to one vector, which serves as the ratio amplitude vector of the infrared laser projector of one camera and is used in step two to generate the light intensity amplitude vector. The ratio amplitude vector is also referred to as the modulation signal. A sketch of this construction is given below.
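As an illustration only, the following Python/NumPy sketch builds one possible ratio amplitude matrix and splits it by rows into modulation signals. The particular construction A = E + k·ones(n, n) (all entries positive, det(A) = 1 + n·k) and the function names are assumptions of this sketch, not requirements of the method.

import numpy as np

def ratio_amplitude_matrix(n: int, k: float = 1.0) -> np.ndarray:
    # One convenient non-singular matrix with all-positive entries
    # (det = 1 + n*k for k > 0); the greatest-element normalization
    # then yields matrix B, which is also non-singular.
    A = np.eye(n) + k * np.ones((n, n))
    return A / A.max()

def modulation_signals(B: np.ndarray) -> list:
    # Row i of the ratio amplitude matrix is the ratio amplitude vector
    # (modulation signal) assigned to camera i.
    return [B[i, :].copy() for i in range(B.shape[0])]

For n = 3 and k = 1, the matrix A equals the 3 x 3 amplitude matrix used in the embodiment, [[2, 1, 1], [1, 2, 1], [1, 1, 2]], before normalization.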
Step two: generate a light intensity amplitude vector from the ratio amplitude vector.
In the optical coding depth camera system of the present invention, the cameras do not communicate with each other; each camera is connected only to the scheduling device and cannot obtain the modulation information of the other cameras. The scheduling device therefore calculates the actual infrared laser intensity from the ratio amplitude vector of each camera to obtain the light intensity amplitude vector, and sends the light intensity amplitude of each frame to the corresponding camera; the camera then changes the transmitting intensity of its infrared laser projector according to the light intensity amplitude vector. Meanwhile, the scheduling device provides a frame synchronization signal, and each camera projects the modulated light intensity onto the scene through its infrared laser projector according to this signal.
Because the ratio amplitude vector only represents the relative proportions between the infrared laser intensities of the individual frames, not the actual values, the actual infrared laser intensity of each frame, i.e. the light intensity amplitude vector C, is calculated from the ratio amplitude vector as follows.
During shooting, the light intensity projected by the infrared laser projector of each camera is selected according to the frame number. Let the rated transmitting power of the infrared laser projector be I_max; then an element of size a in the ratio amplitude vector corresponds to an element of size I_max × a in the light intensity amplitude vector. In vector form, if the light intensity amplitude vector is C and the ratio amplitude vector is β, then:
C = I_max × β
After the optical coding depth camera obtains the light intensity amplitude vector C, the transmitting power of the infrared laser projector varies according to the elements of the vector. The light intensity changes cyclically: when the vector has n elements (i.e. there are n depth cameras in the system), the light intensity of frames 1 to n is emitted according to elements 1 to n of the light intensity amplitude vector, the light intensity of frames n+1 to 2n again follows elements 1 to n, and so on. Fig. 3 shows the division of the light intensity amplitude vector and the relationship between its elements and camera frame numbers in the non-interference depth extraction method of the present invention. A sketch of this mapping is given below.
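A minimal sketch of step two, assuming the ratio amplitude vector β has already been normalized as in step one; the function names and the 1-based frame numbering are choices of this illustration.

import numpy as np

def light_intensity_amplitude_vector(beta: np.ndarray, i_max: float) -> np.ndarray:
    # C = I_max x beta, the formula of step two.
    return i_max * beta

def intensity_for_frame(c: np.ndarray, frame: int) -> float:
    # Cyclic mapping: frames 1..n use elements 1..n of C, frames n+1..2n
    # use elements 1..n again, and so on (frame numbers start at 1).
    n = len(c)
    return float(c[(frame - 1) % n])

With the embodiment's values, beta = (1, 0.5, 0.5) for camera 1 and I_max = 60 mW, this gives C = (60, 30, 30) mW, so the projector power cycles 60, 30, 30, 60, and so on.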
Step three: collect speckle patterns.
Each depth camera collects speckle patterns according to the frame synchronization signal provided by the scheduling device and stores them for demodulation in the next step.
Step four: demodulate the collected speckle patterns to obtain interference-free speckle patterns.
Following step three, each optical coding depth camera obtains a series of speckle patterns. The camera demodulates this series frame by frame using a weighted sum, as follows.
According to the currently selected ratio amplitude matrix, the scheduling device sends the corresponding demodulation weight table to every camera, and each camera can perform demodulation according to its demodulation weight table.
The demodulation weight table is obtained by a simple matrix inversion. According to step one, the ratio amplitude matrix is non-singular, so its inverse matrix necessarily exists; this inverse is called the weight matrix.
The weight matrix is divided by rows into n vectors, which are sent to the depth cameras as weight vectors. The correspondence between weight vector elements and speckle pattern frames is analogous to that between light intensity amplitude vector elements and projected frames: when the weight vector has n elements (there are n depth cameras in the system), frames 1 to n correspond to elements 1 to n of the weight vector, frames n+1 to 2n again correspond to elements 1 to n, and so on (see Fig. 3). Taking the weighted sum of the captured speckle frames with their corresponding weight vector elements yields the corresponding interference-free speckle pattern.
Demodulation is performed with a demodulation window whose length equals the total number of cameras in the system, centered on the frame currently being demodulated. The gray-image sequence within the demodulation window is weighted by the corresponding elements of the weight vector and summed, giving the demodulation result of the current frame, as shown in Fig. 4. The weighted sum is computed as follows: let the gray values of the i-th captured frame of the interference-containing infrared speckle pattern be represented by the matrix P_i, with corresponding weight a_i; the weighted sum R over frames L to M is then:
R = Σ_{i=L}^{M} a_i · P_i
When frame m is demodulated, the demodulation window (of odd length n centered on m) spans frames m − (n−1)/2 to m + (n−1)/2, so the demodulation result is:
R_m = Σ_{i=m−(n−1)/2}^{m+(n−1)/2} a_i · P_i
where R_m is the gray matrix of the interference-free speckle pattern. A code sketch of this windowed weighted sum follows.
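The following sketch illustrates the weight-matrix computation and the windowed weighted sum, assuming an odd window length centered on the current frame and 1-based frame numbers; the helper names are hypothetical.

import numpy as np

def weight_vectors(ratio_amplitude_matrix: np.ndarray) -> list:
    # The weight matrix is the inverse of the ratio amplitude matrix;
    # dividing it by rows gives one demodulation weight vector per camera.
    W = np.linalg.inv(ratio_amplitude_matrix)
    return [W[i, :].copy() for i in range(W.shape[0])]

def demodulate_frame(frames: list, weights: np.ndarray, m: int) -> np.ndarray:
    # frames:  gray-value matrices P_1, P_2, ... (index 0 holds P_1)
    # weights: this camera's demodulation weight vector (length n)
    # m:       1-based index of the frame being demodulated
    n = len(weights)
    half = (n - 1) // 2
    R_m = np.zeros_like(frames[0], dtype=float)
    for i in range(m - half, m - half + n):   # window of length n centered on m
        a_i = weights[(i - 1) % n]            # weight element assigned to frame i
        R_m += a_i * frames[i - 1]
    return R_m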
Step five: perform depth extraction on the interference-free speckle patterns.
After demodulation is complete, depth extraction is performed by region matching. Each depth camera can complete the depth extraction on its own.
During extraction, for a given pixel of the captured speckle pattern, a neighborhood of fixed size is taken, and the cross-correlation function is computed against regions of the same size in the projected reference speckle pattern. The center of the reference speckle region corresponding to the maximum of the cross-correlation function is found, and the depth value is obtained from the positions of the reference speckle center and the pixel to be matched. A sketch of this matching step is given below.
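A rough sketch of region matching by normalized cross-correlation for a single pixel. The window size, the horizontal search range, and the concluding depth conversion are assumptions of this illustration and are not specified by the patent.

import numpy as np

def disparity_by_block_matching(captured: np.ndarray, reference: np.ndarray,
                                x: int, y: int, win: int = 9, search: int = 64) -> int:
    # Take a win x win neighborhood of the demodulated speckle pattern around
    # (x, y) and slide it horizontally over the reference (projected) speckle
    # pattern, keeping the offset with the largest normalized cross-correlation.
    h = win // 2
    patch = captured[y - h:y + h + 1, x - h:x + h + 1].astype(float)
    patch = (patch - patch.mean()) / (patch.std() + 1e-9)
    best_d, best_ncc = 0, -np.inf
    for d in range(search):
        if x - d - h < 0:
            break
        cand = reference[y - h:y + h + 1, x - d - h:x - d + h + 1].astype(float)
        cand = (cand - cand.mean()) / (cand.std() + 1e-9)
        ncc = float((patch * cand).mean())
        if ncc > best_ncc:
            best_ncc, best_d = ncc, d
    return best_d

With a calibrated focal length f and projector-camera baseline b (both hypothetical here), the depth of the pixel could then be estimated as z ≈ f·b / d for a positive disparity d, the usual triangulation step for structured-light sensors.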
Embodiment
An embodiment of the present invention is further elaborated below with reference to the drawings. In this embodiment, the system contains a total of 3 optical coding depth cameras, and the non-interference depth extraction method provided by the invention is used to perform depth extraction on the speckle patterns of the cameras in the system. The specific steps are as follows:
Step one: choose a suitable modulation signal.
An optical coding depth camera can obtain the required modulation information only after it is connected to the non-interference optical coding depth camera system; the scheduling device then finds that there are three optical coding depth cameras in the system.
According to step one of the embodiment, choosing a suitable modulation signal comprises two steps: first, a non-singular matrix is generated according to the number of optical coding depth cameras in the system; then the non-singular matrix is divided by rows to obtain the ratio amplitude vectors.
(1) Generate a non-singular matrix.
For simplicity, the following matrix is chosen as the generator matrix of the ratio amplitude vectors, i.e. the amplitude matrix:
2 1 1
1 2 1
1 1 2
(2) Divide the non-singular matrix by rows.
The generated matrix is divided by rows, giving 3 ratio amplitude vectors:
(2, 1, 1), (1, 2, 1), (1, 1, 2)
The first ratio amplitude vector is selected and sent to camera 1.
Step two: generate a light intensity amplitude vector from the ratio amplitude vector and transmit it.
Assume the rated projection power of the infrared laser projector is 60 mW. According to the light intensity selection principle, the greatest element of the amplitude matrix is 2, so the light intensity amplitude vector of camera 1 is (60, 30, 30): the projection power of the infrared laser projector of camera 1 cycles through 60 mW, 30 mW, 30 mW.
During shooting, the light intensity projected by the infrared laser projector of each camera selects the corresponding light intensity amplitude according to the frame number.
Step three: collect speckle patterns.
Each depth camera collects speckle patterns according to the synchronization signal provided by the system scheduling device and stores them for demodulation in the next step.
Step four: demodulate the collected speckle patterns to obtain interference-free speckle patterns.
Following step three, each optical coding depth camera obtains a series of speckle patterns and performs interference removal on this series of frames. The demodulation procedure, for the situation described in step one, is as follows.
According to the currently selected weight vectors, the system scheduling device sends the corresponding demodulation weight table to every depth camera, and each depth camera can perform demodulation according to its weight table.
The weight table is calculated by a simple matrix inversion; because the amplitude matrix is of full rank, its inverse matrix necessarily exists. For the matrix of step one:
The amplitude matrix is:
2 1 1
1 2 1
1 1 2
Its inverse matrix is:
[2 1 1; 1 2 1; 1 1 2]^(-1) = [0.75 −0.25 −0.25; −0.25 0.75 −0.25; −0.25 −0.25 0.75] = [a_11 a_12 a_13; a_21 a_22 a_23; a_31 a_32 a_33]
Dividing this matrix by rows gives the weight vectors:
α_1 = (a_11, a_12, a_13)
α_2 = (a_21, a_22, a_23)
α_3 = (a_31, a_32, a_33)
where α_1, α_2 and α_3 are the demodulation weight vectors of camera 1, camera 2 and camera 3 respectively. The multi-frame images obtained by camera 1 can be demodulated with its corresponding weight vector α_1:
The gray values of each frame of camera 1 are multiplied by the weight corresponding to that frame, giving a weighted gray-image sequence. Frames 1, 4, 7, … correspond to weight a_11; frames 2, 5, 8, … correspond to weight a_12; frames 3, 6, 9, … correspond to weight a_13; and so on.
During demodulation, a demodulation window of length 3 is used, centered on the frame currently being demodulated, as shown in Fig. 3. The weighted gray images within the demodulation window are added to obtain the demodulation result of the current frame.
That is, when demodulating frame 9:
R_9 = P_8 × a_12 + P_9 × a_13 + P_10 × a_11
A numerical check of this example is sketched below.
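A short NumPy check of the numbers in this example; the random 4 x 4 matrices merely stand in for real speckle frames and are not part of the patent.

import numpy as np

B = np.array([[2, 1, 1],
              [1, 2, 1],
              [1, 1, 2]], dtype=float)
W = np.linalg.inv(B)        # [[0.75, -0.25, -0.25], [-0.25, 0.75, -0.25], [-0.25, -0.25, 0.75]]
alpha1, alpha2, alpha3 = W  # demodulation weight vectors of cameras 1, 2 and 3
a11, a12, a13 = alpha1

# Demodulating frame 9 of camera 1 with a window of length 3 (frames 8..10):
# frame 8 -> a12, frame 9 -> a13, frame 10 -> a11, matching R_9 above.
frames = {i: np.random.rand(4, 4) for i in (8, 9, 10)}  # stand-in gray matrices
R9 = frames[8] * a12 + frames[9] * a13 + frames[10] * a11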
Fig. 5A and Fig. 5B show two frames of speckle patterns collected by the camera when the number of cameras is 2, and Fig. 6 is the corresponding demodulated pattern.
Step five: perform depth extraction on the interference-free speckle patterns.
After demodulation is complete, depth extraction is performed by region matching. Each depth camera can complete the depth extraction on its own. During extraction, for a given pixel of the captured speckle pattern, a neighborhood of fixed size is taken, and the cross-correlation function is computed against regions of the same size in the projected reference speckle pattern. The center of the reference speckle region corresponding to the maximum of the cross-correlation function is found, and the depth value is obtained from the positions of the reference speckle center and the pixel to be matched.

Claims (3)

1. A non-interference depth extraction method for an optical coding depth camera system based on light intensity modulation, characterized by comprising the following steps:
Step one: select a modulation signal, which comprises two steps:
first, a non-singular matrix A is generated according to the number of cameras; the non-singular matrix A is then normalized by its greatest element to obtain a matrix B, which is also non-singular and is called the ratio amplitude matrix; the ratio amplitude matrix is divided by rows to obtain ratio amplitude vectors, called modulation signals; the non-singular matrix A is a square matrix of order n, its order equal to the total number of cameras, and is obtained from the identity matrix E by elementary row transformations or elementary column transformations used alone, or by both kinds of elementary transformations used together;
Step two: generate a light intensity amplitude vector from the modulation signal, the transmitting power of the infrared laser projector varying according to the elements of the light intensity amplitude vector; the light intensity amplitude vector C is given by:
C = I_max × β,
where I_max is the rated transmitting power of the infrared laser projector and β is the ratio amplitude vector;
Step three: collect speckle patterns;
Step four: demodulate the collected speckle patterns to obtain interference-free speckle patterns; the demodulation uses a weighted sum, as follows:
according to the currently selected ratio amplitude matrix, the scheduling device sends the corresponding demodulation weight table to every camera; according to the demodulation weight table, the camera takes the weighted sum of the captured speckle frames with the corresponding weight vector elements, thereby obtaining the corresponding interference-free speckle pattern; demodulation is performed with a demodulation window whose length equals the total number of cameras in the system, centered on the frame currently being demodulated; the gray-image sequence within the demodulation window is weighted by the corresponding elements of the weight vector and summed, giving the demodulation result of the current frame; the weighted sum is computed as follows: for the i-th captured frame of the infrared speckle pattern, its gray values are represented by the matrix P_i with corresponding weight a_i, and the weighted sum R over frames L to M is:
R = Σ_{i=L}^{M} a_i · P_i
when frame m is demodulated, the demodulation window spans frames m − (n−1)/2 to m + (n−1)/2, and the demodulation result is:
R_m = Σ_{i=m−(n−1)/2}^{m+(n−1)/2} a_i · P_i
where R_m is the gray matrix of the interference-free speckle pattern;
Step five: perform depth extraction on the interference-free speckle patterns.
2. The non-interference depth extraction method for an optical coding depth camera system based on light intensity modulation according to claim 1, characterized in that: the light intensity of the infrared laser projector in step two changes cyclically, namely, when the light intensity amplitude vector has n elements, the light intensity of frames 1 to n is emitted according to elements 1 to n of the light intensity amplitude vector, the light intensity of frames n+1 to 2n is likewise emitted according to elements 1 to n of the light intensity amplitude vector, and so on.
3. The non-interference depth extraction method for an optical coding depth camera system based on light intensity modulation according to claim 1, characterized in that: the demodulation weight table is obtained by matrix inversion; the ratio amplitude matrix is non-singular, and its inverse matrix is called the weight matrix; the weight matrix is divided by rows into n vectors, each of which is sent to a camera as a weight vector; when the weight vector has n elements, frames 1 to n correspond to elements 1 to n of the weight vector, frames n+1 to 2n likewise correspond to elements 1 to n of the weight vector, and so on.
CN201310535395.0A 2013-11-01 2013-11-01 Method for non-interference depth extraction of optical coding depth camera system according to luminous intensity modulation Active CN103581653B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310535395.0A CN103581653B (en) 2013-11-01 2013-11-01 Method for non-interference depth extraction of optical coding depth camera system according to luminous intensity modulation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310535395.0A CN103581653B (en) 2013-11-01 2013-11-01 Method for non-interference depth extraction of optical coding depth camera system according to luminous intensity modulation

Publications (2)

Publication Number Publication Date
CN103581653A CN103581653A (en) 2014-02-12
CN103581653B true CN103581653B (en) 2015-03-11

Family

ID=50052435

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310535395.0A Active CN103581653B (en) 2013-11-01 2013-11-01 Method for non-interference depth extraction of optical coding depth camera system according to luminous intensity modulation

Country Status (1)

Country Link
CN (1) CN103581653B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104917938B (en) * 2014-03-14 2018-08-31 联想(北京)有限公司 Depth camera device for mobile communication equipment
CN113888614B (en) * 2021-09-23 2022-05-31 合肥的卢深视科技有限公司 Depth recovery method, electronic device, and computer-readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103152139A (en) * 2013-03-04 2013-06-12 哈尔滨工程大学 Multi-base sonar space-time channel multiplexing method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103152139A (en) * 2013-03-04 2013-06-12 哈尔滨工程大学 Multi-base sonar space-time channel multiplexing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Depth Acquisition from Density Modulated Binary Patterns; Zhe Yang et al.; Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on; 2013-06-28; entire document *
Reducing interference between multiple structured light depth sensors using motion; Andrew Maimone et al.; Virtual Reality Short Papers and Posters (VRW), 2012 IEEE; 2012-08-31; abstract, section 3, figures 2 and 4 *

Also Published As

Publication number Publication date
CN103581653A (en) 2014-02-12

Similar Documents

Publication Publication Date Title
US10715782B2 (en) 3D system including a marker mode
US6556236B1 (en) Intelligent method and system for producing and displaying stereoscopically-multiplexed images of three-dimensional objects for use in realistic stereoscopic viewing thereof in interactive virtual reality display environments
CN101282492B (en) Method for regulating display depth of three-dimensional image
Cheng et al. A novel 2Dd-to-3D conversion system using edge information
CN103974055B (en) 3D photo generation system and method
US8063930B2 (en) Automatic conversion from monoscopic video to stereoscopic video
US20040036763A1 (en) Intelligent method and system for producing and displaying stereoscopically-multiplexed images of three-dimensional objects for use in realistic stereoscopic viewing thereof in interactive virtual reality display environments
WO1996022660A1 (en) Intelligent method and system for producing and displaying stereoscopically-multiplexed images in virtual reality environments
US20040189796A1 (en) Apparatus and method for converting two-dimensional image to three-dimensional stereoscopic image in real time using motion parallax
CN101729920B (en) Method for displaying stereoscopic video with free visual angles
JP5673032B2 (en) Image processing apparatus, display apparatus, image processing method, and program
CN111988593B (en) Three-dimensional image color correction method and system based on depth residual optimization
US20150187132A1 (en) System and method for three-dimensional visualization of geographical data
US20170171534A1 (en) Method and apparatus to display stereoscopic image in 3d display system
IJsselsteijn et al. Human factors of 3D displays
CN107360416A (en) Stereo image quality evaluation method based on local multivariate Gaussian description
US10122987B2 (en) 3D system including additional 2D to 3D conversion
CN103581653B (en) Method for non-interference depth extraction of optical coding depth camera system according to luminous intensity modulation
US9258546B2 (en) Three-dimensional imaging system and image reproducing method thereof
Ideses et al. New methods to produce high quality color anaglyphs for 3-D visualization
WO2019017290A1 (en) Stereoscopic image display device
KR20050078737A (en) Apparatus for converting 2d image signal into 3d image signal
CN116996654A (en) New viewpoint image generation method, training method and device for new viewpoint generation model
CN108154549A (en) A kind of three dimensional image processing method
CN108833893A (en) A kind of 3D rendering bearing calibration shown based on light field

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant