CN102999910A - Image depth calculating method - Google Patents

Image depth calculating method

Info

Publication number
CN102999910A
Authority
CN
China
Prior art keywords
speckle pattern
depth information
image block
depth
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012104902570A
Other languages
Chinese (zh)
Other versions
CN102999910B (en)
Inventor
葛晨阳
姚慧敏
李倩敏
葛瑞龙
李伟
江豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NINGBO YINGXIN INFORMATION SCIENCE & TECHNOLOGY CO., LTD.
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201210490257.0A priority Critical patent/CN102999910B/en
Publication of CN102999910A publication Critical patent/CN102999910A/en
Application granted granted Critical
Publication of CN102999910B publication Critical patent/CN102999910B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides an image depth calculation method based on the active vision mode of structured light. Block-matching motion estimation is used to improve matching accuracy, and high-resolution depth information of the target object is obtained by laser triangulation or by a lookup table. The method avoids a complicated depth calculation formula, acquires depth information quickly and accurately, simplifies the hardware structure, is easy to implement, and is well suited to large-scale adoption.

Description

Image depth calculation method
Technical field
The invention belongs to the technical fields of image processing, laser triangulation, and natural human-machine interaction, and relates in particular to an image depth calculation method.
 
Background technology
Natural, harmonious human-machine interaction is a long-standing goal of machine control: the machine should understand commands that a person gives in a natural state. Using image processing to obtain depth information enables real-time recognition and motion capture for three-dimensional vision, making it possible for people to interact with terminals naturally through expressions, gestures, and body movements. Obtaining high-precision image depth information is also one of the technical difficulties in the development of machine vision systems. Image depth acquisition technology is gradually extending from game-console peripherals to other intelligent terminals, including smart TVs, smartphones, PCs, and tablets, bringing users a "science-fiction-like" control experience and a brand-new mode of human-machine interaction, with broad application prospects in entertainment, consumer electronics, healthcare, and education.
The active vision mode based on structured light can obtain the depth information of an image rather accurately. Compared with a binocular stereo camera, this mode obtains more reliable and stable depth-map information, is insensitive to ambient light, uses a simple stereo matching process, and requires little computation. Microsoft's motion-sensing interactive device Kinect, for example, adopts an active vision mode with infrared structured light: an infrared laser projects a fixed-pattern image onto the object surface, diffuse reflection at the surface forms speckle points, an image sensor captures the speckle image, and an image depth sensing chip then calculates the depth information of the object.
Although the above approach can obtain depth information accurately, its algorithm relies on expensive hardware, which makes practical application difficult.
 
Summary of the invention
The invention provides an image depth calculation method that avoids a complicated depth calculation formula, requires only a simple hardware structure, and can obtain high-resolution depth information in real time.
To solve the above technical problem, the technical solution adopted by the invention is an image depth calculation method characterized by the following steps:
1) collect a standard speckle pattern with known depth information as a reference;
2) collect an input speckle pattern sequence of the target object with an image sensor;
3) match each input speckle pattern in the input speckle pattern sequence against the standard speckle pattern to generate the motion vector of each image block in the input speckle pattern;
4) obtain the depth information corresponding to the motion vector of each image block in the input speckle pattern;
5) combine the depth information of all image blocks in the input speckle pattern to obtain the depth map of the target object.
With the image depth calculation method provided by the invention, which is based on the active vision mode of structured light, block-matching motion estimation improves matching accuracy, and high-resolution depth information of the target object is obtained by laser triangulation or by a lookup table. The method avoids a complicated depth calculation formula, obtains depth information quickly and accurately, simplifies the hardware structure, is easy to implement, and is well suited to large-scale adoption.
 
Description of drawings
Fig. 1 is a schematic diagram of the laser triangulation principle in one embodiment of the invention;
Fig. 2 is a schematic diagram of the lookup-table depth calculation in another embodiment of the invention;
Fig. 3 is a schematic diagram of the curve-fitting method relating displacement and depth distance in another embodiment of the invention;
Fig. 4 is a schematic diagram of the block-matching motion estimation in another embodiment of the invention.
 
Embodiment
The invention is described in further detail below with reference to specific embodiments.
In one embodiment, an image depth calculation method is provided, comprising the following steps:
1) collect a standard speckle pattern with known depth information as a reference;
2) collect an input speckle pattern sequence of the target object with an image sensor;
3) perform block-matching motion estimation between each input speckle pattern in the input speckle pattern sequence and the standard speckle pattern to generate the motion vectors of the image blocks in the input speckle pattern;
4) obtain the depth information corresponding to the motion vector of each image block in the input speckle pattern;
5) combine the depth information of all image blocks in the input speckle pattern to obtain the depth map of the target object.
The standard speckle pattern can be obtained as follows: a laser beam with a fixed pattern (infrared, visible, ultraviolet, or other invisible light) is projected onto a standard plane that is perpendicular to the central axis (Z axis) of the speckle projector and whose depth d_ref is known; the speckle pattern formed on that plane and captured by the image sensor is the standard speckle pattern. The input speckle pattern can be obtained in the same way, except that it contains the target object whose depth is to be measured and whose depth information is unknown.
The standard plane and the target object must lie within the range covered by the speckle projection and should contain, as far as possible, the whole speckle image formed by the fixed pattern.
In the present invention, unless otherwise specified, the depth d of an image block refers to the perpendicular distance from the plane through that image block, perpendicular to the central axis (Z axis) of the speckle projector, to the front end of the speckle projector.
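To make the five steps above concrete, the following Python sketch outlines one possible software form of the pipeline. It is only a schematic reconstruction: the helper callables `block_match` and `vector_to_depth` are placeholders for the block-matching and depth-conversion steps sketched later in this description, not functions defined by the patent.

```python
import numpy as np

def depth_map_from_speckle(input_pattern, reference_pattern, ref_depth,
                           block_match, vector_to_depth, block_size=16):
    """Schematic five-step pipeline: match each block of the input speckle
    pattern against the reference (standard) pattern, convert each motion
    vector to a depth value, and assemble the per-block depths into a map.
    `block_match` and `vector_to_depth` stand in for the matching and
    depth-conversion steps described later."""
    h, w = input_pattern.shape
    depth = np.zeros((h // block_size, w // block_size), dtype=np.float32)
    for by in range(depth.shape[0]):
        for bx in range(depth.shape[1]):
            top, left = by * block_size, bx * block_size
            block = input_pattern[top:top + block_size, left:left + block_size]
            # Step 3: block-matching motion estimation -> motion vector (dx, dy)
            dx, dy = block_match(block, reference_pattern, top, left)
            # Step 4: motion vector -> depth (triangulation or lookup table)
            depth[by, bx] = vector_to_depth(dx, dy, ref_depth)
    # Step 5: the combined per-block depths form the depth map of the object
    return depth
```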
Preferably, in another embodiment, the input speckle pattern and the standard speckle pattern are obtained by projecting a laser beam with a fixed pattern onto the target object and onto a surface with known depth information, respectively. This embodiment merely defines one way of acquiring the speckle patterns.
Preferably, in another embodiment, step 4) is characterized in that: according to the motion vector of each image block, with the focal length of the image sensor and the pixel pitch of the sensor known, the motion vector is used together with laser triangulation to compute the relative change Δd of the depth; Δd can be positive or negative, and adding Δd to the known depth information of the standard speckle pattern gives the depth information corresponding to that image block.
Fig. 1 is a schematic diagram of the laser triangulation principle in this embodiment. From the displacement (Δx, Δy) of each image block, with the focal length f of the image sensor and the pixel pitch of the sensor known, the displacement is used together with laser triangulation to compute the relative depth change Δd. Δd can be positive or negative; adding Δd to the known depth d_ref of the standard speckle pattern gives the depth d corresponding to the image block, where: when Δd is positive, the depth of the image block is greater than the known depth of the standard speckle pattern; when Δd is negative, the depth of the image block is less than the known depth of the standard speckle pattern; when Δd is zero, the depth of the image block equals the known depth of the standard speckle pattern.
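The patent does not spell out the triangulation formula itself. Purely as an illustration, the sketch below uses the standard relation for a projector-camera pair with baseline b, focal length f (in pixels), and reference depth d_ref; the actual formula used by the patent may differ, and the sign convention for Δx here is an assumption chosen to match the sign behaviour described above.

```python
def depth_from_disparity(dx_pixels, ref_depth, focal_px, baseline):
    """Assumed triangulation relation (not quoted from the patent):
    dx = f * b * (1/d_ref - 1/d)  =>  d = 1 / (1/d_ref - dx / (f * b)),
    with dx and f in pixels, and b, d_ref in the same length unit.
    Under this convention dx > 0 means the block lies beyond the reference
    plane (d > d_ref), dx < 0 means nearer, and dx == 0 means d == d_ref."""
    return 1.0 / (1.0 / ref_depth - dx_pixels / (focal_px * baseline))
```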
To further simplify the acquisition of depth information, in another embodiment a step of establishing a lookup table of the correspondence between motion vectors and depth information is added between step 3) and step 4); the depth of an image block in the input speckle pattern is then obtained by table lookup. The lookup table is built from the different displacements of the standard speckle pattern and their corresponding depth values.
Specifically, the correspondence in the lookup table between the horizontal displacement Δx (or the vertical displacement Δy) and the depth d can be obtained as follows: block-matching motion estimation is performed pairwise on standard speckle patterns collected at several different depth distances d_1, d_2, ..., d_n, yielding the displacements between the different standard speckle patterns, such as the horizontal displacements Δx or the vertical displacements Δy. Curve fitting then gives the relation between the horizontal displacement Δx (or the vertical displacement Δy) and the object depth distance d. From this curve equation, a lookup table mapping any horizontal displacement Δx (or any vertical displacement Δy) to the depth distance d can be generated, and the depth of the image block corresponding to any horizontal or vertical displacement can then be read from the table. Fig. 2 is a schematic diagram of the lookup-table depth calculation in this embodiment.
By looking up and combining the depths of all image blocks of the input speckle pattern, the depth map corresponding to the input speckle pattern is obtained.
Preferably, the depth map is represented as a grayscale image, for example with a larger gray value indicating a nearer point (a smaller depth d) and a smaller gray value indicating a farther point (a larger depth d); the opposite convention may also be used to represent the depth map as a grayscale image.
That is, the invention obtains the depth d by table lookup: in an implementation, the horizontal displacement Δx or the vertical displacement Δy is used as the input of the lookup table, and the depth d is obtained as its output. This avoids a complicated depth calculation formula, simplifies the hardware structure, and saves hardware resources.
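To make the lookup-table idea concrete, the sketch below is one possible software analogue of what the patent describes as a hardware table: the displacement is clamped, quantized, and used as an index into a precomputed depth array, so no depth formula is evaluated at run time. The class name, quantization step, and table layout are illustrative assumptions; the table contents can be produced, for example, by the curve-fitting sketch after the next paragraph.

```python
import numpy as np

class DepthLUT:
    """Maps a horizontal displacement dx (pixels) to a depth value by
    indexing a precomputed table instead of evaluating a formula."""

    def __init__(self, dx_min, dx_max, depths):
        self.dx_min = dx_min
        self.dx_max = dx_max
        self.depths = np.asarray(depths)                  # one depth per entry
        self.step = (dx_max - dx_min) / (len(self.depths) - 1)

    def lookup(self, dx):
        # Clamp to the table range, then quantize dx to the nearest entry.
        dx = min(max(dx, self.dx_min), self.dx_max)
        idx = int(round((dx - self.dx_min) / self.step))
        return self.depths[idx]
```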
Further, in another embodiment, a curve-fitting method relating displacement and depth information is provided (see Fig. 3). Taking the horizontal displacement Δx as an example: within the effective range of the speckle projector, the speckle pattern is projected onto planes perpendicular to the central axis (Z axis) of the speckle projector at certain depth distances. A group of mutually parallel, equally spaced planes is chosen, and the image sensor captures the standard speckle pattern corresponding to each of these depth distances. Block-matching motion estimation is then performed pairwise between adjacent standard speckle patterns, yielding a group of horizontal displacements, which are converted into a group of data pairs (d_k, Δx_k) of depth distance and horizontal displacement. An analytical expression d = f(Δx) of a suitable class is fitted to these data to reflect the dependence of the depth distance d on the horizontal displacement Δx. In this example Δx and d have a nonlinear relationship, and the undetermined parameters of the expression are computed according to a criterion measuring the goodness of fit (such as least squares). Once the expression d = f(Δx) is determined, the depth distance corresponding to any horizontal displacement Δx can be obtained, and the lookup table can be generated from it.
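As a hedged sketch of this curve-fitting step: assuming the measured pairs (Δx_k, d_k) are available, a least-squares fit of an inverse-type model d = 1/(a + b·Δx) (consistent with the triangulation relation assumed earlier, but not prescribed by the patent) gives a curve that can then be tabulated densely into a lookup table. The model form and the use of `scipy.optimize.curve_fit` are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_depth_curve(dx_samples, depth_samples):
    """Least-squares fit of an assumed inverse model d = 1 / (a + b*dx)
    to measured (displacement, depth) pairs from the reference planes."""
    def model(dx, a, b):
        return 1.0 / (a + b * dx)
    params, _ = curve_fit(model, dx_samples, depth_samples,
                          p0=(1.0 / np.mean(depth_samples), 0.0))
    a, b = params
    return a, b

def tabulate_lut(a, b, dx_min, dx_max, entries=1024):
    """Evaluate the fitted curve on a dense grid of displacements so that
    run-time depth recovery reduces to a single table lookup.  The chosen
    range must keep a + b*dx away from zero."""
    dxs = np.linspace(dx_min, dx_max, entries)
    return 1.0 / (a + b * dxs)
```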
In the proposed calculation method, the accuracy of the motion vectors of the input speckle pattern (expressed by the horizontal displacement Δx and the vertical displacement Δy) directly determines the accuracy of the depth distance d.
Most preferably, the following embodiment provides a block-matching motion estimation method that obtains the motion vectors of the input speckle pattern with high resolution.
Specifically, the method comprises the following steps: an image block of size m×n is extracted from the input speckle pattern; in the standard speckle pattern, within a search window of size M×N centered at the position corresponding to that image block, the best matching block of the image block is found using a search strategy and a similarity measure, where M, N, n, and m are all integers with M > m and N > n; the displacement (Δx, Δy) of the image block, i.e. its motion vector, is thereby obtained.
Preferably, the search strategy for the matching block is to move the block first along the horizontal direction within the search window and then advance row by row in the vertical direction, searching the candidate blocks one by one; with interpolation, the matching precision can reach the sub-pixel level.
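A minimal sketch of the search just described, assuming a sum-of-absolute-differences (SAD) similarity measure (the patent only says "similarity measurement index"): the m×n block from the input pattern is slid over a search window of the reference pattern, row by row, and the offset with the lowest SAD gives the integer-pixel motion vector. The sub-pixel refinement mentioned above is omitted here.

```python
import numpy as np

def block_match(block, reference, top, left, search_radius=32):
    """Find the displacement (dx, dy) of `block` (taken from the input
    speckle pattern at (top, left)) inside a search window of the reference
    pattern, using SAD as the similarity measure."""
    m, n = block.shape
    best, best_sad = (0, 0), np.inf
    # Scan the window horizontally first, then advance to the next row,
    # following the search strategy described in the text.
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + m > reference.shape[0] or x + n > reference.shape[1]:
                continue
            cand = reference[y:y + m, x:x + n]
            sad = np.abs(cand.astype(np.int32) - block.astype(np.int32)).sum()
            if sad < best_sad:
                best_sad, best = sad, (dx, dy)
    return best  # (dx, dy) is the motion vector of this image block
```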
In the present embodiment, a speckle pattern in the input speckle pattern sequence can be regarded as obtained from the standard speckle pattern through operations such as scaling and translation; by computing the motion vector (i.e. the displacement) of the corresponding speckle image block, the depth information of that block can be obtained with the lookup table.
This block-matching motion estimation method differs from the traditional block-matching algorithm. In the matching process, the traditional motion-estimation algorithm uses a step size equal to the matching block size, whereas here the step size of the image blocks extracted from the input speckle pattern can be smaller than the matching block size. The motion vector obtained by block matching then represents only the motion vector of the pixels in the central region of the block covered by the step size (the shaded region in Fig. 4), which strikes a compromise between the accuracy of the obtained motion vectors and the mismatching of the motion vectors of small objects. A sketch of this overlapping-block extraction follows.
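The sketch below illustrates one way to realize this trade-off: overlapping blocks are extracted with a step smaller than the block size, and each matched motion vector is assigned only to the block's central step×step region. The specific step value and the assignment rule are assumptions; `block_match` is reused from the previous sketch.

```python
import numpy as np

def dense_motion_field(input_pattern, reference, block=16, step=8,
                       search_radius=32):
    """Extract overlapping blocks (step < block size) and assign each
    block's motion vector only to its central step x step region."""
    h, w = input_pattern.shape
    field = np.zeros((h, w, 2), dtype=np.float32)
    margin = (block - step) // 2
    for top in range(0, h - block + 1, step):
        for left in range(0, w - block + 1, step):
            patch = input_pattern[top:top + block, left:left + block]
            dx, dy = block_match(patch, reference, top, left, search_radius)
            cy, cx = top + margin, left + margin
            field[cy:cy + step, cx:cx + step] = (dx, dy)
    return field
```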
Although the above embodiments are carried out on a specific apparatus and system, they do not limit the invention; those skilled in the art will readily appreciate that similar methods can be applied to similar pattern-projection systems or to other image depth calculation systems. Modifications and refinements made without departing from the spirit and scope of the invention shall therefore fall within the scope of the claims below.

Claims (9)

1. An image depth calculation method, characterized by comprising the following steps:
1) collecting a standard speckle pattern with known depth information as a reference;
2) collecting an input speckle pattern sequence of a target object with an image sensor;
3) performing block-matching motion estimation between each input speckle pattern in the input speckle pattern sequence and the standard speckle pattern to generate the motion vectors of the image blocks in the input speckle pattern;
4) obtaining the depth information corresponding to the motion vector of each image block in the input speckle pattern;
5) combining the depth information of all image blocks in the input speckle pattern to obtain the depth map of the target object.
2. The method of claim 1, wherein the input speckle pattern and the standard speckle pattern are obtained by projecting a laser beam with a fixed pattern onto the target object and onto a surface with known depth information, respectively.
3. The method of claim 1, wherein step 4) is characterized in that: according to the motion vector of each image block, with the focal length of the image sensor and the pixel pitch of the sensor known, the motion vector is used together with laser triangulation to compute the relative change of the depth, and adding this relative change to the known depth information of the standard speckle pattern gives the depth information corresponding to the image block, wherein: when the relative change is positive, the depth information corresponding to the image block is greater than the known depth information of the standard speckle pattern; when the relative change is negative, the depth information corresponding to the image block is less than the known depth information of the standard speckle pattern; and when the relative change is zero, the depth information corresponding to the image block equals the known depth information of the standard speckle pattern.
4. The method of claim 1, characterized in that a step of establishing a lookup table of the correspondence between motion vectors and depth information is further included between step 3) and step 4), and the depth information of the image blocks is obtained by table lookup.
5. The method of claim 4, characterized in that the lookup table is established from the different displacements of the standard speckle pattern and their corresponding depth information.
6. The method of any one of claims 4 to 5, characterized in that the lookup table is established as follows: block-matching motion estimation is performed pairwise on a plurality of standard speckle patterns with different depth information to obtain the displacements between the standard speckle patterns corresponding to the different depth information; a curve equation relating the horizontal displacement Δx or the vertical displacement Δy to the corresponding depth information d is computed by curve fitting; and a lookup table mapping any horizontal displacement Δx or any vertical displacement Δy to the depth information d is established from this curve equation.
7. The method of any one of claims 1 to 5, wherein the block-matching motion estimation comprises the following steps: extracting an image block of size m×n from the input speckle pattern; in the standard speckle pattern, within a search window of size M×N centered at the position corresponding to the image block, finding the best matching block of the image block using a search strategy and a similarity measure, where M, N, n, and m are all integers with M > m and N > n; and thereby obtaining the displacement (Δx, Δy) between the image block and its matching block, i.e. the motion vector of the image block.
8. The method of claim 7, wherein the search strategy is: move the block first along the horizontal direction, then advance row by row in the vertical direction, and search the matching blocks one by one.
9. The method of claim 7 or 8, characterized in that the step size of the image blocks extracted from the input speckle pattern is smaller than the size of the corresponding matching block.
CN201210490257.0A 2012-11-27 2012-11-27 Image depth calculating method Active CN102999910B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210490257.0A CN102999910B (en) 2012-11-27 2012-11-27 Image depth calculating method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210490257.0A CN102999910B (en) 2012-11-27 2012-11-27 Image depth calculating method

Publications (2)

Publication Number Publication Date
CN102999910A true CN102999910A (en) 2013-03-27
CN102999910B CN102999910B (en) 2015-07-22

Family

ID=47928444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210490257.0A Active CN102999910B (en) 2012-11-27 2012-11-27 Image depth calculating method

Country Status (1)

Country Link
CN (1) CN102999910B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810708A (en) * 2014-02-13 2014-05-21 西安交通大学 Method and device for perceiving depth of laser speckle image
CN103824318A (en) * 2014-02-13 2014-05-28 西安交通大学 Multi-camera-array depth perception method
CN103839258A (en) * 2014-02-13 2014-06-04 西安交通大学 Depth perception method of binarized laser speckle images
CN104537657A (en) * 2014-12-23 2015-04-22 西安交通大学 Laser speckle image depth perception method implemented through parallel search GPU acceleration
CN104691416A (en) * 2013-12-10 2015-06-10 通用汽车环球科技运作有限责任公司 A distance determination system for a vehicle using holographic techniques
CN104952074A (en) * 2015-06-16 2015-09-30 宁波盈芯信息科技有限公司 Deep perception calculation storage control method and device
CN105306922A (en) * 2014-07-14 2016-02-03 联想(北京)有限公司 Method and device for obtaining depth camera reference diagram
CN105844623A (en) * 2016-03-21 2016-08-10 西安电子科技大学 Target object depth information obtaining method based on De sequence hybrid coding
CN106254738A (en) * 2016-08-24 2016-12-21 深圳奥比中光科技有限公司 Dual image acquisition system and image-pickup method
CN106331453A (en) * 2016-08-24 2017-01-11 深圳奥比中光科技有限公司 Multi-image acquisition system and image acquisition method
CN108474652A (en) * 2016-01-04 2018-08-31 高通股份有限公司 Depth map in structured light system generates
CN108955641A (en) * 2018-04-23 2018-12-07 维沃移动通信有限公司 A kind of depth camera method, depth camera equipment and mobile terminal
CN109615652A (en) * 2018-10-23 2019-04-12 西安交通大学 A kind of depth information acquisition method and device
CN109870126A (en) * 2017-12-05 2019-06-11 宁波盈芯信息科技有限公司 A kind of area computation method and a kind of mobile phone for being able to carry out areal calculation
CN109903328A (en) * 2017-12-11 2019-06-18 宁波盈芯信息科技有限公司 A kind of device and method that the object volume applied to smart phone measures
CN113720275A (en) * 2021-08-11 2021-11-30 江西联创电子有限公司 Three-dimensional morphology measuring method and system and method for establishing depth information calibration table

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101501442A (en) * 2006-03-14 2009-08-05 普莱姆传感有限公司 Depth-varying light fields for three dimensional sensing
CN102710951A (en) * 2012-05-09 2012-10-03 天津大学 Multi-view-point computing and imaging method based on speckle-structure optical depth camera

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101501442A (en) * 2006-03-14 2009-08-05 普莱姆传感有限公司 Depth-varying light fields for three dimensional sensing
CN102710951A (en) * 2012-05-09 2012-10-03 天津大学 Multi-view-point computing and imaging method based on speckle-structure optical depth camera

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIAN CHEN ET AL: "A New Block-Matching Based Approach for Automatic 2D to 3D Conversion", 《2012 4TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND TECHNOLOGY》 *
宋诗超 et al.: "Research on 3D human body scanning, reconstruction and measurement technology based on Kinect", 《天津工业大学学报》 (Journal of Tianjin Polytechnic University) *
邹小平 et al.: "Method for improving depth resolution in laser sheet-of-light 3D sensing", 《激光技术》 (Laser Technology) *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104691416A (en) * 2013-12-10 2015-06-10 通用汽车环球科技运作有限责任公司 A distance determination system for a vehicle using holographic techniques
CN103824318B (en) * 2014-02-13 2016-11-23 西安交通大学 A kind of depth perception method of multi-cam array
CN103824318A (en) * 2014-02-13 2014-05-28 西安交通大学 Multi-camera-array depth perception method
CN103839258A (en) * 2014-02-13 2014-06-04 西安交通大学 Depth perception method of binarized laser speckle images
CN103810708A (en) * 2014-02-13 2014-05-21 西安交通大学 Method and device for perceiving depth of laser speckle image
CN103810708B (en) * 2014-02-13 2016-11-02 西安交通大学 A kind of laser speckle image depth perception method and device
CN105306922B (en) * 2014-07-14 2017-09-29 联想(北京)有限公司 Acquisition methods and device of a kind of depth camera with reference to figure
CN105306922A (en) * 2014-07-14 2016-02-03 联想(北京)有限公司 Method and device for obtaining depth camera reference diagram
CN104537657A (en) * 2014-12-23 2015-04-22 西安交通大学 Laser speckle image depth perception method implemented through parallel search GPU acceleration
CN104952074B (en) * 2015-06-16 2017-09-12 宁波盈芯信息科技有限公司 Storage controlling method and device that a kind of depth perception is calculated
CN104952074A (en) * 2015-06-16 2015-09-30 宁波盈芯信息科技有限公司 Deep perception calculation storage control method and device
US11057608B2 (en) 2016-01-04 2021-07-06 Qualcomm Incorporated Depth map generation in structured light system
CN108474652A (en) * 2016-01-04 2018-08-31 高通股份有限公司 Depth map in structured light system generates
CN105844623A (en) * 2016-03-21 2016-08-10 西安电子科技大学 Target object depth information obtaining method based on De sequence hybrid coding
CN106331453A (en) * 2016-08-24 2017-01-11 深圳奥比中光科技有限公司 Multi-image acquisition system and image acquisition method
CN106254738A (en) * 2016-08-24 2016-12-21 深圳奥比中光科技有限公司 Dual image acquisition system and image-pickup method
CN109870126A (en) * 2017-12-05 2019-06-11 宁波盈芯信息科技有限公司 A kind of area computation method and a kind of mobile phone for being able to carry out areal calculation
CN109903328A (en) * 2017-12-11 2019-06-18 宁波盈芯信息科技有限公司 A kind of device and method that the object volume applied to smart phone measures
CN109903328B (en) * 2017-12-11 2021-12-21 宁波盈芯信息科技有限公司 Object volume measuring device and method applied to smart phone
CN108955641A (en) * 2018-04-23 2018-12-07 维沃移动通信有限公司 A kind of depth camera method, depth camera equipment and mobile terminal
CN108955641B (en) * 2018-04-23 2020-11-17 维沃移动通信有限公司 Depth camera shooting method, depth camera shooting equipment and mobile terminal
CN109615652A (en) * 2018-10-23 2019-04-12 西安交通大学 A kind of depth information acquisition method and device
CN109615652B (en) * 2018-10-23 2020-10-27 西安交通大学 Depth information acquisition method and device
CN113720275A (en) * 2021-08-11 2021-11-30 江西联创电子有限公司 Three-dimensional morphology measuring method and system and method for establishing depth information calibration table

Also Published As

Publication number Publication date
CN102999910B (en) 2015-07-22

Similar Documents

Publication Publication Date Title
CN102999910B (en) Image depth calculating method
CN102970548B (en) Image depth sensing device
US10518414B1 (en) Navigation method, navigation system, movement control system and mobile robot
CN106780618B (en) Three-dimensional information acquisition method and device based on heterogeneous depth camera
US9432593B2 (en) Target object information acquisition method and electronic device
CN103424112B (en) A kind of motion carrier vision navigation method auxiliary based on laser plane
CN104463108A (en) Monocular real-time target recognition and pose measurement method
CN103824318A (en) Multi-camera-array depth perception method
CN103020988B (en) Method for generating motion vector of laser speckle image
CN103839258A (en) Depth perception method of binarized laser speckle images
CN103796004A (en) Active binocular depth sensing method of structured light
CN104317391A (en) Stereoscopic vision-based three-dimensional palm posture recognition interactive method and system
CN104776815A (en) Color three-dimensional profile measuring device and method based on Dammann grating
CN103796001A (en) Method and device for synchronously acquiring depth information and color information
CN103810708A (en) Method and device for perceiving depth of laser speckle image
CN102519434A (en) Test verification method for measuring precision of stereoscopic vision three-dimensional recovery data
CN102221331A (en) Measuring method based on asymmetric binocular stereovision technology
CN105277144A (en) Land area rapid detection method based on binocular vision and detection device thereof
Shahnewaz et al. Color and depth sensing sensor technologies for robotics and machine vision
Beltran et al. A comparison between active and passive 3d vision sensors: Bumblebeexb3 and Microsoft Kinect
Yang et al. Vision system of mobile robot combining binocular and depth cameras
CN101777182B (en) Video positioning method of coordinate cycling approximation type orthogonal camera system and system thereof
Piérard et al. I-see-3d! an interactive and immersive system that dynamically adapts 2d projections to the location of a user's eyes
CN101794444B (en) Coordinate cyclic approach type dual orthogonal camera system video positioning method and system
Jiao et al. A smart post-rectification algorithm based on an ANN considering reflectivity and distance for indoor scenario reconstruction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: NINGBO YINGXIN INFORMATION SCIENCE + TECHNOLOGY CO

Free format text: FORMER OWNER: XI AN JIAOTONG UNIV.

Effective date: 20150121

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 710049 XI AN, SHAANXI PROVINCE TO: 315199

TA01 Transfer of patent application right

Effective date of registration: 20150121

Address after: 315199 room 298, No. 412, bachelor Road, Ningbo, Zhejiang, Yinzhou District

Applicant after: NINGBO YINGXIN INFORMATION SCIENCE & TECHNOLOGY CO., LTD.

Address before: Beilin District Xianning West Road 710049, Shaanxi city of Xi'an province No. 28

Applicant before: Xi'an Jiaotong University

C14 Grant of patent or utility model
GR01 Patent grant