CN102999910B - Image depth calculating method - Google Patents

Image depth calculating method

Info

Publication number
CN102999910B
CN102999910B (application CN201210490257.0A)
Authority
CN
China
Prior art keywords
speckle pattern
depth information
block
image block
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210490257.0A
Other languages
Chinese (zh)
Other versions
CN102999910A (en)
Inventor
葛晨阳 (Ge Chenyang)
姚慧敏 (Yao Huimin)
李倩敏 (Li Qianmin)
葛瑞龙 (Ge Ruilong)
李伟 (Li Wei)
江豪 (Jiang Hao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NINGBO YINGXIN INFORMATION SCIENCE & TECHNOLOGY CO., LTD.
Original Assignee
NINGBO YINGXIN INFORMATION SCIENCE & TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NINGBO YINGXIN INFORMATION SCIENCE & TECHNOLOGY Co Ltd filed Critical NINGBO YINGXIN INFORMATION SCIENCE & TECHNOLOGY Co Ltd
Priority to CN201210490257.0A priority Critical patent/CN102999910B/en
Publication of CN102999910A publication Critical patent/CN102999910A/en
Application granted granted Critical
Publication of CN102999910B publication Critical patent/CN102999910B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides an image depth calculating method. The method is based on an active vision scheme using structured light and adopts a block-matching motion estimation method to improve matching accuracy. High-resolution depth information of a target object is acquired through laser triangulation or a table lookup method. The method avoids complicated depth calculation formulas, acquires depth information quickly and accurately, simplifies the hardware structure, is easy to implement, and facilitates large-scale adoption.

Description

Image depth calculation method
Technical field
The invention belongs to the technical fields of image processing, laser triangulation, and natural human-computer interaction, and specifically relates to an image depth calculation method.
Background technology
Natural, harmonious human-machine interaction is an ideal that people have long pursued in operating machines: the machine should understand the commands a person conveys in a natural state. Using image processing techniques to obtain depth information enables real-time recognition of three-dimensional images and motion capture, making it possible for people to interact with terminals in natural ways such as facial expressions, gestures, and body movements. Obtaining high-precision image depth information is also one of the technical difficulties in the development of machine vision systems. Image depth acquisition technology is gradually extending from game-console peripherals to other intelligent terminals, including smart TVs, smartphones, PCs, and tablets, bringing users "science-fiction-like" control and a brand-new human-machine interaction experience, with broad application prospects in fields such as entertainment, consumer electronics, medical care, and education.
An active vision scheme based on structured light can obtain image depth information more accurately. Compared with binocular stereo cameras, this scheme has the advantages that the acquired depth map is more reliable and stable, it is less affected by ambient light, the stereo matching process is simple, and the computational load of the algorithm is small. For example, Microsoft's motion-sensing device Kinect adopts an active vision scheme with infrared structured light: an infrared laser projects a fixed-pattern image onto the surface of an object; diffuse reflection from the surface forms speckle points; a speckle image is captured by an image sensor; and an image depth sensor chip then computes the depth information of the object.
Although the above method can obtain depth information accurately, its algorithm relies on expensive hardware, which hinders practical application.
Summary of the invention
The invention provides an image depth calculation method that avoids complicated depth calculation formulas, requires only a simple hardware structure, and can acquire high-resolution depth information in real time.
To solve the above technical problem, the technical solution adopted by the invention is an image depth calculation method characterized by comprising the following steps:
1) collecting a standard speckle pattern with known depth information as a reference;
2) collecting an input speckle pattern sequence of a target object through an image sensor;
3) matching each input speckle pattern in the input speckle pattern sequence against the standard speckle pattern to generate a motion vector for each image block in the input speckle pattern;
4) obtaining the depth information corresponding to the motion vector of each image block in the input speckle pattern;
5) combining the depth information of all image blocks in the input speckle pattern to obtain a depth map of the target object.
The image depth calculation method provided by the invention is based on an active vision scheme using structured light and adopts block-matching motion estimation to improve matching accuracy. High-resolution depth information of the target object is obtained by laser triangulation or table lookup, avoiding complicated depth calculation formulas, achieving fast and accurate acquisition of depth information, and simplifying the hardware structure; the method is easy to implement and favorable for large-scale adoption.
Brief description of the drawings
Fig. 1 is a schematic diagram of the laser triangulation principle of one embodiment of the invention;
Fig. 2 is a schematic diagram of the depth-calculation lookup-table scheme of another embodiment of the invention;
Fig. 3 is a schematic diagram of the curve-fitting method relating displacement to depth distance in another embodiment of the invention;
Fig. 4 is a schematic diagram of the block-based motion estimation of another embodiment of the invention.
Embodiments
The invention is described in further detail below in conjunction with specific embodiments.
In one embodiment, an image depth calculation method is provided, comprising the following steps:
1) collecting a standard speckle pattern with known depth information as a reference;
2) collecting an input speckle pattern sequence of a target object through an image sensor;
3) performing block-based motion estimation between each input speckle pattern in the input speckle pattern sequence and the standard speckle pattern to generate motion vectors of the image blocks in the input speckle pattern;
4) obtaining the depth information corresponding to the motion vector of each image block in the input speckle pattern;
5) combining the depth information of all image blocks in the input speckle pattern to obtain a depth map of the target object.
The standard speckle pattern can be acquired as follows: a laser beam of fixed pattern (infrared, visible, ultraviolet, or other invisible light) is projected onto a standard plane with known depth information that is perpendicular to the central axis (Z axis) of the speckle projector; the speckle pattern formed on this plane and captured by the image sensor is the standard speckle pattern. The input speckle pattern can be acquired in the same way, except that it contains the target object whose depth is to be measured, and that depth information is unknown.
Both the standard plane and the target object should lie within the range illuminated by the speckle projector and should, as far as possible, contain the whole speckle image formed by the fixed pattern.
In the present invention, unless otherwise specified, the depth d of an image block refers to the perpendicular distance from the plane at which the image block lies, perpendicular to the central axis (Z axis) of the speckle projector, to the front end of the speckle projector.
Preferably, in another embodiment, the input speckle pattern and the standard speckle pattern are obtained by projecting a laser beam of fixed pattern onto the target object and onto an object surface with known depth information, respectively. This embodiment merely defines one way of acquiring the speckle patterns.
Preferably, in another embodiment, step 4) is characterized in that: according to the motion vector of each image block, with the image sensor focal length and the sensor pixel pitch known, the motion vector is used in combination with laser triangulation to obtain the relative depth change, which may be positive or negative; adding this relative change to the known depth information of the standard speckle pattern yields the depth information corresponding to the image block.
Fig. 1 is a schematic diagram of the laser triangulation principle of this embodiment. According to the displacement (Δx, Δy) of each image block, with the image sensor focal length and the sensor pixel pitch known, the displacement is used in combination with laser triangulation to obtain the relative depth change Δd, which may be positive or negative; adding Δd to the known depth information of the standard speckle pattern yields the depth information corresponding to the image block, where: when Δd is positive, the depth of the image block is greater than the known depth of the standard speckle pattern; when Δd is negative, it is less; and when Δd is zero, it is equal.
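For concreteness, a hedged example: the patent does not state its triangulation formula, but a relation commonly used in speckle-disparity depth sensors reproduces the sign behaviour just described. Here `f_px` (focal length in pixels), `b_mm` (projector-to-sensor baseline), and `d0_mm` (reference depth) are assumed parameters, not values from the patent:

```python
def depth_from_disparity(dx_px, d0_mm, f_px, b_mm):
    """Assumed triangulation model: disparity dx = f*b*(1/d0 - 1/d),
    hence d = 1 / (1/d0 - dx/(f*b)).  dx = 0 gives the reference
    depth d0; dx > 0 gives a greater depth; dx < 0 a smaller one."""
    return 1.0 / (1.0 / d0_mm - dx_px / (f_px * b_mm))
```

With f_px = 580, b_mm = 75, and d0_mm = 1000, a zero displacement returns the reference depth, and the sign of the displacement decides whether the block is farther or closer, matching the three cases above.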
To further simplify depth acquisition, in another embodiment a step of building a lookup table mapping motion vectors to depth information is added between step 3) and step 4); the depth of each image block in the input speckle pattern is then obtained by table lookup. The lookup table is built from the different displacements of the standard speckle pattern and their corresponding depth values.
Specifically, the correspondence between horizontal displacement Δx (or vertical displacement Δy) and depth d in the lookup table can be obtained as follows: perform block-based motion estimation pairwise between standard speckle patterns collected at multiple different depth distances to obtain the displacements between the different standard speckle patterns, i.e., horizontal displacements Δx or vertical displacements Δy; obtain the relation between Δx (or Δy) and the object depth distance d by curve fitting; and generate, from the resulting curve equation, a lookup table mapping any horizontal displacement Δx (or vertical displacement Δy) to the corresponding depth distance d. Using this lookup table, the depth information d of the image block corresponding to any Δx (or Δy) can be obtained. Fig. 2 is a schematic diagram of the lookup-table scheme of this embodiment.
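A minimal sketch of such a table, assuming an inverse-linear disparity-depth model purely for illustration (in the patent, the actual curve comes from fitting measured calibration data):

```python
import numpy as np

def build_lut(d0, f_b, dx_min=-30.0, dx_max=30.0, step=0.25):
    """Tabulate an assumed disparity->depth model d = 1/(1/d0 - dx/f_b)
    over discrete displacement values dx (f_b is an assumed constant,
    e.g. focal length times baseline)."""
    dxs = np.arange(dx_min, dx_max + step, step)
    depths = 1.0 / (1.0 / d0 - dxs / f_b)
    return dxs, depths

def lut_depth(dx, dxs, depths):
    """Nearest-entry lookup, as a hardware table would perform it."""
    return float(depths[np.argmin(np.abs(dxs - dx))])
```

The table is built once from the fitted curve; at run time each block's displacement indexes the table directly, with no per-block depth formula to evaluate, which is the hardware saving the text describes.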
By looking up and combining the depths of all image blocks of an input speckle pattern, the depth map corresponding to that input speckle pattern is obtained.
Preferably, the depth map is represented as a grayscale image: for example, a larger gray value indicates a closer point, i.e., a smaller value of d, and a smaller gray value indicates a farther point, i.e., a larger value of d. The opposite grayscale convention may also be used to represent the depth map.
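The grayscale convention can be sketched as a simple linear mapping (assumed for illustration; the patent does not specify the mapping):

```python
import numpy as np

def depth_to_gray(depth, d_min, d_max, near_is_bright=True):
    """Map a depth map to an 8-bit grayscale image.  With the
    convention in the text, nearer (smaller d) -> larger gray value;
    pass near_is_bright=False for the opposite convention."""
    norm = np.clip((np.asarray(depth, float) - d_min) / (d_max - d_min), 0.0, 1.0)
    gray = (1.0 - norm) if near_is_bright else norm
    return np.round(gray * 255).astype(np.uint8)
```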
That is, the invention obtains the depth d by table lookup: in a concrete implementation, the horizontal displacement Δx or the vertical displacement Δy is used as the input value of the lookup table and the depth d is obtained as the output value, thereby avoiding complicated depth calculation formulas and achieving a simplified hardware structure and savings in hardware resources.
Further, another embodiment provides a curve-fitting method for the relation between displacement and depth information (see Fig. 3). Taking the horizontal displacement Δx as an example: within the effective range of the speckle projector, choose a group of mutually parallel, equally spaced planes perpendicular to the central axis (Z axis) of the speckle projector at certain depth distances, project the speckle pattern onto them through the speckle projector, and capture through the image sensor a group of standard speckle patterns corresponding to different depth distances. Then perform block-based motion estimation pairwise between adjacent standard speckle patterns to obtain a group of horizontal displacements, and by formula conversion obtain a group of data pairs of depth distance d and horizontal displacement Δx. An analytical expression fitted to these data pairs then reflects the dependence of the horizontal displacement Δx on the depth distance d. In this example the relation is nonlinear; the undetermined parameters in the expression are computed by a criterion measuring goodness of fit (such as least squares). Once the expression is determined, the depth distance d corresponding to any horizontal displacement Δx can be obtained, and the lookup table can be generated accordingly.
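The least-squares fitting step can be illustrated as follows, under the assumption (not stated in the patent) that the reciprocal depth is linear in the horizontal displacement; the synthetic calibration pairs below stand in for measured data:

```python
import numpy as np

# Synthetic calibration data under an assumed model in which
# 1/d is linear in dx (here dx = 43500*(1/1000 - 1/d)); with this
# model, ordinary least squares on (dx, 1/d) recovers the curve.
dx_obs = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
d_true = 1.0 / (1.0 / 1000.0 - dx_obs / 43500.0)

# Fit 1/d = c1*dx + c0 by least squares (np.polyfit, degree 1).
c1, c0 = np.polyfit(dx_obs, 1.0 / d_true, 1)

def fitted_depth(dx):
    """Depth predicted by the fitted curve for any displacement dx."""
    return 1.0 / (c1 * dx + c0)
```

Once `c0` and `c1` are determined from calibration, `fitted_depth` can be evaluated at any displacement, which is exactly what populates the lookup table.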
In the calculation method proposed by the invention, the accuracy of the motion vectors of the input speckle pattern (expressed by the horizontal displacement Δx and the vertical displacement Δy) directly determines the accuracy of the depth distance d.
Most preferably, the following embodiment provides a block-matching motion estimation method capable of obtaining high-precision motion vectors of the input speckle pattern.
It specifically comprises the following steps: extract from the input speckle pattern an image block of size m × n; in the standard speckle pattern, within a search window of size M × N centered at the position corresponding to the image block, find the optimal match block of this image block by a search strategy and a similarity measure, where M, N, m, n are all integers, M > m, and N > n; thereby obtain the displacement (Δx, Δy) of this image block, i.e., its motion vector.
Preferably, the search strategy for the match block within the search window is a block-by-block exhaustive search, first moving the block in the horizontal direction and then incrementing the row in the vertical direction; the matching interpolation precision can reach sub-pixel level.
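Sub-pixel precision around the best integer shift is commonly obtained by parabolic interpolation of the similarity scores; this is a hedged sketch of one such refinement, not necessarily the interpolation the patent uses:

```python
def subpixel_offset(sad_left, sad_center, sad_right):
    """Fit a parabola through the SAD values at integer shifts
    -1, 0, +1 around the best match and return the fractional
    offset of its minimum (lies in [-0.5, 0.5] when the center
    value is the smallest of the three)."""
    denom = sad_left - 2.0 * sad_center + sad_right
    if denom == 0:
        return 0.0  # flat neighborhood: no refinement possible
    return 0.5 * (sad_left - sad_right) / denom
```

The fractional offset is added to the integer displacement found by the exhaustive search, giving a sub-pixel motion vector at negligible extra cost.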
In this embodiment, each speckle pattern in the input speckle pattern sequence can be regarded as obtained from the standard speckle pattern through operations such as scaling and translation. By finding the motion vector (i.e., displacement) of each corresponding speckle image block and combining it with the lookup table, the depth information of that speckle image block can be obtained.
This block-matching motion estimation method differs from the traditional block matching algorithm. In a traditional motion estimation algorithm, the step size of the match block equals the block size; here the step size of the image blocks extracted from the input speckle pattern can also be smaller than the block size. The motion vector obtained by block matching then represents only the motion vectors of the pixels in the central step × step region of the moving block (the shaded region in Fig. 2), which trades off the accuracy of the motion vectors against the mismatching of small moving objects.
Although the above embodiments are completed in specific device systems, they do not limit the invention; those skilled in the art will readily apply similar methods in similar pattern-projection or other image depth calculation systems. Modifications and refinements that do not depart from the spirit and scope of the invention shall all be covered by the above claims.

Claims (9)

1. An image depth calculation method, characterized by comprising the following steps:
1) collecting a standard speckle pattern with known depth information as a reference;
2) collecting an input speckle pattern sequence of a target object through an image sensor;
3) performing block-based motion estimation between each input speckle pattern in the input speckle pattern sequence and the standard speckle pattern to generate motion vectors of the image blocks in the input speckle pattern;
4) obtaining, from the motion vector of each image block obtained by the block-based motion estimation in step 3) and the known depth information of the standard speckle pattern, in combination with laser triangulation, the depth information corresponding to that image block;
5) combining the depth information of all image blocks in the input speckle pattern to obtain a depth map of the target object.
2. the method for claim 1, wherein described input speckle pattern and described standard speckle pattern are by the laser beam of fixed pattern being projected to respectively target object and the known body surface of depth information and obtaining.
3. the method for claim 1, for described step 4), it is characterized in that: according to the motion vector of each image block, when known image sensor focal distance and sensor pixel point point distance parameter, this motion vector is utilized to ask for the relative changing value of the degree of depth in conjunction with laser triangulation method, this relative changing value adds that the known depth information of standard speckle pattern can obtain depth information corresponding to this image block, wherein: when this relative changing value be on the occasion of time, the depth information that this image block is corresponding is greater than the known depth information of standard speckle pattern; When this relative changing value is negative value, the depth information that this image block is corresponding is less than the known depth information of standard speckle pattern; When this relative changing value is zero, the depth information that this image block is corresponding equals the known depth information of standard speckle pattern.
4. the method for claim 1, is characterized in that: step 3) and step 4) between also have a step setting up the look-up table of corresponding relation between motion vector and depth information, the depth information of described image block is asked for by loop up table.
5. The method of claim 4, characterized in that the lookup table is built from the different displacements of the standard speckle pattern and their corresponding depth information.
6. The method of any one of claims 4 to 5, characterized in that the lookup table is built as follows: performing block-based motion estimation pairwise between multiple standard speckle patterns with different depth information to obtain the displacements between the standard speckle patterns corresponding to the different depth information; obtaining by curve fitting the curve equation relating the horizontal displacement Δx or vertical displacement Δy to its corresponding depth information d; and building, according to this curve equation, a lookup table mapping any horizontal displacement Δx or any vertical displacement Δy to the corresponding depth information d.
7. The method of any one of claims 1 to 5, wherein the block-based motion estimation comprises: extracting from the input speckle pattern an image block block_m×n of size m × n; in the standard speckle pattern, within a search window search_block_M×N of size M × N centered at the position corresponding to the image block block_m×n, finding the optimal match block of this image block by a search strategy and a similarity measure, where M, N, n, m are integers, M > m, and N > n; thereby obtaining the displacement (Δx, Δy) between this image block and its match block, i.e., the motion vector of this image block.
8. The method of claim 7, wherein the search strategy is: first moving the block in the horizontal direction and then incrementing the row in the vertical direction, searching the match blocks one by one.
9. The method of claim 7, characterized in that the step size of the image blocks extracted from the input speckle pattern is smaller than the size of their corresponding match blocks.
CN201210490257.0A 2012-11-27 2012-11-27 Image depth calculating method Active CN102999910B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210490257.0A CN102999910B (en) 2012-11-27 2012-11-27 Image depth calculating method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210490257.0A CN102999910B (en) 2012-11-27 2012-11-27 Image depth calculating method

Publications (2)

Publication Number Publication Date
CN102999910A CN102999910A (en) 2013-03-27
CN102999910B true CN102999910B (en) 2015-07-22

Family

ID=47928444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210490257.0A Active CN102999910B (en) 2012-11-27 2012-11-27 Image depth calculating method

Country Status (1)

Country Link
CN (1) CN102999910B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9404742B2 (en) * 2013-12-10 2016-08-02 GM Global Technology Operations LLC Distance determination system for a vehicle using holographic techniques
CN103824318B (en) * 2014-02-13 2016-11-23 西安交通大学 A kind of depth perception method of multi-cam array
CN103810708B (en) * 2014-02-13 2016-11-02 西安交通大学 A kind of laser speckle image depth perception method and device
CN103839258A (en) * 2014-02-13 2014-06-04 西安交通大学 Depth perception method of binarized laser speckle images
CN105306922B (en) * 2014-07-14 2017-09-29 联想(北京)有限公司 Acquisition methods and device of a kind of depth camera with reference to figure
CN104537657A (en) * 2014-12-23 2015-04-22 西安交通大学 Laser speckle image depth perception method implemented through parallel search GPU acceleration
CN104952074B (en) * 2015-06-16 2017-09-12 宁波盈芯信息科技有限公司 Storage controlling method and device that a kind of depth perception is calculated
US11057608B2 (en) * 2016-01-04 2021-07-06 Qualcomm Incorporated Depth map generation in structured light system
CN105844623A (en) * 2016-03-21 2016-08-10 西安电子科技大学 Target object depth information obtaining method based on De sequence hybrid coding
CN106254738A (en) * 2016-08-24 2016-12-21 深圳奥比中光科技有限公司 Dual image acquisition system and image-pickup method
CN106331453A (en) * 2016-08-24 2017-01-11 深圳奥比中光科技有限公司 Multi-image acquisition system and image acquisition method
CN109870126A (en) * 2017-12-05 2019-06-11 宁波盈芯信息科技有限公司 A kind of area computation method and a kind of mobile phone for being able to carry out areal calculation
CN109903328B (en) * 2017-12-11 2021-12-21 宁波盈芯信息科技有限公司 Object volume measuring device and method applied to smart phone
CN108955641B (en) * 2018-04-23 2020-11-17 维沃移动通信有限公司 Depth camera shooting method, depth camera shooting equipment and mobile terminal
CN109615652B (en) * 2018-10-23 2020-10-27 西安交通大学 Depth information acquisition method and device
CN112926367B (en) * 2019-12-06 2024-06-21 杭州海康威视数字技术股份有限公司 Living body detection equipment and method
CN113720275A (en) * 2021-08-11 2021-11-30 江西联创电子有限公司 Three-dimensional morphology measuring method and system and method for establishing depth information calibration table
CN114972262A (en) * 2022-05-26 2022-08-30 昆山丘钛微电子科技股份有限公司 Depth image calculation method, device, equipment and medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101501442A (en) * 2006-03-14 2009-08-05 普莱姆传感有限公司 Depth-varying light fields for three dimensional sensing
CN102710951A (en) * 2012-05-09 2012-10-03 天津大学 Multi-view-point computing and imaging method based on speckle-structure optical depth camera

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101501442A (en) * 2006-03-14 2009-08-05 普莱姆传感有限公司 Depth-varying light fields for three dimensional sensing
CN102710951A (en) * 2012-05-09 2012-10-03 天津大学 Multi-view-point computing and imaging method based on speckle-structure optical depth camera

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A New Block-Matching Based Approach for Automatic 2D to 3D Conversion;Jian Chen et al;《2012 4th International Conference on Computer Engineering and Technology》;20120513;全文 *
Research on Kinect-based 3D human body scanning, reconstruction and measurement technology; Song Shichao et al.; Journal of Tianjin Polytechnic University; 20121031; vol. 31, no. 5; full text *
Method for improving depth resolution in laser sheet-of-light three-dimensional sensing; Zou Xiaoping et al.; Laser Technology; 20040430; vol. 28, no. 2; full text *

Also Published As

Publication number Publication date
CN102999910A (en) 2013-03-27

Similar Documents

Publication Publication Date Title
CN102999910B (en) Image depth calculating method
CN102970548B (en) Image depth sensing device
CN109993793B (en) Visual positioning method and device
CN106780618B (en) Three-dimensional information acquisition method and device based on heterogeneous depth camera
CN103824318B (en) A kind of depth perception method of multi-cam array
CN104317391B (en) A kind of three-dimensional palm gesture recognition exchange method and system based on stereoscopic vision
CN112894832A (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN103839258A (en) Depth perception method of binarized laser speckle images
CN103020988B (en) Method for generating motion vector of laser speckle image
CN104463108A (en) Monocular real-time target recognition and pose measurement method
Luo et al. A simple calibration procedure for structured light system
CN104034269B (en) A kind of monocular vision measuring method and device
CN103810708A (en) Method and device for perceiving depth of laser speckle image
CN107358633A (en) Join scaling method inside and outside a kind of polyphaser based on 3 points of demarcation things
CN103900583A (en) Device and method used for real-time positioning and map building
CN104776815A (en) Color three-dimensional profile measuring device and method based on Dammann grating
CN102519434A (en) Test verification method for measuring precision of stereoscopic vision three-dimensional recovery data
CN115880555B (en) Target detection method, model training method, device, equipment and medium
CN103841406B (en) A kind of depth camera device of plug and play
CN111429571B (en) Rapid stereo matching method based on spatio-temporal image information joint correlation
CN104537657A (en) Laser speckle image depth perception method implemented through parallel search GPU acceleration
CN112184914A (en) Method and device for determining three-dimensional position of target object and road side equipment
Pal et al. 3D point cloud generation from 2D depth camera images using successive triangulation
Chang et al. YOLOv4‐tiny‐based robust RGB‐D SLAM approach with point and surface feature fusion in complex indoor environments
CN112965052A (en) Monocular camera target ranging method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
ASS Succession or assignment of patent right

Owner name: NINGBO YINGXIN INFORMATION SCIENCE + TECHNOLOGY CO

Free format text: FORMER OWNER: XI AN JIAOTONG UNIV.

Effective date: 20150121

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 710049 XI AN, SHAANXI PROVINCE TO: 315199

TA01 Transfer of patent application right

Effective date of registration: 20150121

Address after: Room 298, No. 412, Bachelor Road, Yinzhou District, Ningbo, Zhejiang 315199

Applicant after: NINGBO YINGXIN INFORMATION SCIENCE & TECHNOLOGY CO., LTD.

Address before: No. 28, Xianning West Road, Beilin District, Xi'an, Shaanxi 710049

Applicant before: Xi'an Jiaotong University

C14 Grant of patent or utility model
GR01 Patent grant