CN106683133A - Method for acquiring target depth image - Google Patents
- Publication number
- CN106683133A CN106683133A CN201611128911.8A CN201611128911A CN106683133A CN 106683133 A CN106683133 A CN 106683133A CN 201611128911 A CN201611128911 A CN 201611128911A CN 106683133 A CN106683133 A CN 106683133A
- Authority
- CN
- China
- Prior art keywords
- image
- target
- depth
- speckle
- collection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Landscapes
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses a method for acquiring a depth image of a target. The method comprises adopting at least two working modes and simultaneously capturing image information of the target; processing the captured image information separately so as to obtain at least two depth images; and comparing the detail information of the at least two depth images and outputting the depth image with fewer detail defects as the depth image of the target. The first working mode uses two RGB cameras; the second working mode uses an IR camera and a laser projector. The method improves the accuracy and convenience of acquiring the target depth image.
Description
Technical field
The present invention relates to the field of 3D technology, and more particularly to a method for acquiring a depth image of a target.
Background technology
With the continuing development of 3D technology, target depth images are becoming easier and easier to acquire, and image analysis is gradually shifting from conventional 2D images to depth images. To ensure the accuracy of such analysis, the quality requirements for the acquired target depth image keep rising.
Targets may be located in very different scenes — a target indoors and a target outdoors, for example, differ greatly — yet most current 3D sensors are not suited to multi-scene applications, so the acquired depth image contains scene-dependent errors.
Summary of the invention
The technical problem solved by the present invention is to provide a method for acquiring a depth image of a target, comprising: adopting at least two working modes and simultaneously capturing image information of the target; processing the captured image information separately so as to obtain at least two depth images; and comparing the detail information of the at least two depth images and outputting the depth image with fewer detail defects as the depth image of the target.
Wherein, comparing the detail information of the at least two depth images and outputting the depth image with fewer detail defects as the depth image of the target comprises: comparing the brightness of the regions in the at least two depth images that correspond to the same area of the target, and outputting, as the depth image of the target, the depth image with higher brightness contrast between adjacent regions and fewer periodically repeated textures.
When the working mode is a first working mode, capturing the image information of the target and processing the captured image information separately so as to obtain at least two depth images comprises: capturing images of the target simultaneously with two RGB cameras to obtain a corresponding first captured image and second captured image of the target; and processing the first captured image and the second captured image so as to obtain a first depth image.
Wherein, before capturing images of the target simultaneously with the two RGB cameras to obtain the corresponding first and second captured images, the two RGB cameras are calibrated so that pixels corresponding to the same position of the target in the first captured image and the second captured image have identical vertical coordinates.
Wherein, processing the first captured image and the second captured image so as to obtain the first depth image comprises: calculating the horizontal offset between the pixels in the first captured image and the second captured image that correspond to the same position of the target; obtaining the depth information of each pixel from the offset using a calculation formula; and obtaining the first depth image from the depth information of all pixels of the first captured image or the second captured image.
Wherein, the calculation formula is Z = f·t/δx, where Z is the depth information of the pixel; f is the focal length of the cameras, the focal lengths of the two RGB cameras being adjusted to be identical during pre-calibration; t is the distance between the centers of the two RGB cameras; and δx is the horizontal offset of the pixels at the same position.
When the working mode is a second working mode, capturing the image information of the target and processing the captured image information separately so as to obtain at least two depth images comprises: projecting a speckle pattern onto the target with a laser projector and capturing a speckle image with an IR camera; and processing the speckle image together with pre-captured reference speckle images so as to obtain a second depth image.
Wherein, before the laser projector projects the speckle pattern onto the target and the IR camera captures the speckle image: a reference plane is selected at every fixed distance interval; the laser projector projects the speckle pattern onto each reference plane; and the IR camera captures the reference speckle patterns on all reference planes.
Wherein, before the laser projector projects the speckle pattern onto the target and the IR camera captures the speckle image, the positions of the laser projector and the IR camera are also adjusted so that they are kept at a preset distance from each other and on the same horizontal plane.
Wherein, processing the speckle image together with the pre-captured reference speckle images so as to obtain the second depth image comprises: setting a default infrared speckle block according to the speckle image, the infrared speckle block being able to traverse the whole speckle image; searching, according to the infrared speckle block and the reference speckle patterns, for the reference plane nearest to the infrared speckle block corresponding to each pixel, and calculating the offset between the infrared speckle block corresponding to each pixel and the nearest reference plane; calculating the depth information of each pixel from the offset and the depth value of the nearest reference plane; and obtaining the second depth image from the depth information of all pixels.
The beneficial effects of the invention are as follows: differing from the prior art, the present invention adopts at least two working modes, captures the image information of a target simultaneously, processes the captured image information separately to obtain at least two depth images, and outputs the depth image with fewer detail defects among them as the final depth image of the target. The method does not need to distinguish the environment the target is in, which improves the accuracy and convenience of acquiring the target depth image.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an embodiment of the method for acquiring a target depth image according to the present invention;
Fig. 2 is a schematic flowchart of the first working mode;
Fig. 3 is a schematic flowchart of the second working mode;
Fig. 4 is a structural diagram of a device for acquiring a target depth image.
Specific embodiment
Refer to Fig. 1, which is a schematic flowchart of an embodiment of the method for acquiring a target depth image according to the present invention, comprising:
S101: adopting at least two working modes, simultaneously capturing the image information of a target, and processing the captured image information separately so as to obtain at least two depth images;
S102: comparing the detail information of the at least two depth images and outputting the depth image with fewer detail defects as the depth image of the target.
Specifically, the brightness of the regions in the at least two depth images that correspond to the same area of the target is compared, and the depth image with higher brightness contrast between adjacent regions and fewer periodically repeated textures is output as the depth image of the target.
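The selection rule above — prefer higher contrast between adjacent regions and fewer periodically repeated textures — can be sketched as a scoring function. The patent does not specify the metric, so both proxies here (mean absolute gradient for contrast, the autocorrelation peak for periodicity) are assumptions for illustration:

```python
import numpy as np

def detail_score(depth):
    """Hypothetical detail metric: reward strong local contrast,
    penalize periodically repeated texture."""
    d = np.asarray(depth, dtype=float)
    # contrast proxy: mean absolute difference between adjacent pixels
    contrast = np.mean(np.abs(np.diff(d, axis=0))) + np.mean(np.abs(np.diff(d, axis=1)))
    # periodicity proxy: largest non-trivial autocorrelation of the column profile
    row = d.mean(axis=0) - d.mean()
    ac = np.correlate(row, row, mode="full")
    mid = len(ac) // 2
    ac = ac / (ac[mid] + 1e-9)  # normalize by the zero-lag value
    periodicity = np.max(np.abs(ac[mid + 2:])) if len(ac) > mid + 2 else 0.0
    return contrast - periodicity

def select_depth_image(candidates):
    """Return the candidate depth map judged to have fewer detail defects."""
    return max(candidates, key=detail_score)
```

A depth map with sharp region boundaries then scores above a flat, featureless one, matching the selection described in step S102.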
Step S101 is further described in detail below.
In this embodiment, two working modes capture images of the target simultaneously: a first working mode and a second working mode. In other embodiments, additional working modes may be added; the invention does not limit this.
Refer to Fig. 2. When the working mode is the first working mode, step S101 is specifically:
S201: calibrating the two RGB cameras;
Specifically, the calibration covers position and angle, so that pixels corresponding to the same position of the target in the first captured image and second captured image captured by the two RGB cameras have identical vertical coordinates;
S202: capturing images of the target simultaneously with the two RGB cameras, obtaining a corresponding first captured image and second captured image of the target;
S203: processing the first captured image and the second captured image so as to obtain the corresponding depth image of the target.
Specifically, the horizontal offset between the pixels in the first captured image and the second captured image that correspond to the same position of the target is calculated; the depth information of each pixel is obtained from the offset using a calculation formula; and the corresponding depth image of the target is obtained from the depth information of all pixels of the first captured image or the second captured image.
Wherein, the above calculation formula is: Z = f·t/δx, where Z is the depth information of the pixel; f is the focal length of the cameras, the focal lengths of the two RGB cameras being adjusted to be identical during pre-calibration; t is the distance between the centers of the two cameras; and δx is the horizontal offset of the pixels at the same position.
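Applied per pixel, the formula Z = f·t/δx becomes a vectorized disparity-to-depth conversion. A minimal sketch, assuming the focal length is expressed in pixels and the baseline in millimeters (units the patent leaves open), with zero disparity treated as "no match":

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Z = f * t / dx for every pixel; pixels with zero disparity
    (no stereo match) are marked invalid with depth 0."""
    disparity = np.asarray(disparity_px, dtype=float)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_mm / disparity[valid]
    return depth
```

For example, with f = 500 px and a 60 mm baseline, a 5-pixel offset yields a depth of 6000 mm.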
In other embodiments, the obtained depth image may further be interpolation-matched with the first or second captured image from the RGB cameras so as to obtain a corresponding three-dimensional color image of the target.
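One way to combine a depth map with an aligned RGB image into three-dimensional color data is to back-project each valid pixel through a pinhole model and attach its color. The patent does not detail the matching, so this sketch assumes the two images are already registered and uses hypothetical intrinsics fx, fy, cx, cy:

```python
import numpy as np

def colored_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project an aligned depth map into 3D points and attach the
    RGB color of each pixel (pinhole model; depth 0 means invalid)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.astype(float)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    valid = z > 0
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)  # (N, 3)
    colors = rgb[valid]                                        # (N, 3)
    return points, colors
```

Invalid pixels drop out, so the result is an N×3 point array paired with an N×3 color array.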
Refer to Fig. 3. When the working mode is the second working mode, step S101 is specifically:
S301: calibrating the laser projector and the IR camera;
Specifically, the positions of the laser projector and the IR camera are adjusted so that they are kept at a preset distance from each other and on the same horizontal plane. In this embodiment, the role of the laser projector is to project infrared light onto the target so that the IR camera can capture the infrared information of the target. In other embodiments, other infrared light sources may be used; the invention does not limit this.
S302: projecting a speckle pattern onto the target with the laser projector and capturing a speckle image with the IR camera;
S303: processing the speckle image together with the pre-captured reference speckle images so as to obtain the corresponding depth image of the target.
Specifically, a default infrared speckle block is set according to the speckle image, the block being able to traverse the whole speckle image. According to the infrared speckle block and the reference speckle patterns, the reference plane nearest to the infrared speckle block corresponding to each pixel is searched for, and the offset between the infrared speckle block corresponding to each pixel and the nearest reference plane is calculated. The depth information of each pixel is then calculated from the offset and the depth value of the nearest reference plane, and the corresponding depth image of the target is obtained from the depth information of all pixels.
Wherein, the reference speckle images are prepared as follows: a reference plane is selected at every fixed distance interval; the laser projector projects the speckle pattern onto each reference plane; and the IR camera captures the reference speckle patterns on all reference planes. The reference plane can be moved axially relative to the IR camera.
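The matching of a speckle block against the pre-captured reference planes can be sketched as block comparison at each pixel. This is a coarse illustration only: it picks the best-matching reference plane by sum of absolute differences and assigns that plane's depth directly, omitting the offset-based refinement between planes that the method describes:

```python
import numpy as np

def speckle_depth(speckle, ref_images, ref_depths, block=5):
    """For each block-sized neighborhood, find the reference plane whose
    speckle pattern matches best (minimum sum of absolute differences)
    and assign that plane's known depth. Border pixels stay 0."""
    h, w = speckle.shape
    half = block // 2
    depth = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = speckle[y - half:y + half + 1, x - half:x + half + 1]
            sads = [np.abs(patch - r[y - half:y + half + 1, x - half:x + half + 1]).sum()
                    for r in ref_images]
            depth[y, x] = ref_depths[int(np.argmin(sads))]
    return depth
```

If the captured speckle locally equals one reference pattern, the pixel is assigned that plane's depth; a production version would interpolate depth from the measured block offset, as described above.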
Refer to Fig. 4, which is a structural diagram of an embodiment of a device for acquiring a target depth image, comprising an image acquisition module 401, a laser projector 402, a processor 403, a calibration module 404 and a memory 405.
Specifically, the image acquisition module 401 is used to capture the color information and infrared information of the target. In this embodiment the image acquisition module 401 includes at least two RGB cameras and one IR camera; in other embodiments the number and type of cameras may differ, and the invention is not limited in this regard.
The laser projector 402 is used to project infrared light onto the target so that the image acquisition module 401 can capture the infrared information of the target. In other embodiments, other types of infrared light source may be used.
The processor 403 is used to control the working modes of the image acquisition module 401 and the laser projector 402, and to process the color information and infrared information captured by the image acquisition module 401 so as to obtain the depth image of the target.
The calibration module 404 is used to calibrate the distance between the laser projector 402 and the image acquisition module 401 in advance, before images are captured.
The memory 405 is used to store the reference speckle images of the laser projector 402, the color information and infrared information of the target captured by the image acquisition module 401, and the target depth image produced by the processor 403.
The workflow of the processor 403 is explained in detail below. The processor 403 includes a control module 4031, a computation module 4032, an acquisition module 4033 and a judgment module 4034. The control module 4031 controls the image acquisition so that the two working modes capture the target simultaneously: the IR camera together with the laser projector 402 captures the infrared information of the target, while the two RGB cameras capture the color information of the target. After the image acquisition module 401 has captured the images, the image information is transmitted to the computation module 4032, which computes the depth information of all pixels of the images obtained under the two working modes and transfers this depth information to the acquisition module 4033. The acquisition module 4033 obtains the two corresponding depth images of the target from the depth information of all pixels. The judgment module 4034 determines which of the two depth images has fewer detail defects and outputs it as the depth image of the target.
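The module workflow just described — capture under both modes, compute a depth map from each, keep the better one — can be sketched as a small pipeline. The mode functions and the judging metric are injected as callables, since the patent leaves their internals open; all names here are illustrative:

```python
class DepthPipeline:
    """Sketch of the processor workflow of Fig. 4: run every capture
    mode, obtain a depth map from each, and keep the map judged to
    have fewer detail defects (higher score)."""

    def __init__(self, modes, judge):
        self.modes = modes  # callables, each returning a depth map (S101)
        self.judge = judge  # callable scoring a depth map's detail (S102)

    def acquire(self):
        depth_maps = [mode() for mode in self.modes]  # capture + compute
        return max(depth_maps, key=self.judge)        # select the best map
```

For instance, one mode callable could wrap the stereo path and another the speckle path, with the judge being a contrast-based detail score.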
In summary, and differing from the prior art, the present invention adopts at least two working modes, captures the image information of a target simultaneously, processes the captured image information separately to obtain at least two depth images, and outputs the depth image with fewer detail defects among them as the final depth image of the target. The method does not need to distinguish the environment the target is in, which improves the accuracy and convenience of acquiring the target depth image.
The foregoing are only embodiments of the present invention and do not thereby limit the scope of its claims. Any equivalent structure or equivalent process transformation made using the contents of this description and the accompanying drawings, whether used directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present invention.
Claims (10)
1. A method for acquiring a depth image of a target, characterized by comprising:
adopting at least two working modes, simultaneously capturing image information of the target, and processing the captured image information separately so as to obtain at least two depth images;
comparing detail information of the at least two depth images, and outputting the depth image with fewer detail defects as the depth image of the target.
2. The method according to claim 1, characterized in that comparing the detail information of the at least two depth images and outputting the depth image with fewer detail defects as the depth image of the target comprises:
comparing the brightness of the regions in the at least two depth images that correspond to the same area of the target, and outputting, as the depth image of the target, the depth image with higher brightness contrast between adjacent regions and fewer periodically repeated textures.
3. The method according to claim 1, characterized in that, when the working mode is a first working mode, capturing the image information of the target and processing the captured image information separately so as to obtain at least two depth images comprises:
capturing images of the target simultaneously with two RGB cameras to obtain a corresponding first captured image and second captured image of the target;
processing the first captured image and the second captured image so as to obtain a first depth image.
4. The method according to claim 3, characterized in that, before capturing images of the target simultaneously with the two RGB cameras to obtain the corresponding first captured image and second captured image, the method comprises:
calibrating the two RGB cameras so that pixels corresponding to the same position of the target in the first captured image and the second captured image have identical vertical coordinates.
5. The method according to claim 3, characterized in that processing the first captured image and the second captured image so as to obtain the first depth image comprises:
calculating the horizontal offset between the pixels in the first captured image and the second captured image that correspond to the same position of the target;
obtaining the depth information of each pixel from the offset using a calculation formula;
obtaining the first depth image from the depth information of all pixels of the first captured image or the second captured image.
6. The method according to claim 5, characterized in that the calculation formula is:
Z = f·t/δx,
wherein Z is the depth information of the pixel; f is the focal length of the cameras, the focal lengths of the two RGB cameras being adjusted to be identical during pre-calibration; t is the distance between the centers of the two RGB cameras; and δx is the horizontal offset of the pixels at the same position.
7. The method according to claim 1, characterized in that, when the working mode is a second working mode, capturing the image information of the target and processing the captured image information separately so as to obtain at least two depth images comprises:
projecting a speckle pattern onto the target with a laser projector and capturing a speckle image with an IR camera;
processing the speckle image together with pre-captured reference speckle images so as to obtain a second depth image.
8. The method according to claim 7, characterized in that, before the laser projector projects the speckle pattern onto the target and the IR camera captures the speckle image, the method comprises:
selecting a reference plane at every fixed distance interval; projecting the speckle pattern onto each reference plane with the laser projector; and capturing the reference speckle patterns on all reference planes with the IR camera.
9. The method according to claim 7, characterized in that, before the laser projector projects the speckle pattern onto the target and the IR camera captures the speckle image, the method further comprises:
adjusting the positions of the laser projector and the IR camera so that they are kept at a preset distance from each other and on the same horizontal plane.
10. The method according to claim 7, characterized in that processing the speckle image together with the pre-captured reference speckle images so as to obtain the second depth image comprises:
setting a default infrared speckle block according to the speckle image, the infrared speckle block being able to traverse the whole speckle image;
searching, according to the infrared speckle block and the reference speckle patterns, for the reference plane nearest to the infrared speckle block corresponding to each pixel, and calculating the offset between the infrared speckle block corresponding to each pixel and the nearest reference plane;
calculating the depth information of each pixel from the offset and the depth value of the nearest reference plane;
obtaining the second depth image from the depth information of all pixels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611128911.8A CN106683133B (en) | 2016-12-09 | 2016-12-09 | Method for obtaining target depth image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106683133A true CN106683133A (en) | 2017-05-17 |
CN106683133B CN106683133B (en) | 2020-04-17 |
Family
ID=58867893
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611128911.8A Active CN106683133B (en) | 2016-12-09 | 2016-12-09 | Method for obtaining target depth image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106683133B (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009063472A1 (en) * | 2007-11-15 | 2009-05-22 | Microsoft International Holdings B.V. | Dual mode depth imaging |
CN105279736A (en) * | 2014-07-21 | 2016-01-27 | 由田新技股份有限公司 | Method and system for generating depth image |
CN104918035A (en) * | 2015-05-29 | 2015-09-16 | 深圳奥比中光科技有限公司 | Method and system for obtaining three-dimensional image of target |
CN104918034A (en) * | 2015-05-29 | 2015-09-16 | 深圳奥比中光科技有限公司 | 3D image capturing device, capturing method and 3D image system |
CN105160663A (en) * | 2015-08-24 | 2015-12-16 | 深圳奥比中光科技有限公司 | Method and system for acquiring depth image |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109470158A (en) * | 2017-09-08 | 2019-03-15 | 株式会社东芝 | Image processor and range unit |
CN109191514A (en) * | 2018-10-23 | 2019-01-11 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating depth detection model |
CN109191514B (en) * | 2018-10-23 | 2020-11-24 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating a depth detection model |
WO2020087485A1 (en) * | 2018-11-02 | 2020-05-07 | Oppo广东移动通信有限公司 | Method for acquiring depth image, device for acquiring depth image, and electronic device |
US11494925B2 (en) | 2018-11-02 | 2022-11-08 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Method for depth image acquisition, electronic device, and storage medium |
CN109831660A (en) * | 2019-02-18 | 2019-05-31 | Oppo广东移动通信有限公司 | Depth image acquisition method, depth image obtaining module and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN106683133B (en) | 2020-04-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI729995B (en) | Generating a merged, fused three-dimensional point cloud based on captured images of a scene | |
CN105447853B (en) | Flight instruments, flight control system and method | |
WO2017080108A1 (en) | Flying device, flying control system and method | |
US9772405B2 (en) | Backfilling clouds of 3D coordinates | |
CN106780589A (en) | A kind of method for obtaining target depth image | |
US20200195908A1 (en) | Apparatus and method for focal length adjustment and depth map determination | |
EP3373251A1 (en) | Scan colorization with an uncalibrated camera | |
CN108234984A (en) | Binocular depth camera system and depth image generation method | |
CN106683133A (en) | Method for acquiring target depth image | |
US20200265601A1 (en) | Information processing apparatus and information processing method | |
CN109035309A (en) | Pose method for registering between binocular camera and laser radar based on stereoscopic vision | |
CN105282421B (en) | A kind of mist elimination image acquisition methods, device and terminal | |
WO2019100219A1 (en) | Output image generation method, device and unmanned aerial vehicle | |
US9332247B2 (en) | Image processing device, non-transitory computer readable recording medium, and image processing method | |
CN109889799B (en) | Monocular structure light depth perception method and device based on RGBIR camera | |
CN108510540A (en) | Stereoscopic vision video camera and its height acquisition methods | |
CN107680039B (en) | Point cloud splicing method and system based on white light scanner | |
US11403745B2 (en) | Method, apparatus and measurement device for measuring distortion parameters of a display device, and computer-readable medium | |
US20130314533A1 (en) | Data deriving apparatus | |
CN106323190A (en) | Depth measurement range-customizable depth measurement method and system for obtaining depth image | |
CN102890821A (en) | Method and system for calibrating infrared camera | |
CN111654626B (en) | High-resolution camera containing depth information | |
CN206470834U (en) | A kind of device for obtaining target depth image | |
CN104463964A (en) | Method and equipment for acquiring three-dimensional model of object | |
CN110378964A (en) | Join scaling method and device, storage medium outside a kind of video camera |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | ||
Address after: 518057 Guangdong city of Shenzhen province Nanshan District Hing Road three No. 8 China University of Geosciences research base in building A808
Patentee after: Obi Zhongguang Technology Group Co., Ltd
Address before: 518057 Guangdong city of Shenzhen province Nanshan District Hing Road three No. 8 China University of Geosciences research base in building A808
Patentee before: SHENZHEN ORBBEC Co.,Ltd.