CN106504284B - A depth map acquisition method based on the combination of stereo matching and structured light - Google Patents


Info

Publication number: CN106504284B
Application number: CN201610927415.2A
Authority: CN (China)
Other versions: CN106504284A (Chinese)
Prior art keywords: image, information, light, depth, structured light
Inventors: 周剑, 袁寒, 余勤力, 唐荣富
Assignee (original and current): Chengdu Tongjia Youbo Technology Co Ltd
Legal status: Active (granted)
Events: application filed by Chengdu Tongjia Youbo Technology Co Ltd; priority to CN201610927415.2A; publication of application CN106504284A; application granted and published as CN106504284B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10004 — Still image; Photographic image
    • G06T 2207/10012 — Stereo images

Landscapes

  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention relates to the field of stereoscopic vision and discloses a depth map acquisition method based on the combination of stereo matching and structured light, which solves the stereo matching problem in weakly textured image regions and under insufficient image exposure. In the present invention, when the external ambient light is sufficient and the photographed scene contains no weakly textured regions, the depth information of the three-dimensional scene is obtained directly from images shot under natural light. When the acquired texture information is sparse, or the external ambient light during acquisition is weak, coded structured light is actively projected to increase the ambient light intensity and the texture information of the object. Since the structured light is modulated by the scene while enhancing the scene texture, the scene depth information can also be obtained directly with the structured-light measurement method. The present invention is suitable for accurately acquiring object depth information.

Description

A depth map acquisition method based on the combination of stereo matching and structured light
Technical field
The present invention relates to the field of stereoscopic vision, and in particular to a depth map acquisition method based on the combination of stereo matching and structured light.
Background technique
Stereoscopic vision is a critical problem in computer vision, and the depth information it obtains is widely used in fields such as industrial product design, artistic sculpture, architecture, robot vision, obstacle avoidance for unmanned aerial vehicles, plastic and cosmetic medicine, aerial surveying and mapping, and military applications. Stereo matching is an important method for obtaining depth information: it computes the deviation of each spatial point between two images to obtain a disparity map, and then derives the depth information of the object from that disparity map. Because the disparity of each object point is obtained from matched point pairs found in the left and right image planes, image matching becomes the key to obtaining the final depth information of the three-dimensional scene. Common problems in stereo matching include occluded regions, disparity discontinuities, highly textured regions, weakly textured regions, and repeated-texture regions. Weakly textured regions often lead to mismatches because their pixels have low discriminability. Among existing algorithms, region-matching methods enlarge the window to enhance the distinguishability of weakly textured regions; global-constraint methods improve the matching rate of weak textures through the smoothness term of an energy function; and color-based segmentation algorithms partition depth regions in preprocessing using the color relationships of the image. All of these algorithms achieve good results on small-area weak-texture matching.
However, when the weakly textured regions in an image are large, or when the image exposure is insufficient, the performance of existing algorithms degrades. Matching large weakly textured regions with a local algorithm that enlarges the window easily over-smooths disparity discontinuities, producing a "foreground fattening" effect; and both global algorithms based on energy-function optimization and algorithms based on color segmentation lose matching precision in large weakly textured regions. Meanwhile, in environments with insufficient image exposure (such as dark surroundings), the acquired image feature information cannot satisfy the matching requirements, and existing matching algorithms cannot obtain object depth information.
Summary of the invention
The technical problem to be solved by the present invention is to propose a depth map acquisition method based on the combination of stereo matching and structured light, solving the stereo matching problem in weakly textured image regions and under insufficient image exposure.
The scheme adopted by the present invention to solve the above technical problem is as follows:
A depth map acquisition method based on the combination of stereo matching and structured light, comprising the following steps:
A. encoding the structured light;
B. calibrating the binocular camera;
C. judging, according to the external ambient light intensity and the texture information of the three-dimensional scene, whether structured light needs to be actively projected;
D. if active projection is not needed, shooting left and right images with the calibrated binocular camera, directly recording the image information under natural light, obtaining a disparity map, and then obtaining the depth map of the three-dimensional scene from the mapping relationship between disparity and depth;
E. if active projection is needed, projecting the coded structured light with a projector and shooting left and right images with the calibrated binocular camera; if the structured light is used to enhance the external ambient light intensity, recording the image information under structured light, choosing either the stereo matching method or the structured-light measurement method to obtain a disparity map or phase map, and proceeding to step F;
if the structured light is used to enhance the texture information of the three-dimensional scene, recording the image information under structured light and under natural light respectively, choosing either the stereo matching method or the structured-light measurement method to restore the depth information of the weakly textured regions, thereby obtaining the disparity map or phase map of the three-dimensional scene, and proceeding to step F;
F. obtaining the depth map of the three-dimensional scene from the mapping relationship between disparity and depth, or from the mapping relationship between phase and depth.
As a further optimization, in step A, when the depth information of the three-dimensional scene is obtained by the stereo matching method, a coding scheme with distinctive feature information is selected to encode the structured light, including De Bruijn coding, coding based on graphical information, or M-array coding based on geometric features; when the depth information is obtained by the structured-light measurement method, a coding scheme whose pattern is modulated by the scene with a known regularity is selected, including sinusoidal fringe patterns and Gray-code fringe patterns.
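As an illustration of the spatially coded patterns named above, the sketch below (a hypothetical helper, not the patent's implementation) generates a De Bruijn sequence B(k, n), in which every length-n window over a k-symbol alphabet appears exactly once; stripes colored by such a sequence give every local window of the projected pattern a unique signature that a stereo matcher can lock onto.

```python
def de_bruijn(k: int, n: int) -> list:
    """Generate a De Bruijn sequence B(k, n) of length k**n: every
    length-n word over a k-symbol alphabet appears exactly once when
    the sequence is read cyclically (standard Lyndon-word recursion)."""
    a = [0] * (k * n)
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence


# Example: B(2, 3) has length 2**3 = 8 and contains all 8 binary triples.
stripes = de_bruijn(2, 3)
```

In a real projector pattern, k would be the number of stripe colors and n the window width that must be unique; the values here are purely illustrative.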
As a further optimization, in step B, the binocular camera is calibrated with a chessboard pattern to obtain the internal and external parameters of the two cameras. The internal parameters include the principal point, the focal length, and the distortion parameters of the camera, and are used to correct the tangential and radial distortion of the captured images; the external parameters determine the orientation and position of the cameras in three-dimensional space.
As a further optimization, in step C, whether structured light needs to be actively projected is judged from the external ambient light intensity and the texture information of the three-dimensional scene, specifically:
C1. acquiring an image under natural illumination with the camera, and judging from the gray-level histogram distribution of the image and the image signal-to-noise ratio whether the external ambient light is sufficient; if it is sufficient, proceeding to step C2; if not, actively projecting coded structured light to increase the ambient light intensity;
C2. when the external ambient light is sufficient, judging the texture information of the image; if weakly textured regions exist, actively projecting coded structured light to increase the texture information, and recording the positions of the weakly textured regions with labels.
As a further optimization, in step C1, the method of judging from the gray-level histogram distribution and the image signal-to-noise ratio whether the external ambient light is sufficient is:
C11. calculating the image exposure from the gray-level histogram distribution and judging whether the image is under-exposed; if it is, proceeding to step C12 for further judgment;
C12. verifying the image signal-to-noise ratio; if it is below a set threshold, determining that the external ambient light is insufficient.
As a further optimization, in step C11, the method of calculating the image exposure from the gray-level histogram distribution and judging whether the image is under-exposed is:
The image exposure is estimated from the fraction F_rat of the empty region at the ends of the gray-level histogram, the histogram peak gray level g_peak, and the mean gray level g_mean of the histogram:

F_rat = (g_min + 1) / (255 − g_max + g_min + 1)   (1)

where g_max is the highest gray level with a nonzero pixel count (so the empty region at the right edge of the histogram spans the levels above g_max) and g_min is the lowest gray level with a nonzero pixel count (so the empty region at the left edge spans the levels below g_min);
A threshold M < 0.5 is set; when g_peak or g_mean is less than 128 and F_rat < M, the image is judged to be under-exposed.
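The exposure test of eq. (1) can be sketched as follows. This is a minimal illustration under our reading of the patent text: g_min and g_max are taken as the lowest and highest occupied gray levels, so F_rat compares the empty low end of the histogram with the total empty region at both ends, and the threshold M = 0.3 is purely illustrative.

```python
def is_underexposed(hist, M=0.3):
    """Judge under-exposure from a 256-bin gray-level histogram.

    Implements eq. (1) under an assumed convention: g_min / g_max are
    the lowest / highest gray levels with a nonzero pixel count.
    """
    total = sum(hist)
    occupied = [g for g, c in enumerate(hist) if c > 0]
    g_min, g_max = occupied[0], occupied[-1]
    f_rat = (g_min + 1) / (255 - g_max + g_min + 1)          # eq. (1)
    g_peak = max(range(256), key=lambda g: hist[g])          # histogram peak
    g_mean = sum(g * c for g, c in enumerate(hist)) / total  # mean gray level
    return (g_peak < 128 or g_mean < 128) and f_rat < M
```

For a dark image whose pixels all sit below gray level 50, F_rat is small and the peak is low, so the test fires; for a bright image occupying the top of the histogram, F_rat is close to 1 and the test does not fire.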
As a further optimization, in step C12, the method of verifying the image signal-to-noise ratio is:
The signal-to-noise ratio of the image is measured with a no-reference peak signal-to-noise ratio method, taking the standard deviation of flat image regions as the noise standard deviation and substituting this standard deviation for the mean squared error:

σ = sqrt( (1/N) · Σ (I(x, y) − μ)² )   (2)

where N is the total number of pixels in the image, I(x, y) is the gray value of a pixel, and the pixel mean μ is:

μ = (1/N) · Σ I(x, y)   (3)

The whole image is divided into many small regions, the noise standard deviation of each region is computed, the variance values of the regions are sorted in ascending order, and the mean of the smallest values is taken as the noise standard deviation of the whole image. The no-reference peak signal-to-noise ratio of the image is then:

NPSNR = 20 · log₁₀(L / σ)   (4)

where L is the maximum gray level of the image;
A threshold W is set; if the computed no-reference peak signal-to-noise ratio NPSNR is less than W, the external ambient light is judged insufficient, and coded structured light is actively projected to increase the ambient light intensity.
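The block-wise noise estimate and no-reference PSNR described above can be sketched as follows. This is an illustrative reconstruction, not the patent's code: the block size, the fraction of flattest blocks kept, and L = 255 are assumed values.

```python
import math

def npsnr(gray, block=8, frac=0.5, L=255):
    """No-reference PSNR sketch: estimate the noise standard deviation
    as the mean std-dev of the flattest blocks (smallest per-block
    std-devs), then NPSNR = 20*log10(L / sigma)."""
    h, w = len(gray), len(gray[0])
    stds = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            px = [gray[y][x] for y in range(by, by + block)
                             for x in range(bx, bx + block)]
            mu = sum(px) / len(px)
            stds.append(math.sqrt(sum((p - mu) ** 2 for p in px) / len(px)))
    stds.sort()
    keep = stds[:max(1, int(len(stds) * frac))]  # flattest regions only
    sigma = sum(keep) / len(keep)
    return 20 * math.log10(L / max(sigma, 1e-6))
```

A perfectly flat image yields a near-zero sigma and a very large NPSNR; a checkerboard of 0/255 yields sigma = 127.5 and NPSNR = 20·log10(2) ≈ 6 dB, which would fall below any reasonable threshold W.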
As a further optimization, in step C2, when the external ambient light is sufficient, the method of judging the texture information of the image is:
The texture information of the image is judged from the gradient of the pixel grays and from the statistical properties of the pixel grays within a window:

s² = (1/N) · Σ over (u, v) in W(x, y) of (I(u, v) − μ_W)²   (5)
k = Σ over (u, v) in W(x, y) of |∇I(u, v)|   (6)

where N is the number of pixels in the window, W(x, y) is the window centered on pixel (x, y), I(u, v) is the gray value of a pixel in the gray-level image, μ_W is the mean gray value in the window, s² is the variance of the pixels in the window, and k is the sum of the gradient values of the pixels in the window;
Thresholds s_r and k_r are set; if s² < s_r and k < k_r, the region is judged to be weakly textured, and structured light is actively projected to increase the texture information.
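The weak-texture test of eqs. (5)-(6) can be sketched as below. This is an illustrative reconstruction: forward differences stand in for the gradient, and the window size and thresholds s_thresh, k_thresh are assumed values, not the patent's s_r and k_r.

```python
def is_weak_texture(gray, x0, y0, win=7, s_thresh=25.0, k_thresh=200.0):
    """Judge whether the win x win window at (x0, y0) is weakly textured:
    variance s^2 of the window grays (eq. 5) and sum k of absolute
    forward-difference gradients (eq. 6), both below their thresholds."""
    px = []
    k = 0.0
    for y in range(y0, y0 + win):
        for x in range(x0, x0 + win):
            px.append(gray[y][x])
            if x + 1 < x0 + win:                 # horizontal difference
                k += abs(gray[y][x + 1] - gray[y][x])
            if y + 1 < y0 + win:                 # vertical difference
                k += abs(gray[y + 1][x] - gray[y][x])
    mu = sum(px) / len(px)
    s2 = sum((p - mu) ** 2 for p in px) / len(px)
    return s2 < s_thresh and k < k_thresh
```

A uniform patch (such as the white wall mentioned later in the description) has s² = 0 and k = 0 and is flagged weak; a high-contrast checkerboard patch fails both conditions.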
As a further optimization, in step E, if the structured light is used to enhance the texture information of the three-dimensional scene, the method of restoring the depth information of the weakly textured regions with the stereo matching method is:
The weakly textured and non-weakly-textured regions are separated according to the recorded labels. The stereo matching disparities obtained under structured-light assistance serve as the disparity information of the weakly textured regions, and the stereo matching disparities obtained under natural light serve as the disparity information of the non-weakly-textured regions. A dynamic-programming seam method together with weighted superposition fuses the disparity information of the two kinds of regions into the final disparity map of the scene:
The fusion cost along the dynamic-programming path is calculated as:

E(x, y) = E_diff(x, y) + λ·E_color(x, y)   (7)

where d₁ denotes the disparity map obtained under structured-light assistance, d₂ the disparity map obtained under natural light, I₁ the gray image collected under structured-light assistance, and I₂ the gray image obtained under natural light. E_color expresses, through the differences of the pixels in a surrounding rectangular region V, the intensity relationship of the pixel colors within V; E_diff takes the gradient value at positions where the structural change between the two images is small and expresses the similarity of the geometric structure; λ is an adjustment coefficient, N_V is the number of pixels in region V, and E is the fusion cost — the smaller E is, the more likely the point is selected as a seam point;
After the path planning is completed, the disparity maps are fused along the seam by weighted superposition:

d_end(x, y) = ω₁·d₁(x, y) + ω₂·d₂(x, y)   (10)

where ω₁ and ω₂ are the superposition weights, ω₁ + ω₂ = 1, and d_end is the finally fused disparity map.
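The weighted-superposition fusion of eq. (10) can be sketched as follows. This is a simplified stand-in: a one-pixel blending band around the weak-texture border replaces the dynamic-programming seam, and the weights w1 = w2 = 0.5 match the embodiment's values; the patent's actual seam is found by minimizing the cost of eq. (7).

```python
def fuse_disparities(d1, d2, weak_mask, w1=0.5, w2=0.5):
    """Fuse two disparity maps: inside the labelled weak-texture region
    take the structured-light-assisted disparity d1, outside take the
    natural-light disparity d2, and blend w1*d1 + w2*d2 (w1 + w2 = 1,
    eq. 10) in a 1-pixel band around the region border."""
    h, w = len(d1), len(d1[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            on_border = any(
                0 <= y + dy < h and 0 <= x + dx < w
                and weak_mask[y + dy][x + dx] != weak_mask[y][x]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            if on_border:
                out[y][x] = w1 * d1[y][x] + w2 * d2[y][x]
            else:
                out[y][x] = d1[y][x] if weak_mask[y][x] else d2[y][x]
    return out
```

With constant maps d1 = 10 and d2 = 20 and a half-image weak mask, the interior of each region keeps its own disparity and the border band takes the blended value 15.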
As a further optimization, in step E, if the structured light is used to enhance the texture information of the three-dimensional scene, the method of restoring the depth information of the weakly textured regions with the structured-light measurement method is to compute the phase information of the whole three-dimensional scene directly from the coded images modulated by the scene depth, specifically:
N frames of sinusoidal fringes are projected onto the scene surface to be measured, and the camera collects the N frames of deformed fringes modulated by the scene. The intensity of the collected deformed fringes is expressed as:

I_n = R(x, y)·{1 + B(x, y)·cos[φ(x, y) + 2πn/N]}   (n = 1, 2, …, N)   (11)

where R(x, y) is the surface reflectivity coefficient, B(x, y) is the fringe contrast, φ(x, y) is the phase information modulated by the scene depth, and N is the number of projected fringe frames, with N ≥ 3;
According to the N-step phase-shift formula:

φ(x, y) = −arctan[ Σ from n = 1 to N of I_n·sin(2πn/N) / Σ from n = 1 to N of I_n·cos(2πn/N) ]   (12)

the wrapped phase φ(x, y) of the scene is obtained, and the continuous phase information θ(x, y) is then obtained through a phase-unwrapping algorithm.
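The N-step phase-shift recovery can be sketched per pixel as below. This is a minimal sketch of the standard N-step formula (eq. 12 as reconstructed here), assuming the fringe model of eq. (11) with phase shifts 2πn/N for n = 1 … N; real profilometry code would apply it to every pixel of the N captured frames and then unwrap.

```python
import math

def nstep_phase(intensities):
    """Recover the wrapped phase phi from N >= 3 samples
    I_n = R*(1 + B*cos(phi + 2*pi*n/N)), n = 1..N (eq. 11).
    Returns phi wrapped to (-pi, pi]."""
    N = len(intensities)
    num = sum(I * math.sin(2 * math.pi * n / N)
              for n, I in enumerate(intensities, 1))
    den = sum(I * math.cos(2 * math.pi * n / N)
              for n, I in enumerate(intensities, 1))
    # The R and B factors cancel in the ratio; atan2 keeps the quadrant.
    return math.atan2(-num, den)
```

Synthesizing four frames with a known phase of 0.7 rad and arbitrary reflectivity R = 100 and contrast B = 0.5 recovers the same phase, since R and B cancel out.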
The beneficial effects of the present invention are:
The method automatically judges whether the shooting-environment light intensity and the texture information of the three-dimensional scene satisfy the requirements of stereo matching, and decides from the judgment whether to project coded structured light, using active projection to increase the external ambient light intensity and the scene texture information required by stereo matching, or using the scene depth information to modulate the coded structured light. The method preserves the low-power-consumption characteristic of the binocular vision method, while solving both the difficulty of obtaining a depth map when the external ambient light is weak and the low precision of the depth map in weakly textured regions of the three-dimensional scene.
Detailed description of the invention
Fig. 1 is the decision flow chart for whether structured light needs to be projected onto the three-dimensional scene;
Fig. 2 is the flow chart of the depth map acquisition method combining stereo matching with structured light;
Fig. 3 is the flow chart of obtaining the depth map directly with the structured-light measurement method.
Specific embodiment
The present invention provides a depth map acquisition method based on the combination of stereo matching and structured light, intended to solve the stereo matching problem in weakly textured image regions and under insufficient image exposure. The method automatically judges whether the shooting-environment light intensity and the texture information of the three-dimensional scene satisfy the requirements of stereo matching, and decides from the judgment whether to project coded structured light: active projection increases the external ambient light intensity and the scene texture information required by stereo matching, or the scene depth information modulates the coded structured light. The method preserves the low-power-consumption characteristic of the binocular vision method while solving both the difficulty of obtaining a depth map under weak external ambient light and the low precision of the depth map in weakly textured regions of the three-dimensional scene.
In the present invention, a projector projects the coded structured light, the left and right cameras collect the images to be matched and transmit them to a computer, and the computer obtains the disparity map of the three-dimensional scene with a stereo matching algorithm and from it the corresponding depth map. When the external ambient light is sufficient and the photographed scene contains no weakly textured regions, the depth information of the three-dimensional scene is obtained directly from the images shot under natural light; when the acquired texture information is sparse, or the external ambient light during acquisition is weak, coded structured light is actively projected to increase the ambient light intensity and the scene texture information, improving the stereo matching precision. Since the structured light is modulated by the scene while enhancing the scene texture, the scene depth information can also be recovered directly with the structured-light measurement method; therefore, in scenes with sparse texture information or weak external ambient light, either the stereo matching method or the structured-light measurement method is chosen to restore the depth information.
In a specific implementation, the depth map acquisition method based on the combination of stereo matching and structured light in the present invention comprises the following steps:
1. Structured-light coding:
The present invention uses spatially coded structured light. When the scene contains weakly textured regions (such as a white wall), structured light is actively projected, and the gray information of the projected image enhances the texture information of the three-dimensional scene, or the depth information of the scene itself modulates the projected structured light. When the external ambient light is insufficient (such as in dark surroundings), structured light is likewise actively projected to increase the intensity of the external ambient light.
When the depth information of the three-dimensional scene is obtained by the stereo matching method, in order to enhance the feature information of the weakly textured regions and maximize the stereo matching precision, a coding scheme with distinctive feature information is preferably selected, such as De Bruijn coding, coding based on graphical information, or M-array coding based on geometric features. When the depth information is obtained by the structured-light measurement method, a coding scheme whose pattern is modulated by the scene with a known regularity is preferably selected, such as sinusoidal fringe coding or Gray-code fringe coding.
2. Binocular camera calibration:
This is the process of obtaining the internal and external parameters of the cameras. The internal parameters include the principal point, the focal length, and the distortion parameters of the camera, and are used to correct the tangential and radial distortion of the captured images; the external parameters determine the orientation and position of the cameras in three-dimensional space. When the scene depth information is obtained by the stereo matching method, calibration yields the mapping (x, y, d) → (X_C, Y_C, Z_C), where (x, y) are the image pixel coordinates, d is the disparity value at that coordinate, and (X_C, Y_C, Z_C) are the three-dimensional coordinates of the spatial point corresponding to (x, y, d); after epipolar rectification, the disparity d is obtained as the difference of the abscissas of corresponding points in the left and right views. When the scene depth information is obtained by the structured-light measurement method, calibration yields the mapping (x, y, θ) → (X_C, Y_C, Z_C), where θ is the phase value at pixel coordinate (x, y), obtainable by methods such as phase-measuring profilometry or Fourier-transform profilometry.
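For intuition, the (x, y, d) → (X_C, Y_C, Z_C) mapping for a rectified binocular rig reduces, in the standard pinhole model, to triangulation from the baseline. The sketch below is a generic illustration under that model (assumed focal lengths fx, fy in pixels, principal point (cx, cy), baseline in metres), not the patent's calibrated lookup.

```python
def pixel_to_point(x, y, d, fx, fy, cx, cy, baseline):
    """Map a pixel (x, y) with disparity d to camera coordinates
    (X, Y, Z) for a rectified stereo pair: Z = fx * B / d, then
    back-project through the pinhole model."""
    Z = fx * baseline / d          # depth from disparity
    X = (x - cx) * Z / fx          # back-project x through the pinhole model
    Y = (y - cy) * Z / fy
    return X, Y, Z
```

For example, with a 700-pixel focal length and a 10 cm baseline, a disparity of 70 pixels corresponds to a depth of exactly 1 m.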
3. Environment judgment and processing:
The flow of this step is shown in Fig. 1 and consists of three parts: judging the external ambient light intensity, judging the texture information of the three-dimensional scene, and deciding from the judgment whether to actively project structured light.
First, an image is acquired under natural illumination by the camera, and whether the external ambient light is sufficient is judged from the gray-level histogram distribution of the image and the image signal-to-noise ratio SNR.
Second, if the histogram distribution concentrates in the low-gray region and the SNR value is below the set threshold W, the external ambient light under the shooting conditions is judged weak, and coded structured light is projected.
Third, if the external ambient light satisfies the stereo matching requirements, the originally shot image is used to judge whether the photographed object contains weakly textured regions. The present invention defines weak texture as image regions where the pixel grays do not change, or change only very faintly; this change is reflected in the gradient of the pixel grays and in the statistical properties of the pixel grays within a window. Whether the image contains weakly textured regions is judged from the gradient characteristics or the statistical properties; if such regions exist, coded structured light is projected to enhance the texture information of the scene, and the positions of the weakly textured regions are simultaneously recorded with labels so that subsequent steps can distinguish the pixel positions of the weakly textured and non-weakly-textured regions.
4. Left and right images are shot with the calibrated cameras. If no structured-light assistance is needed, the image information under natural light is recorded directly, the disparity map is obtained, and the depth map of the three-dimensional scene is then obtained from the mapping relationship between disparity and depth. If structured-light assistance is needed and the actively projected structured light is used to enhance the external ambient light intensity, the image information under structured light is recorded, either the stereo matching method or the structured-light measurement method is chosen to obtain the disparity map or phase map, and step 6 is executed;
If structured-light assistance is needed and the actively projected structured light is used to enhance the texture information of the three-dimensional scene, the image information under structured light and under natural light is recorded respectively, and step 5 is executed.
5. If the depth information of the weakly textured regions is obtained by the stereo matching method, the weakly textured and non-weakly-textured regions are separated according to the recorded labels. The stereo matching disparities obtained under structured-light assistance serve as the disparity information of the weakly textured regions, and the stereo matching disparities obtained under natural light serve as the disparity information of the non-weakly-textured regions. A dynamic-programming seam method together with weighted superposition fuses the disparity information of the two kinds of regions into the final disparity map of the scene.
The present invention requires the disparity difference of the two disparity maps on the seam to be minimal, and the pixel values in the neighborhood of the seam to be as close as possible. To best satisfy these conditions, the fusion cost is defined as:

E(x, y) = E_diff(x, y) + λ·E_color(x, y)

where d₁ denotes the disparity map obtained under structured-light assistance, d₂ the disparity map obtained under natural light, I₁ the gray image collected under structured-light assistance, and I₂ the gray image obtained under natural light. E_color expresses, through the differences of the pixels in a surrounding rectangular region V, the intensity relationship of the pixel colors within V; E_diff takes the gradient value at positions where the structural change between the two images is small and expresses the similarity of the geometric structure; λ is an adjustment coefficient, N_V is the number of pixels in region V, and E is the fusion cost — the smaller E is, the more likely the point is selected as a seam point.
After the path planning is completed, the disparity maps are fused along the seam by weighted superposition:

d_end(x, y) = ω₁·d₁(x, y) + ω₂·d₂(x, y)

where ω₁ and ω₂ are the superposition weights, ω₁ + ω₂ = 1, and d_end is the finally fused disparity map.
If the depth information of the weakly textured regions is obtained by the structured-light measurement method, the phase information of the entire three-dimensional scene is computed directly from the coded images modulated by the scene depth.
6. Obtaining the scene depth map:
If the stereo matching method was selected to obtain the scene depth information, this step restores it with the mapping (x, y, d) → (X_C, Y_C, Z_C). If the structured-light measurement method was selected, this step restores it with the mapping (x, y, θ) → (X_C, Y_C, Z_C).
Embodiment:
This embodiment illustrates the processes of restoring the depth information of weakly textured regions with the stereo matching method and with the structured-light measurement method, respectively.
1. Restoring the depth information of weakly textured regions with the stereo matching method:
In this method, the projected structured light is used to enhance the texture information of the weakly textured regions. Since a digital projector projects the structured light in this example, only the projected image needs to be encoded; and since a local algorithm performs the stereo matching and pseudo-random coding has window uniqueness, the projected image in this example is encoded with pseudo-random coding. The coded structured light is obtained by projecting the coded image with the digital projector. The flow is shown in Fig. 2. Following the stereo matching principle, the binocular camera is first calibrated with a chessboard pattern to obtain the internal and external parameters of the two cameras. After the images are acquired by the cameras, the shooting environment is assessed, i.e., whether the external ambient light intensity and the scene texture information satisfy the stereo matching requirements. The decision process is as follows:
1) First the image histogram and the image signal-to-noise ratio SNR are computed to judge whether the external ambient light is sufficient:
11) Judging the image exposure: when an image is under-exposed, most pixels in its gray-level histogram concentrate in the low-gray region, with obvious spill-over there, and few pixels appear in the high-gray region. The image exposure is estimated from the fraction F_rat of the empty region at the ends of the gray-level histogram, the histogram peak gray level g_peak, and the mean gray level g_mean of the histogram:

F_rat = (g_min + 1) / (255 − g_max + g_min + 1)

where g_max is the highest gray level with a nonzero pixel count and g_min is the lowest gray level with a nonzero pixel count;
A threshold M < 0.5 is set; when g_peak or g_mean is less than 128 and F_rat < M, the image is under-exposed.
12) To exclude the interference of dark objects in the judgment, the present invention verifies the result again with the signal-to-noise ratio. Preferably, the signal-to-noise ratio of the image is measured with a no-reference peak signal-to-noise ratio method, taking the standard deviation of flat image regions as the noise standard deviation and substituting this standard deviation for the mean squared error:

σ = sqrt( (1/N) · Σ (I(x, y) − μ)² )

where N is the total number of pixels in the image, I(x, y) is the gray value of a pixel, and the pixel mean μ is:

μ = (1/N) · Σ I(x, y)

The whole image is divided into many small regions, the noise standard deviation of each region is computed, the variance values of the regions are sorted in ascending order, and the mean of the smallest values is taken as the noise standard deviation of the whole image; the no-reference peak signal-to-noise ratio of the image is then:

NPSNR = 20 · log₁₀(L / σ)

where L is the maximum gray level of the image.
A threshold W is set; if the computed NPSNR is less than the threshold, the external ambient light under this condition is judged insufficient, and structured light is actively projected to increase the light-intensity information.
2) When the external ambient light is sufficient, the texture information of the image is judged. Both the gray-level gradient of the image pixels and the statistical properties of the gray levels within a window reflect the texture of the image; preferably, to make the decision more accurate, the two are combined. The decision criteria are
s² = (1/N) · Σ_{(u,v)∈W(x,y)} (I(u, v) − Ī)²,  k = Σ_{(u,v)∈W(x,y)} |∇I(u, v)|
where N is the number of pixels in the window, W(x,y) is the window centered on pixel (x, y), I(u,v) is the gray value of a pixel in the gray image, Ī is the mean gray value within the window, s² is the gray-level variance within the window, and k is the sum of the gradient values within the window. Set thresholds s_r and k_r: if s² < s_r and k < k_r, the region is judged to be a weak-texture region, and structured light is actively projected to add texture information.
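The variance-plus-gradient test can be sketched as follows; the window size and the threshold values s_r and k_r are illustrative stand-ins, since the patent leaves them to be set:

```python
import numpy as np

def is_weak_texture(gray, y, x, half=3, s_r=25.0, k_r=200.0):
    """Weak-texture test in the window W(x, y): gray-level variance s^2
    plus the sum k of gradient magnitudes, both thresholded."""
    win = gray[max(0, y - half):y + half + 1,
               max(0, x - half):x + half + 1].astype(float)
    s2 = win.var()                                # s^2: variance in the window
    gy, gx = np.gradient(win)
    k = np.abs(gx).sum() + np.abs(gy).sum()       # k: sum of gradient values
    return s2 < s_r and k < k_r
```

A flat patch passes both thresholds and is marked weak-texture, whereas a strongly patterned patch fails on both.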
After the decision is complete, images are collected with the left and right cameras. If the current environment contains weak-texture regions, the stereo-matching algorithm is used to obtain both the disparity map under natural light and the disparity map under structured-light assistance, and the pixel positions of the weak-texture regions are recorded. Preferably, to balance accuracy and speed in the matching process, the adaptive support-weight (ASW) method is used to compute the disparity maps. The final fused disparity map d_end is obtained by finding a suture path with dynamic programming and fusing along it by weighted superposition. In this example, the fusion cost along the dynamic-programming path is
E(x, y) = E_diff(x, y) + 0.3 · E_color(x, y)
and the fusion formula is
d_end(x, y) = 0.5 · d1(x, y) + 0.5 · d2(x, y)
Finally, the three-dimensional scene depth map is obtained through the mapping relation between disparity and depth.
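The weighted fusion of the two disparity maps and the final disparity-to-depth mapping Z = f·B/d can be sketched together; the equal weights match the example above, while the focal length and baseline are hypothetical calibration values:

```python
import numpy as np

def fuse_and_depth(d1, d2, w1=0.5, w2=0.5, focal=700.0, baseline=0.06):
    """Fuse the structured-light-assisted disparity d1 with the
    natural-light disparity d2 by weighted superposition, then map
    disparity to depth via Z = f * B / d."""
    d_end = w1 * d1 + w2 * d2
    # zero (invalid) disparity maps to depth 0 rather than infinity
    depth = np.where(d_end > 0, focal * baseline / np.maximum(d_end, 1e-9), 0.0)
    return d_end, depth
```

For example, disparities of 10 px and 14 px fuse to 12 px, which with f = 700 px and B = 0.06 m gives a depth of 3.5 m.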
2. Recovering the depth information of weak-texture regions with the structured-light measurement method:
When the structured-light measurement method is selected to recover the depth information of the weak-texture regions, the three-dimensional information of the entire scene is recovered directly by structured-light measurement. Preferably, sinusoidal fringe patterns are chosen as the coded images, and the phase information of the scene is obtained by phase-measuring profilometry, which uses an N-step phase-shift method: N frames of sinusoidal fringes, with a phase-shift interval of 2π/N, are projected onto the scene surface to be measured.
Referring to Fig. 3, the detailed process of obtaining the weak-texture-region scene depth map with the structured-light measurement method first makes a decision on the environment; the decision process is the same as in the stereo-matching method. If weak-texture regions exist in the scene, N frames of sinusoidal fringes are projected onto the scene surface to be measured, and the camera then captures the N frames of deformed fringes modulated by the scene. The intensity of the captured deformed fringes is expressed as
I_n = R(x, y) · {1 + B(x, y) · cos[φ(x, y) + 2πn/N]} (n = 1, 2, …, N)
where R(x, y) is the surface reflectivity of the object, B(x, y) is the fringe contrast, φ(x, y) is the phase information modulated by the scene depth, and N is the number of projected fringe frames, with N ≥ 3. According to the N-step phase-shift calculation formula
φ(x, y) = arctan[ −Σ_{n=1}^{N} I_n · sin(2πn/N) / Σ_{n=1}^{N} I_n · cos(2πn/N) ]
the truncated (wrapped) phase φ(x, y) of the scene is obtained; the continuous phase θ(x, y) is then obtained by a phase-unwrapping algorithm, and the three-dimensional scene depth map is finally obtained through the mapping relation between phase and depth.
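The N-step wrapped-phase recovery described above can be sketched as follows; the synthetic reflectivity and contrast values used in testing it are arbitrary inputs, not values from the patent:

```python
import numpy as np

def wrapped_phase(frames):
    """Recover the wrapped phase phi(x, y) from N phase-shifted fringe
    images I_n = R*(1 + B*cos(phi + 2*pi*n/N)), n = 1..N, N >= 3."""
    frames = np.asarray(frames, dtype=float)
    N = frames.shape[0]
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    delta = 2.0 * np.pi * n / N                   # phase shift of each frame
    num = -(frames * np.sin(delta)).sum(axis=0)   # proportional to sin(phi)
    den = (frames * np.cos(delta)).sum(axis=0)    # proportional to cos(phi)
    return np.arctan2(num, den)                   # truncated to (-pi, pi]
```

Generating four synthetic frames from a known phase field and feeding them back recovers that field, up to the 2π wrapping that the subsequent unwrapping step removes.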

Claims (9)

1. A depth image acquisition method combining stereo matching with structured light, characterized by comprising the following steps:
A. encoding the structured light;
B. calibrating a binocular camera;
C. judging, according to the external ambient light intensity and the texture information of the three-dimensional scene, whether structured light needs to be actively projected;
D. if active projection of structured light is not needed, capturing left and right images with the calibrated binocular camera, directly recording the image information under natural light, obtaining a disparity map, and then obtaining the depth map of the three-dimensional scene according to the mapping relation between disparity and depth;
E. if active projection of structured light is needed, projecting the encoded structured light with a projector and capturing left and right images with the calibrated binocular camera; if the structured light is used to enhance the external ambient light intensity, recording the image information under structured light, choosing either the stereo-matching method or the structured-light measurement method to obtain a disparity map or a phase map, and entering step F;
if the structured light is used to enhance the texture information of the three-dimensional scene, recording the image information under structured light and under natural light respectively, choosing either the stereo-matching method or the structured-light measurement method to recover the depth information of the weak-texture regions, thereby obtaining the disparity map or phase map of the three-dimensional scene, and entering step F;
F. obtaining the depth map of the three-dimensional scene according to the mapping relation between disparity and depth, or according to the mapping relation between phase and depth;
in step E, if the structured light is used to enhance the texture information of the three-dimensional scene, the method of recovering the depth information of the weak-texture regions with the stereo-matching method is:
dividing the scene into weak-texture regions and non-weak-texture regions according to the recorded labels of the weak-texture regions, taking the structured-light-assisted stereo-matching disparity information as the disparity information of the weak-texture regions and the natural-light stereo-matching disparity information as the disparity information of the non-weak-texture regions, and fusing the disparity information of the weak-texture and non-weak-texture regions by a dynamic-programming suture method with weighted superposition to obtain the final disparity map of the scene;
wherein the fusion cost along the dynamic-programming path is calculated as:
E(x, y) = E_diff(x, y) + λE_color(x, y) (7)
wherein d1 denotes the disparity map obtained under structured-light assistance, d2 the disparity map obtained under natural light, I1 the gray image collected under structured-light assistance, and I2 the gray image obtained under natural light; E_color uses the differences between a pixel and the pixels in a rectangular region V around it to express the chromatic intensity relationship within V; E_diff takes the gradient value at positions where the structural change of the two images is smaller to express the similarity of the geometric structure; λ is an adjustment factor; N_V is the total number of pixels in region V; E is the fusion cost, and the smaller E is, the greater the probability that the point is selected as a point on the suture;
after path planning is completed, the disparity maps are fused along the suture by weighted superposition:
d_end(x, y) = ω1·d1(x, y) + ω2·d2(x, y) (10)
wherein ω1 and ω2 are the weight coefficients of the weighted superposition (ω1 + ω2 = 1), and d_end is the final fused disparity map.
2. The depth image acquisition method combining stereo matching with structured light according to claim 1, characterized in that, in step A, when the stereo-matching method is used to obtain the three-dimensional scene depth information, an encoding mode with distinct feature information is selected to encode the structured light, including: De Bruijn coding, coding based on graphical information, or M-array coding based on geometric features; when the structured-light measurement method is used to obtain the three-dimensional scene depth information, an encoding mode that varies with a certain regularity when modulated by the scene is selected to encode the structured light, including: sinusoidal fringe pattern coding or Gray-code fringe pattern coding.
3. The depth image acquisition method combining stereo matching with structured light according to claim 1, characterized in that, in step B, the binocular camera is calibrated with a checkerboard pattern to obtain the intrinsic and extrinsic parameters of the two cameras; the intrinsic parameters include the principal point, the focal length and the distortion parameters of the cameras, and are used to correct the tangential and radial distortion of the captured images, while the extrinsic parameters are used to determine the orientation and position of the cameras in three-dimensional space.
4. The depth image acquisition method combining stereo matching with structured light according to claim 1, characterized in that, in step C, judging whether structured light needs to be actively projected according to the external ambient light intensity and the three-dimensional scene texture information specifically comprises:
C1. obtaining an image under natural illumination with the camera, and judging whether the external ambient light intensity is sufficient from the gray-level histogram distribution characteristics and the signal-to-noise ratio of the image; if the external ambient light intensity is sufficient, entering step C2; if it is insufficient, actively projecting the encoded structured light to add ambient light intensity information;
C2. when the external ambient light intensity is sufficient, judging the texture information of the image; if weak-texture regions exist, actively projecting the encoded structured light to add texture information, and recording the positions of the weak-texture regions with labels.
5. The depth image acquisition method combining stereo matching with structured light according to claim 4, characterized in that, in step C1, the method of judging whether the external ambient light intensity is sufficient from the gray-level histogram distribution characteristics and the image signal-to-noise ratio is:
C11. calculating the image exposure level from the gray-level histogram distribution characteristics and judging whether the image is under-exposed; if it is under-exposed, entering step C12 for further judgement;
C12. verifying the image signal-to-noise ratio; if the image signal-to-noise ratio is less than a set threshold, determining that the external ambient light is insufficient.
6. The depth image acquisition method combining stereo matching with structured light according to claim 5, characterized in that, in step C11, the method of calculating the image exposure level from the gray-level histogram distribution characteristics and judging whether the image is under-exposed is:
estimating the exposure level from the ratio F_rat occupied by the empty regions at the two ends of the gray-level histogram, the histogram peak gray level g_peak, and the mean gray level g_mean:
F_rat = (g_min + 1)/(255 − g_max + g_min + 1) (1)
where g_max is the largest gray level in the empty run at the left edge of the gray-level histogram (pixel count zero), and g_min is the smallest gray level in the empty run at the right edge (pixel count zero);
setting a threshold M < 0.5: when g_peak or g_mean is less than 128 and F_rat < M, the image is judged to be under-exposed.
7. The depth image acquisition method combining stereo matching with structured light according to claim 6, characterized in that, in step C12, the method of verifying the image signal-to-noise ratio is:
measuring the image signal-to-noise ratio with a no-reference peak-signal-to-noise-ratio method, taking the standard deviation of flat image regions as the noise standard deviation in place of the mean-square error; the noise standard deviation is
σ = sqrt( (1/N) · Σ (I(x, y) − μ)² )
where N is the total number of pixels in the image, I(x, y) is the gray value of a pixel, and the pixel mean μ is
μ = (1/N) · Σ I(x, y)
dividing the whole image into many small regions, computing the noise standard deviation of each region, sorting the variance values of the regions in ascending order, and taking the mean of the smallest variances as the noise standard deviation σ of the whole image; the no-reference peak-signal-to-noise-ratio of the image is then
NPSNR = 10 · log10(L²/σ²)
where L is the maximum gray level of the image;
setting a threshold W: if the computed no-reference peak signal-to-noise ratio NPSNR is less than the set threshold W, determining that the external ambient light is insufficient, and then actively projecting the encoded structured light to add ambient light intensity information.
8. The depth image acquisition method combining stereo matching with structured light according to claim 7, characterized in that, in step C2, when the external ambient light intensity is sufficient, the method of judging the texture information of the image is:
judging the texture information of the image from the gradient characteristics of the pixel gray levels and the statistical properties of the gray levels within a window:
s² = (1/N) · Σ_{(u,v)∈W(x,y)} (I(u, v) − Ī)²,  k = Σ_{(u,v)∈W(x,y)} |∇I(u, v)|
where N is the number of pixels in the window, W(x,y) is the window centered on pixel (x, y), I(u,v) is the gray value of a pixel in the gray image, Ī is the mean gray value within the window, s² is the gray-level variance within the window, and k is the sum of the gradient values within the window;
setting thresholds s_r and k_r: if s² < s_r and k < k_r, the region is judged to be a weak-texture region, and structured light is actively projected to add texture information.
9. The depth image acquisition method combining stereo matching with structured light according to claim 1, characterized in that, in step E, if the structured light is used to enhance the texture information of the three-dimensional scene, the method of recovering the depth information of the weak-texture regions with the structured-light measurement method is: directly using the coded images modulated by the scene depth to obtain the phase information of the whole three-dimensional scene, specifically comprising:
projecting N frames of sinusoidal fringes onto the scene surface to be measured, then capturing with the camera the N frames of deformed fringes modulated by the scene, the intensity of the captured deformed fringes being expressed as:
I_n = R(x, y) · {1 + B(x, y) · cos[φ(x, y) + 2πn/N]} (n = 1, 2, …, N) (11)
where R(x, y) is the surface reflectivity of the object, B(x, y) is the fringe contrast, φ(x, y) is the phase information modulated by the scene depth, and N is the number of projected fringe frames, with N ≥ 3;
according to the N-step phase-shift calculation formula
φ(x, y) = arctan[ −Σ_{n=1}^{N} I_n · sin(2πn/N) / Σ_{n=1}^{N} I_n · cos(2πn/N) ]
the truncated (wrapped) phase φ(x, y) of the scene is obtained, and the continuous phase θ(x, y) of the scene is then obtained by a phase-unwrapping algorithm.
CN201610927415.2A 2016-10-24 2016-10-24 A kind of depth picture capturing method combined based on Stereo matching with structure light Active CN106504284B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610927415.2A CN106504284B (en) 2016-10-24 2016-10-24 A kind of depth picture capturing method combined based on Stereo matching with structure light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610927415.2A CN106504284B (en) 2016-10-24 2016-10-24 A kind of depth picture capturing method combined based on Stereo matching with structure light

Publications (2)

Publication Number Publication Date
CN106504284A CN106504284A (en) 2017-03-15
CN106504284B true CN106504284B (en) 2019-04-12

Family

ID=58318666

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610927415.2A Active CN106504284B (en) 2016-10-24 2016-10-24 A kind of depth picture capturing method combined based on Stereo matching with structure light

Country Status (1)

Country Link
CN (1) CN106504284B (en)

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107087150B (en) * 2017-04-26 2019-05-21 成都通甲优博科技有限责任公司 A kind of three-dimensional camera shooting method, system and device based on binocular solid and photometric stereo
CN108986197B (en) * 2017-11-30 2022-02-01 成都通甲优博科技有限责任公司 3D skeleton line construction method and device
CN108181319B (en) * 2017-12-12 2020-09-11 陕西三星洁净工程有限公司 Accumulated dust detection device and method based on stereoscopic vision
CN109661683B (en) * 2017-12-15 2020-09-15 深圳配天智能技术研究院有限公司 Structured light projection method, depth detection method and structured light projection device based on image content
CN109961417B (en) * 2017-12-26 2021-04-06 广州极飞科技有限公司 Image processing method, image processing apparatus, and mobile apparatus control method
TWI672676B (en) * 2018-03-27 2019-09-21 恆景科技股份有限公司 Structured-light system of dynamically generating a depth map
CN108495113B (en) * 2018-03-27 2020-10-27 百度在线网络技术(北京)有限公司 Control method and device for binocular vision system
CN110349196B (en) * 2018-04-03 2024-03-29 联发科技股份有限公司 Depth fusion method and device
CN109191509A (en) * 2018-07-25 2019-01-11 广东工业大学 A kind of virtual binocular three-dimensional reconstruction method based on structure light
CN110855961A (en) * 2018-08-20 2020-02-28 奇景光电股份有限公司 Depth sensing device and operation method thereof
WO2020037575A1 (en) * 2018-08-22 2020-02-27 深圳市大疆创新科技有限公司 Image depth estimation method, apparatus, readable storage medium, and electronic device
CN109146947B (en) * 2018-09-04 2021-09-28 清华-伯克利深圳学院筹备办公室 Marine fish three-dimensional image acquisition and processing method, device, equipment and medium
CN109461181B (en) * 2018-10-17 2020-10-27 北京华捷艾米科技有限公司 Depth image acquisition method and system based on speckle structured light
CN109443239A (en) * 2018-12-03 2019-03-08 广州欧科信息技术股份有限公司 Structural light measurement method, apparatus, equipment, storage medium and system
CN109540023B (en) * 2019-01-22 2019-11-26 西安电子科技大学 Object surface depth value measurement method based on two-value grid coding formwork structure light
CN113677192B (en) * 2019-02-15 2023-06-02 阿普哈维斯特技术股份有限公司 Depth and vision sensor for testing agricultural environment
CN109885053A (en) 2019-02-28 2019-06-14 深圳市道通智能航空技术有限公司 A kind of obstacle detection method, device and unmanned plane
CN110120074B (en) * 2019-05-10 2020-08-25 清研同创机器人(天津)有限公司 Cable positioning method for live working robot in complex environment
CN110246192A (en) * 2019-06-20 2019-09-17 招商局重庆交通科研设计院有限公司 Binocular crag deforms intelligent identification Method
CN110557622B (en) * 2019-09-03 2021-04-02 歌尔光学科技有限公司 Depth information acquisition method and device based on structured light, equipment and medium
CN111142088B (en) * 2019-12-26 2022-09-13 奥比中光科技集团股份有限公司 Light emitting unit, depth measuring device and method
CN111260715B (en) * 2020-01-20 2023-09-08 深圳市普渡科技有限公司 Depth map processing method, small obstacle detection method and system
CN111275776A (en) * 2020-02-11 2020-06-12 北京淳中科技股份有限公司 Projection augmented reality method and device and electronic equipment
CN113840130A (en) * 2020-06-24 2021-12-24 中兴通讯股份有限公司 Depth map generation method, device and storage medium
CN111951376B (en) * 2020-07-28 2023-04-07 中国科学院深圳先进技术研究院 Three-dimensional object reconstruction method fusing structural light and photometry and terminal equipment
CN114078074A (en) * 2020-08-11 2022-02-22 北京芯海视界三维科技有限公司 Image processing device and terminal
CN114189670B (en) * 2020-09-15 2024-01-23 北京小米移动软件有限公司 Display method, display device, display apparatus and storage medium
CN112212806B (en) * 2020-09-18 2022-09-13 南京理工大学 Three-dimensional phase unfolding method based on phase information guidance
CN112595262B (en) * 2020-12-08 2022-12-16 广东省科学院智能制造研究所 Binocular structured light-based high-light-reflection surface workpiece depth image acquisition method
CN112945140B (en) * 2021-01-29 2022-09-16 四川大学 Color object three-dimensional measurement method based on lookup table and region segmentation
CN112927280B (en) * 2021-03-11 2022-02-11 北京的卢深视科技有限公司 Method and device for acquiring depth image and monocular speckle structured light system
CN113052886A (en) * 2021-04-09 2021-06-29 同济大学 Method for acquiring depth information of double TOF cameras by adopting binocular principle
CN113205592B (en) * 2021-05-14 2022-08-05 湖北工业大学 Light field three-dimensional reconstruction method and system based on phase similarity
CN114998408B (en) * 2022-04-26 2023-06-06 宁波益铸智能科技有限公司 Punch line ccd vision detection system based on laser measurement
CN114924585B (en) * 2022-05-19 2023-03-24 广东工业大学 Safe landing method and system of unmanned gyroplane on rugged ground surface based on vision
CN116608794B (en) * 2023-07-17 2023-10-03 山东科技大学 Anti-texture 3D structured light imaging method, system, device and storage medium
CN116880101B (en) * 2023-07-21 2024-01-30 湖南日光显示技术有限公司 VA type LCD display screen and preparation method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201241547A (en) * 2011-04-14 2012-10-16 Ind Tech Res Inst System, device and method for acquiring depth image
CN103824318A (en) * 2014-02-13 2014-05-28 西安交通大学 Multi-camera-array depth perception method
CN105869167A (en) * 2016-03-30 2016-08-17 天津大学 High-resolution depth map acquisition method based on active and passive fusion
CN105931240A (en) * 2016-04-21 2016-09-07 西安交通大学 Three-dimensional depth sensing device and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101259835B1 (en) * 2009-06-15 2013-05-02 한국전자통신연구원 Apparatus and method for generating depth information

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW201241547A (en) * 2011-04-14 2012-10-16 Ind Tech Res Inst System, device and method for acquiring depth image
CN103824318A (en) * 2014-02-13 2014-05-28 西安交通大学 Multi-camera-array depth perception method
CN105869167A (en) * 2016-03-30 2016-08-17 天津大学 High-resolution depth map acquisition method based on active and passive fusion
CN105931240A (en) * 2016-04-21 2016-09-07 西安交通大学 Three-dimensional depth sensing device and method

Also Published As

Publication number Publication date
CN106504284A (en) 2017-03-15

Similar Documents

Publication Publication Date Title
CN106504284B (en) A kind of depth picture capturing method combined based on Stereo matching with structure light
US11721067B2 (en) System and method for virtual modeling of indoor scenes from imagery
US10949978B2 (en) Automatic background replacement for single-image and multi-view captures
Maier et al. Intrinsic3D: High-quality 3D reconstruction by joint appearance and geometry optimization with spatially-varying lighting
EP3242275B1 (en) Using photo collections for three dimensional modeling
Furukawa et al. Accurate, dense, and robust multiview stereopsis
US20200234397A1 (en) Automatic view mapping for single-image and multi-view captures
CN103971404B (en) 3D real-scene copying device having high cost performance
US20230419438A1 (en) Extraction of standardized images from a single-view or multi-view capture
US10950032B2 (en) Object capture coverage evaluation
US20200258309A1 (en) Live in-camera overlays
Zhu et al. Video-based outdoor human reconstruction
US20130272600A1 (en) Range image pixel matching method
EP2650843A2 (en) Image processor, lighting processor and method therefor
US9147279B1 (en) Systems and methods for merging textures
CN108921895A (en) A kind of sensor relative pose estimation method
CN107507269A (en) Personalized three-dimensional model generating method, device and terminal device
Klaudiny et al. High-detail 3D capture and non-sequential alignment of facial performance
CN107492107A (en) The object identification merged based on plane with spatial information and method for reconstructing
Fei et al. Ossim: An object-based multiview stereo algorithm using ssim index matching cost
CN110349249A (en) Real-time dense method for reconstructing and system based on RGB-D data
CN114494582A (en) Three-dimensional model dynamic updating method based on visual perception
Ylimäki et al. Accurate 3-d reconstruction with rgb-d cameras using depth map fusion and pose refinement
Nguyen et al. High-definition texture reconstruction for 3D image-based modeling
CN104599283B (en) A kind of picture depth improved method for recovering camera heights based on depth difference

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant