CN106228507B - Depth image processing method based on light field - Google Patents
Depth image processing method based on light field
- Publication number
- CN106228507B CN106228507B CN201610541262.8A CN201610541262A CN106228507B CN 106228507 B CN106228507 B CN 106228507B CN 201610541262 A CN201610541262 A CN 201610541262A CN 106228507 B CN106228507 B CN 106228507B
- Authority
- CN
- China
- Prior art keywords
- initial
- shooting target
- preset
- normal field
- reflectance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T3/06
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
- H04N13/128—Adjusting depth or disparity
Abstract
The invention discloses a depth image processing method based on light fields, comprising the steps of: capturing an initial 4D light-field color image and an initial depth image of a preset shooting target with a light-field acquisition device; preprocessing them to obtain an initial 3D mesh model of the preset shooting target and its corresponding initial normal field; analyzing and computing the surface reflectance of the preset shooting target; modeling the light-field image according to the initial normal field and surface reflectance of the preset shooting target, obtaining an illumination model and its illumination parameters; optimizing the initial normal field of the preset shooting target according to its surface reflectance and the illumination parameters of the illumination model; and, after performing depth enhancement on the initial depth image according to the optimized normal field, rebuilding the 3D mesh model of the preset shooting target. Based on a 4D light field, the invention can reconstruct the shape of the captured target, realize stereoscopic light-field display of the captured target, and obtain a high-quality depth image.
Description
Technical field
The present invention relates to the technical fields of light-field imaging, image processing, and computer vision, and more particularly to a depth image processing method based on light fields.
Background art
With the continuous development of science and technology, three-dimensional scene information in computer vision systems offers more possibilities for applications such as image segmentation, object detection, and object tracking. Compared with a two-dimensional image, a depth image carries the three-dimensional feature information of an object, i.e., depth information, so depth images are widely used as a universal representation of three-dimensional scene information. Accordingly, detecting and recognizing three-dimensional objects with an imaging device that can capture color and depth information simultaneously will become a new hot spot in the computer vision field, and the acquisition of depth images is a key technology therein.
In computer vision systems, methods for obtaining depth images fall into two classes: passive and active. Passive methods mainly rely on ambient illumination for imaging; the common approach is binocular stereo vision, and light-field imaging, as an emerging passive imaging mode, is also attracting more and more attention for depth estimation. Light-field imaging is an important branch of computational imaging. A light field is the radiance field in space that simultaneously contains the position and direction information of light rays; compared with traditional imaging modes that record only two-dimensional data, light-field imaging can therefore capture richer image information, and light-field imaging technology opens many new directions for computational imaging.
Currently, light-field imaging uses its special imaging structure to obtain four-dimensional light-field data that contains not only brightness information but also the directional information of light rays, and with its powerful post-processing capability it is widely applied in stereoscopic display, extended depth of field, depth estimation, and other fields. There are three main forms of light-field imaging: microlens array, camera array, and mask. Among them, the microlens-array form, which obtains light-field data through a microlens array placed between the main lens and the sensor, is currently the most common light-field imaging mode.
In addition, along with the rapid development of depth cameras, high-precision 3D shape modeling has become both more practical and more challenging. However, active stereo imaging techniques (e.g., laser scanning, structured light, Kinect) are generally expensive, low in resolution, and limited to indoor environments, while passive stereo imaging techniques (e.g., binocular stereo vision, multi-view stereo (MVS) reconstruction) have very high algorithmic complexity and are time-consuming; 3D shape modeling therefore struggles to achieve high resolution, high precision, real-time performance, practicality, and generality at once. The appearance of commercial light-field cameras (Lytro, Raytrix) brings new development opportunities for 3D stereoscopic display and shape modeling.
At present, the spatial resolution of the commercial Lytro light-field camera is low. Generally, a corresponding white image is matched according to the parameter settings used during shooting, the microlens image is decoded to obtain 4D light-field data, and depth estimation, refocusing, stereoscopic display, and other algorithms are then applied. As a passive imaging technique, a light-field camera performs depth estimation from multiple 2D images, and the accuracy of the computed depth map is low. This differs from active depth acquisition techniques such as Kinect: the depth map obtained by Kinect is globally smooth with small depth-value deviations, whereas depth values estimated from a 4D light field describe texture details finely but cannot be estimated for texture-less, repetitively textured, or weakly textured surfaces, and noisy depth values deviate greatly from the true values.
Shape from shading (SFS), multi-view stereo (MVS), and photometric stereo (PS) are three classical passive stereo imaging techniques. SFS reconstructs shape from the shading cues of a single luminance image; however, in some scenes it cannot determine whether a change in object brightness is caused by changing geometry or by differing reflectance attributes, so in practice SFS algorithms usually assume imaging conditions such as a Lambertian reflector, uniform reflectance, and a distant point light source. MVS reconstructs shape from multiple calibrated 2D images shot from different viewpoints: features are extracted and matched across the images of adjacent views to generate an initial depth map or a sparse 3D point cloud, which is finally optimized into a high-precision shape model; MVS algorithms are therefore computationally complex and time-consuming, their feature extraction and matching are very sensitive to texture, occlusion, illumination, and reflectance changes, and most of them cannot handle all scenes. PS algorithms require a controllable indoor lighting environment: multiple light sources are set up, multiple images are shot, the light-source directions are computed accurately, the surface normal field of the object is computed from the brightness changes across the images, and the shape is then modeled.
Therefore, given the special structure of light-field imaging and the 4D data it acquires, traditional passive stereo imaging techniques cannot be used directly for depth estimation.
There is thus an urgent need for a technique that can reconstruct the shape of a captured target based on a 4D light field, realize stereoscopic light-field display of the captured target, obtain a high-quality depth image, guarantee imaging quality, and help broaden the popularization and application of light-field imaging.
Summary of the invention
In view of this, the object of the present invention is to provide a depth image processing method based on light fields that can reconstruct the shape of a captured target based on a 4D light field, realize stereoscopic light-field display of the captured target, obtain a high-quality depth image, and guarantee imaging quality. It helps broaden the popularization and application of light-field imaging, promotes the development of light-field imaging applications, improves the user's product experience, and is of great practical significance.
To this end, the present invention provides a depth image processing method based on light fields, comprising the steps of:
Step 1: capturing an initial 4D light-field color image and an initial depth image of a preset shooting target with a light-field acquisition device;
Step 2: preprocessing the captured initial 4D light-field color image and initial depth image to obtain an initial 3D mesh model of the preset shooting target and its corresponding initial normal field;
Step 3: analyzing the initial color image and initial normal field of the preset shooting target and computing its surface reflectance;
Step 4: modeling the light-field image of the preset shooting target according to its initial normal field and surface reflectance, obtaining the illumination model of the preset shooting target and the illumination parameters of that model;
Step 5: optimizing the initial normal field of the preset shooting target according to its surface reflectance and the illumination parameters of the illumination model;
Step 6: performing depth enhancement on the initial depth image of the preset shooting target according to the optimized normal field, obtaining a depth-enhanced initial depth image;
Step 7: projecting the depth-enhanced depth image into 3D space and rebuilding the 3D mesh model of the preset shooting target.
Wherein the second step comprises the sub-steps of: establishing a mask over the initial 4D light-field color image and initial depth image to remove the background interference therein; preprocessing the depth image and projecting it into 3D space to obtain the initial 3D mesh model of the preset shooting target; and obtaining the corresponding initial normal field of the preset shooting target based on that initial 3D mesh model.
Wherein the third step comprises the sub-steps of: processing the initial color image of the preset shooting target to obtain the corresponding chromaticity map; applying threshold segmentation to the chromaticity map and extracting its edge-point information; dividing all surface regions of the chromaticity map by reflectance according to that edge-point information or the chromaticity values, and assigning different labels to surface regions with different reflectance; for each surface region with different reflectance, computing its chromaticity mean and judging whether the chromaticity difference between that mean and a preset chromaticity value reaches a preset threshold, thereby judging whether the region is an ambiguous pixel region, and if so, defining it as an ambiguous pixel region and filtering out that surface region based on Euclidean distance; and computing the reflectance of all surface regions of the filtered chromaticity map, finally obtaining the surface reflectance of the preset shooting target.
Wherein dividing all surface regions of the chromaticity map by reflectance according to its edge-point information specifically comprises: for any two pixels in the chromaticity map, judging whether an edge point lies on the line between them; if so, defining them as belonging to surface regions with different reflectance and setting different labels.
Wherein dividing all surface regions of the chromaticity map by reflectance according to its chromaticity values specifically comprises: for any two surface regions in the chromaticity map, judging whether the chromaticity difference between them reaches a preset value; if so, defining them as surface regions with different reflectance and marking different labels.
Wherein, in the fourth step, the light-field image of the preset shooting target is modeled with a preset quadratic function of normal and reflectance, according to the target's corresponding initial normal field and surface reflectance, to obtain the illumination model of the preset shooting target and the illumination parameters of that model.
The formula of the quadratic function is:
I = s(η) = η^T A η + b^T η + c;
η_{x,y} = ρ_{x,y} · n_{x,y};
where η_{x,y} is the product of the reflectance ρ_{x,y} and the unit normal n_{x,y}, and A, b, c are the illumination parameters of the illumination model, computed by a linear least-squares optimization algorithm.
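Since the shading model is linear in the entries of A, b, and c, the linear least-squares fit can be sketched directly in numpy. The synthetic illumination parameters below are invented purely to check that the fit recovers them:

```python
import numpy as np

def design_row(eta):
    # Unknowns: 6 entries of the symmetric A, 3 of b, and c -> 10 parameters.
    x, y, z = eta
    return np.array([x*x, y*y, z*z, 2*x*y, 2*x*z, 2*y*z, x, y, z, 1.0])

def fit_illumination(etas, intensities):
    """Fit I = eta^T A eta + b^T eta + c by ordinary least squares."""
    M = np.array([design_row(e) for e in etas])
    p, *_ = np.linalg.lstsq(M, intensities, rcond=None)
    A = np.array([[p[0], p[3], p[4]],
                  [p[3], p[1], p[5]],
                  [p[4], p[5], p[2]]])
    return A, p[6:9], p[9]

def shade(eta, A, b, c):
    return eta @ A @ eta + b @ eta + c

# Synthetic check: sample eta = reflectance * normal, shade, refit.
rng = np.random.default_rng(0)
A_true = np.array([[0.2, 0.05, 0.0],
                   [0.05, 0.1, 0.02],
                   [0.0, 0.02, 0.3]])
b_true = np.array([0.1, -0.2, 0.8])
c_true = 0.5
etas = rng.normal(size=(200, 3))
I = np.array([shade(e, A_true, b_true, c_true) for e in etas])
A_fit, b_fit, c_fit = fit_illumination(etas, I)
```

With noise-free samples the 10 parameters are recovered exactly up to numerical precision; on real light-field data the same system would simply be solved in the least-squares sense over all foreground pixels.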
Wherein the fifth step comprises the sub-steps of: optimizing the initial normal field with a preset energy function, according to the surface reflectance of the preset shooting target and the illumination parameters of the illumination model, the energy function including a color-image brightness constraint, a local normal smoothness constraint, a normal prior constraint, and a unit-vector constraint; and optimizing this preset energy function with the nonlinear least-squares Levenberg-Marquardt (LM) algorithm to obtain the optimized normal field.
From the above technical solution it can be seen that, compared with the prior art, the present invention provides a depth image processing method based on light fields that can reconstruct the shape of a captured target based on a 4D light field, realize stereoscopic light-field display of the captured target, obtain a high-quality depth image, and guarantee imaging quality. It helps broaden the popularization and application of light-field imaging, promotes the development of light-field imaging applications, improves the user's product experience, and has great significance for production practice.
Brief description of the drawings
Fig. 1 is a flow chart of the depth image processing method based on light fields provided by the invention;
Fig. 2 is the initial color image of the preset shooting target in the method;
Fig. 3 is the initial depth image of the preset shooting target in the method;
Fig. 4 is a schematic diagram of the initial 3D mesh model of the preset shooting target, obtained by smoothing and denoising the depth image;
Fig. 5 is a partial enlargement of the initial 3D mesh model shown in Fig. 4;
Fig. 6 is a schematic diagram of the normal field of the preset shooting target, obtained from its initial 3D mesh model;
Fig. 7 is the normal map of the normal field obtained in Fig. 6;
Fig. 8 is the chromaticity map obtained by processing the initial color image of the preset shooting target;
Fig. 9 is a diagram of the illumination model of the preset shooting target;
Fig. 10 is a schematic diagram of the optimized normal field of the preset shooting target;
Fig. 11 is the normal map of the optimized normal field;
Fig. 12 is a schematic diagram of the final three-dimensional 3D mesh model of the preset shooting target obtained by the method;
Fig. 13 is an enlargement of part I of Fig. 12;
Fig. 14 is an enlargement of part II of Fig. 12.
Specific embodiment
To enable those skilled in the art to better understand the solution of the present invention, the invention is described in further detail below with reference to the accompanying drawings and embodiments.
Referring to Fig. 1, which shows the flow chart of the depth image processing method based on light fields provided by the invention, the method comprises the following steps:
Step S101: capturing an initial 4D light-field color image and an initial depth image of the preset shooting target with a light-field acquisition device;
Referring to Fig. 2 and Fig. 3, which are respectively the initial 4D light-field color image and the initial depth image of the preset shooting target, captured by a light-field acquisition device such as a color image sensor.
It should be noted that the current commercial hand-held light-field cameras are mainly the Lytro and Raytrix cameras; Lytro cameras include the Lytro 1.0 and Lytro Illum, and Raytrix cameras include the R5, R12, R29, R42, and other models. They can be used for light-field image acquisition, depth estimation, refocusing, and stereoscopic imaging of real scenes. Alternatively, a mechanical arm carrying an ordinary camera can simulate the light-field imaging mode through small movements to perform light-field acquisition.
Step S102: preprocessing the captured initial 4D light-field color image and initial depth image to obtain the initial 3D mesh model of the preset shooting target and its corresponding initial normal field;
In the present invention, step S102 specifically comprises the following sub-steps:
Step S1021: establishing a mask over the initial 4D light-field color image and initial depth image and removing the background interference therein (this may also be done manually by the user);
In a specific implementation, saliency detection and segmentation of the image can be performed based on color difference, so that a mask is established over the target object in the initial 4D light-field color image and initial depth image, the background information is deleted, and only the target object is operated on.
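As a toy illustration of masking by color difference, the sketch below keeps pixels whose color deviates from an assumed known background color; the background color and threshold are invented for the example:

```python
import numpy as np

def background_mask(img, bg_color, thresh=0.2):
    """Mark pixels whose color differs from the assumed background color
    by more than `thresh` (Euclidean distance in RGB) as foreground."""
    diff = np.linalg.norm(img - np.asarray(bg_color, float), axis=-1)
    return diff > thresh

# 4x4 toy image: black background with a red 2x2 target in the middle.
img = np.zeros((4, 4, 3))
img[1:3, 1:3] = [1.0, 0.0, 0.0]
mask = background_mask(img, bg_color=(0.0, 0.0, 0.0))
```

A real implementation would replace the fixed background color with a saliency or segmentation result, but the downstream use of the mask is the same: background pixels are excluded from all later steps.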
Step S1022: smoothing and denoising the initial depth image and projecting it into 3D space to obtain the initial 3D mesh model of the preset shooting target (as shown in Fig. 4 and Fig. 5);
In a specific implementation, the initial depth image is smoothed and denoised by mean filtering and bilateral filtering.
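The mean-filtering half of that smoothing step can be sketched as follows (bilateral filtering additionally weights neighbors by intensity difference; only the box filter is shown here):

```python
import numpy as np

def mean_filter(depth, k=3):
    """Box-filter a depth map with edge replication: each pixel becomes
    the mean of its k x k neighborhood."""
    pad = k // 2
    padded = np.pad(depth, pad, mode='edge')
    out = np.zeros_like(depth, dtype=float)
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i+k, j:j+k].mean()
    return out

depth = np.full((5, 5), 2.0)
depth[2, 2] = 5.0                # a single noisy depth spike
smoothed = mean_filter(depth)
```

The spike is pulled back toward its neighborhood while flat regions are untouched, which is exactly the behavior wanted before projecting the depth map to a mesh.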
In a specific implementation, it should be noted that a depth image is generally considered to be 2.5D: the depth value z of the three-dimensional coordinate (x, y, z) is projected into a two-dimensional space and expressed as a gray value from 0 to 255. If the camera parameters are known (e.g., for the Lytro Illum light-field camera), the depth information can be projected into three-dimensional space according to the projection model of the camera to obtain the (x, y, z) coordinates of the target; if the camera parameters cannot be obtained, the depth values are scaled to spatial z values by a preset ratio according to the sizes of the object and the image, approximately expressing the 3D shape of the target object.
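The back-projection with known camera parameters can be sketched with a pinhole model; the intrinsics fx, fy, cx, cy below are placeholder values, not parameters of any real camera:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a depth map to a grid of 3D points with a pinhole model:
    X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.stack((X, Y, depth), axis=-1)

depth = np.full((4, 4), 2.0)      # a flat wall 2 units away
pts = backproject(depth, fx=100.0, fy=100.0, cx=1.5, cy=1.5)
```

Triangulating neighboring grid points of `pts` then yields the initial 3D mesh model; without calibration, `depth` would first be rescaled by the preset ratio described above.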
It should be noted that for the Lytro Illum and other light-field cameras, the camera parameters need to be obtained through camera calibration: multiple chessboard images (10-20) are shot from different angles, and the intrinsic and extrinsic parameters of the camera are computed according to a traditional camera calibration method.
Step S1023: obtaining the corresponding initial normal field of the preset shooting target (i.e., the initial surface normals, as shown in Fig. 6 and Fig. 7; the normal vector is denoted n, and Fig. 7 shows the normal map formed from color values) based on its initial 3D mesh model.
It should be noted that in a three-dimensional mesh model, for each spatial point p(X, Y, Z), the directed vector perpendicular to the tangent plane of the mesh surface at that point is called the normal vector, denoted n. The tangent plane at p is computed from all mesh faces containing the point p, which then yields the normal vector of the point; this ultimately generates the normal field expressing the 3D shape of the object.
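One common way to realize this step, sketched under the assumption that the mesh is still organized as a pixel grid of back-projected points, is to cross neighboring tangent vectors of the grid:

```python
import numpy as np

def normals_from_grid(pts):
    """Per-pixel normals from the cross product of the two tangent vectors
    of a back-projected point grid (computed on the inner grid only)."""
    du = pts[:, 1:] - pts[:, :-1]        # tangent along the image x axis
    dv = pts[1:, :] - pts[:-1, :]        # tangent along the image y axis
    n = np.cross(du[:-1], dv[:, :-1])    # align both on the inner grid
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n

# A flat plane z = 1: every normal should be (0, 0, 1).
ys, xs = np.meshgrid(np.arange(4.0), np.arange(4.0), indexing='ij')
pts = np.stack((xs, ys, np.ones_like(xs)), axis=-1)
n = normals_from_grid(pts)
```

On a general (unstructured) mesh the same idea applies per face, with each vertex normal averaged over the faces containing it, as described in the paragraph above.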
Step S103: analyzing the initial color image and initial normal field of the preset shooting target and computing its surface reflectance (after most ambiguous points are eliminated, the reflectance can accurately express the reflectance attribute of the surface pixels of the preset shooting target);
In the present invention, step S103 specifically comprises the following sub-steps:
Step S1031: processing the initial color image of the preset shooting target (as shown in Fig. 2) to obtain the corresponding chromaticity map (as shown in Fig. 8);
It should be noted that from the chromaticity map it can be found that the pixel chromaticity values of the shadow and inter-reflection regions caused by occlusion are ambiguous and cannot correctly express the reflectance attribute of the preset shooting target. By clustering the chromaticity map with the existing K-means algorithm, the ambiguous pixel regions, i.e., the brightness-variation regions caused by occlusion and inter-reflection, can be found in the chromaticity map corresponding to the initial color image.
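The chromaticity map and its clustering can be sketched as follows; chromaticity is taken as RGB normalized by its sum (so shading intensity cancels), and the tiny deterministic 2-means below, seeded with the first sample and the sample farthest from it, stands in for a full K-means implementation:

```python
import numpy as np

def chromaticity(img):
    """Chromaticity = RGB divided by its channel sum, removing shading."""
    s = img.sum(axis=-1, keepdims=True)
    return np.where(s > 0, img / np.maximum(s, 1e-12), 0.0)

def kmeans2(x, iters=20):
    """Minimal deterministic 2-means on row vectors of x."""
    c0 = x[0]
    c1 = x[np.argmax(((x - c0) ** 2).sum(-1))]
    centers = np.stack((c0, c1))
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([x[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in (0, 1)])
    return labels

# Bright and dark versions of the same red (and blue) surface share a chroma.
img = np.array([[[0.8, 0.1, 0.1], [0.4, 0.05, 0.05]],
                [[0.1, 0.1, 0.8], [0.05, 0.05, 0.4]]])
ch = chromaticity(img).reshape(-1, 3)
labels = kmeans2(ch)
```

Dim and bright pixels of the same material land in the same chroma cluster, which is why cluster outliers flag shadow and inter-reflection regions.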
Step S1032: applying threshold segmentation to the chromaticity map and extracting its edge-point information (the chromaticity values of the shadow regions caused by occlusion and inter-reflection are ambiguous and cannot express the reflectance attribute of the object);
It should be noted that the threshold segmentation of the chromaticity map specifically means: edge detection is applied to the chromaticity map with an edge-detection operator (e.g., the existing Canny or Sobel edge-detection algorithm) to extract the edge pixels in the chromaticity map; scattered edge points can also be consolidated by dilation, so as to extract all edge pixels as far as possible.
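The edge extraction can be sketched with Sobel kernels (a simpler stand-in for the Canny operator named above); the gradient threshold is an assumption:

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Gradient-magnitude edges: convolve with the Sobel x/y kernels and
    threshold the magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i+3, j:j+3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy) > thresh

img = np.zeros((5, 6)); img[:, 3:] = 1.0   # a vertical step in chroma
edges = sobel_edges(img)
```

Run per chromaticity channel and combined with a dilation pass, this yields the edge-point map used for the region division in step S1033.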
Step S1033: dividing all surface regions of the chromaticity map by reflectance according to its edge-point information or chromaticity values, and assigning different labels to surface regions with different reflectance (i.e., different reflectance attributes);
In the present invention, dividing all surface regions of the chromaticity map by reflectance according to its chromaticity values specifically comprises the following steps: for any two surface regions in the chromaticity map, judging whether the chromaticity difference between them reaches a preset value; if so, defining them as surface regions with different reflectance (i.e., different reflectance attributes) and marking different labels.
It should be noted that the labels can be any marks capable of distinguishing two surface regions, such as the letter labels A and B or the numeral labels 1 and 2, or of course other labels.
It should also be noted that, in a specific implementation, dividing all surface regions of the chromaticity map by reflectance according to its edge-point information specifically comprises the following steps: for any two pixels p and q in the chromaticity map, judging whether other edge points (i.e., edge points matching the edge-point information of the chromaticity map) exist on the line between them; if so, defining them as belonging to surface regions with different reflectance (i.e., different reflectance attributes) and marking different labels, since this likewise shows that they belong to surface regions with different reflectance.
In the present invention, all surface regions of the chromaticity map are thus divided by reflectance according to the edge-point information or chromaticity values. Therefore, for any two surface regions in the chromaticity map, if for any pixels p and q of the two regions there exist other edge points on the line between them, the two surface regions they belong to have different reflectance attributes.
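The line-crossing test described above can be sketched by sampling points along the segment between p and q against the edge map:

```python
import numpy as np

def crosses_edge(edge_map, p, q, samples=50):
    """True if the straight segment between pixels p and q passes through
    an edge pixel, in which case p and q are assigned to different
    reflectance regions."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    for t in np.linspace(0.0, 1.0, samples):
        i, j = np.rint(p + t * (q - p)).astype(int)
        if edge_map[i, j]:
            return True
    return False

edge_map = np.zeros((5, 5), bool)
edge_map[:, 2] = True            # a vertical chroma edge through column 2
```

Two pixels on opposite sides of the edge are separated, while two pixels on the same side are not; a production version would rasterize the segment exactly (e.g., Bresenham) instead of sampling.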
Step S1034: for each surface region with different reflectance (i.e., different reflectance attributes), computing its chromaticity mean, and judging whether the chromaticity difference between that mean and a preset chromaticity value reaches a preset threshold, thereby judging whether it is an ambiguous pixel region (i.e., ambiguous pixels); if so, defining it as an ambiguous pixel region and filtering out that surface region based on Euclidean distance;
It should be noted that for each surface region with different reflectance, its chromaticity mean Ch̄ is computed; if the difference between Ch̄ and the preset chromaticity value exceeds τ, the region consists of ambiguous points. Here Ch̄ is the average chromaticity value of the region, Ch_p is the chromaticity value of any point p in the region, and τ is the threshold value, which the user of the invention can preset through experiments.
In a specific implementation, for the present invention, the concrete method of filtering based on Euclidean distance and chroma difference is a normalized weighted mean over a local neighborhood:
I′p = (1/γ) Σ_{q∈Ωp} ωd(p, q) · ωch(p, q) · Iq;
Wherein, Ωp is the local neighborhood of pixel p, Ip is the color intensity value of pixel p, γ is the normalization coefficient, ωd is the spatial distance weight between pixel p and its neighborhood pixel q, i.e., a weight based on the Euclidean distance between dp and dq, ωch is the chroma difference weight between pixel p and its neighborhood pixel q, dp and dq are the spatial positions (coordinate values) of pixels p and q, and Chp and Chq are the chroma values of pixels p and q.
According to the chroma difference between the chroma mean and the default chroma value: if the chroma difference of a surface region is greater than the default chroma threshold, the surface region is judged to consist of ambiguous pixels caused by shadows or inter-reflections.
In a specific implementation, for the present invention, filtering out the ambiguous pixels caused by shadows and inter-reflections is achieved by mean filtering: the correct chroma value of an ambiguous point is computed from the chroma values of its local neighborhood pixels, thereby eliminating the influence of shadows and inter-reflections.
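The neighborhood mean filter described above can be sketched as a joint bilateral weighting. The patent only names the two weights (spatial Euclidean distance and chroma difference); the Gaussian kernel shapes, parameter values, and function name below are assumptions:

```python
import numpy as np

def filter_ambiguous_chroma(chroma, ambiguous_mask, radius=2,
                            sigma_d=2.0, sigma_ch=0.1):
    """Replace the chroma of ambiguous pixels by a normalized weighted
    mean of their local neighborhood, weighting each neighbor q by its
    Euclidean distance to p (omega_d) and by chroma difference
    (omega_ch).  Gaussian kernels are an assumption."""
    h, w = chroma.shape[:2]
    out = chroma.copy()
    for r, c in zip(*np.nonzero(ambiguous_mask)):
        acc = np.zeros(chroma.shape[2])
        gamma = 0.0                       # normalization coefficient
        for dr in range(-radius, radius + 1):
            for dc in range(-radius, radius + 1):
                rr, cc = r + dr, c + dc
                if not (0 <= rr < h and 0 <= cc < w) or ambiguous_mask[rr, cc]:
                    continue              # only trust non-ambiguous neighbors
                w_d = np.exp(-(dr * dr + dc * dc) / (2 * sigma_d ** 2))
                diff = chroma[rr, cc] - chroma[r, c]
                w_ch = np.exp(-np.dot(diff, diff) / (2 * sigma_ch ** 2))
                acc += w_d * w_ch * chroma[rr, cc]
                gamma += w_d * w_ch
        if gamma > 0:
            out[r, c] = acc / gamma       # corrected chroma of the ambiguous point
    return out
```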
Step S1035: calculate the reflectance of all surface regions in the filtered chromaticity map, finally obtaining the surface reflectance of the default photographic subject.
It should be noted that, for the present invention, a well-known preset algorithm (the algorithm of Qifeng Chen and Vladlen Koltun, "A Simple Model for Intrinsic Image Decomposition with Depth Cues," in IEEE International Conference on Computer Vision (ICCV), 2013) can specifically be used to calculate the reflectance of all surface regions in the filtered chromaticity map, so as to finally obtain the surface reflectance of the default photographic subject.
In the present invention, it should be noted that the input data are the central-view sub-image of the light field and its corresponding initial depth map: the initial normal field is obtained from the initial depth map through step S102, and the accurate chromaticity map of the target surface is obtained from the central-view sub-image through step S103. Based on the initial normal field and the chromaticity map, the image is decomposed into its intrinsic content attributes by the preset algorithm (Qifeng Chen and Vladlen Koltun, "A Simple Model for Intrinsic Image Decomposition with Depth Cues," in IEEE International Conference on Computer Vision (ICCV), 2013), so as to extract the reflection attribute Ap of each pixel of the object surface. An energy function comprising a data term and regularization terms is established and solved by linear least squares, finally yielding the reflectance of all surface regions in the filtered chromaticity map. The formula is:
Ereg = ωAEA + ωDED + ωNEN + ωCEC;
Wherein, Ip is the color intensity value of pixel p, Ap is the reflectance, Dp is the direct irradiance, Np is the indirect irradiance, Cp is the illumination color, and ωA, ωD, ωN, ωC are the weights of the corresponding content attributes. The regularization energy functions EA, ED, EN, EC of the corresponding content attributes are defined in terms of αp,q, the chroma difference weight of local neighborhood pixels, Ap, the reflectance of pixel p, dp, the spatial position (coordinate value) of pixel p, Chp, the chroma value of pixel p, and np, the normal vector of pixel p.
Step S104: according to the initial normal field and surface reflectance corresponding to the default photographic subject, model the light field image corresponding to the default photographic subject, and obtain the illumination model that the default photographic subject has (as shown in Figure 9) and the illumination parameters that the illumination model has;
In the present invention, in a specific implementation, illumination modeling is performed on the default photographic subject according to its corresponding initial normal field and surface reflectance. Since light field images are acquired under continuously varying natural illumination conditions, measuring the illumination attributes directly is highly difficult. Therefore, a preset quadratic function of normal direction and reflectance is used to model the light field image (i.e., the light field luminance image) corresponding to the default photographic subject, obtaining the illumination model that the default photographic subject has and the illumination parameters that the illumination model has. The formula of the quadratic function is:
I = s(η) = ηᵀAη + bᵀη + c;
η_{x,y} = ρ_{x,y} · n_{x,y};
Wherein, η_{x,y} is the product of the reflectance ρ_{x,y} and the unit normal n_{x,y}, and A, b, c are the illumination parameters of the illumination model. The illumination parameters are computed by a linear least-squares optimization algorithm, and a globally smooth surface of the default photographic subject can be obtained based on this global illumination model (as shown in Figure 9).
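Because the model I = ηᵀAη + bᵀη + c is quadratic in η but linear in the unknown parameters A, b, c, fitting it by linear least squares reduces to an ordinary least-squares solve over monomial features of η. A sketch under that observation (the function names and the symmetric-A parameterization are assumptions):

```python
import numpy as np

def fit_illumination(eta, intensity):
    """Fit the quadratic shading model I = eta^T A eta + b^T eta + c,
    where eta = reflectance * unit normal, by linear least squares."""
    x, y, z = eta.T
    # 10 monomial features: 6 for symmetric A, 3 for b, 1 for c
    M = np.column_stack([x * x, y * y, z * z, 2 * x * y, 2 * x * z,
                         2 * y * z, x, y, z, np.ones_like(x)])
    p, *_ = np.linalg.lstsq(M, intensity, rcond=None)
    A = np.array([[p[0], p[3], p[4]],
                  [p[3], p[1], p[5]],
                  [p[4], p[5], p[2]]])
    return A, p[6:9], p[9]

def shade(A, b, c, eta):
    """Evaluate the fitted illumination model s(eta) for one pixel."""
    return eta @ A @ eta + b @ eta + c
```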
Step S105: according to the surface reflectance of the default photographic subject and the illumination parameters that the illumination model has, optimize the initial normal field corresponding to the default photographic subject and recover the geometric details of the object surface;
It should be noted that optimizing the normal field with the illumination parameters added makes the surface shape of the finally constructed three-dimensional (3D) shape image of the default photographic subject smooth.
In the present invention, in a specific implementation, the initial normal field corresponding to the default photographic subject is optimized according to the surface reflectance of the default photographic subject and the illumination parameters of the illumination model. It should be noted that the surface reflectance and illumination model of the default photographic subject are computed and optimized based on the initial normal field, but the initial normal field is generated from the initial depth map and contains many noisy and ambiguous values. The surface of the default photographic subject contains many high-frequency local geometric details, and the normal of each point is unique, so an accurate normal field is an essential factor for three-dimensional reconstruction. Therefore, to recover the local geometric details of the target surface, an optimized reconstruction of the normal field is necessary. According to the surface reflectance of the default photographic subject and the illumination parameters that the illumination model has, a minimization energy function is established from constraint terms characterizing illumination consistency, local smoothness, initial prior knowledge, and the unit-vector property, and the initial normal field corresponding to the default photographic subject is optimized.
For the present invention, the step S105 specifically includes the following steps:
Step S1051: according to the surface reflectance of the default photographic subject and the illumination parameters that the illumination model has, optimize the normal of each pixel of the surface of the default photographic subject with a preset energy function, i.e., optimize the initial normal field (as shown in Figure 10 and Figure 11);
Step S1052: using the nonlinear least-squares Levenberg-Marquardt (LM) optimization algorithm, solve the optimization of the preset energy function to obtain the optimized normal field (as shown in Figure 12 and Figure 13).
For the present invention, it should be noted that light field images are acquired under natural illumination conditions, where measuring the illumination attributes is highly difficult and the illumination environment varies. The present invention therefore models the light field luminance image with a quadratic shading function of the normal n and the reflectance ρ, solves for the estimated global illumination parameters with the least-squares LM optimization algorithm, and, specifically, optimizes the normal of each pixel of the object surface by minimizing an energy function whose inputs are the reflectance ρ and the illumination model parameters.
Wherein, the preset energy function E(n) includes an image brightness constraint Ei(n), a local normal smoothness constraint Esh(n), an initial normal constraint Er(n), and a unit-vector constraint Eu(n). The concrete form of the preset energy function is as follows:
E(n) = λiEi(n) + λshEsh(n) + λrEr(n) + λuEu(n);
It should be noted that, in the above, Ip is the color intensity value of pixel p and s(ηp) is the brightness value output by the illumination model for pixel p; Ei(n) is the consistency constraint between the image brightness output by the illumination model and the true brightness value. np is the normal vector of pixel p, and Esh(n) is the smoothness constraint over local neighborhoods of the surface of the default photographic subject. Er(n) is the consistency constraint between the optimized normal and the initial normal of pixel p, and Eu(n) is the constraint that the optimized normal vector must be a unit vector.
Step S106: according to the normal field obtained by optimization, perform depth enhancement on the initial depth image of the default photographic subject to obtain the final depth-enhanced initial depth image (i.e., the present invention enhances the initial depth map using the high-precision optimized normals);
In the present invention, in a specific implementation, the initial depth map can be enhanced based on the optimized normals to obtain a high-quality depth map, after which geometric three-dimensional mesh reconstruction is performed. Depth enhancement is carried out using a preset algorithm (the algorithm of Diego Nehab, Szymon Rusinkiewicz, James Davis, and Ravi Ramamoorthi, "Efficiently Combining Positions and Normals for Precise 3D Geometry," in ACM Transactions on Graphics (TOG), 2005). The specific algorithm is as follows:
Spatial coordinates and local information are combined with the normal field for mutual enhancement, and a high-accuracy depth map is obtained by solving an energy function with the weighted least-squares method. The formula is defined as follows:
Wherein, Ep is the energy function of the spatial coordinates, in which P(x, y) is the three-dimensional spatial coordinate of image-plane pixel (x, y), Z(x, y) is the depth value, fx and fy are the focal lengths of the camera, Pi is the optimized spatial coordinate, and the measured spatial coordinate is obtained from the initial depth map; En is the energy function of the normal field, Tx and Ty are the surface tangents at pixel (x, y) of the default photographic subject, and the correct normal vector corresponds to the optimized spatial coordinate Pi.
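A one-dimensional analogue of this position/normal fusion shows the weighted least-squares structure: depth measurements anchor the low-frequency shape while normal-derived slopes restore high-frequency detail. The closed-form normal-equation solve and the weight λ below are illustrative assumptions, not the cited paper's exact formulation:

```python
import numpy as np

def combine_depth_and_normals(z_meas, slopes, lam=0.1):
    """1-D analogue of position/normal fusion: solve
        min  lam * ||z - z_meas||^2 + ||D z - slopes||^2
    where D takes forward differences, via the normal equations."""
    n = len(z_meas)
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx], D[idx, idx + 1] = -1.0, 1.0   # forward differences
    A = lam * np.eye(n) + D.T @ D
    rhs = lam * z_meas + D.T @ slopes
    return np.linalg.solve(A, rhs)
```

With a small λ the refined depths follow the slope (normal) field closely; with a large λ they stay near the measured depths.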
Step S107: according to the depth-enhanced initial depth image, project into 3D space and reconstruct the 3D mesh model of the default photographic subject (as shown in Figure 14). Thus, the present invention can realize light-field-imaging stereoscopic display of the default photographic subject, obtain a high-quality depth image, guarantee the quality of imaging, and help expand the popularization and application range of light field imaging.
It should be noted that, for the present invention, the well-known algorithm (Diego Nehab, Szymon Rusinkiewicz, James Davis, and Ravi Ramamoorthi, "Efficiently Combining Positions and Normals for Precise 3D Geometry," in ACM Transactions on Graphics (TOG), 2005) can specifically be used to perform depth enhancement on the initial depth image of the default photographic subject according to the normal field obtained by optimization, so as to obtain the final, high-quality depth-enhanced initial depth image.
In the present invention, it should be noted that the depth-enhanced initial depth image is projected into 3D space according to the projection model from two-dimensional space to three-dimensional space. The formula is as follows:
Wherein, (x, y) is the image-plane coordinate of the default photographic subject, (X, Y, Z) is the coordinate of the surface of the default photographic subject in 3D space, the remaining parameters are the focal lengths and center coordinates of the camera respectively, and R and T are the rotation and translation matrices of the projective transformation respectively.
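The 2D-to-3D projection can be sketched with the standard pinhole back-projection; since the patent's own formula is elided here, the symbols fx, fy, cx, cy and the function below follow the conventional model and are assumptions:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy, R=np.eye(3), T=np.zeros(3)):
    """Project a depth map into 3D with the standard pinhole model:
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth, then apply
    the rigid transform R, T.  Returns an (h*w, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    pts = np.stack([X, Y, depth], axis=-1).reshape(-1, 3)
    return pts @ R.T + T
```

The resulting point cloud is what the subsequent 3D mesh reconstruction operates on.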
In conclusion compared with prior art, the depth image processing method based on light field that the present invention provides a kind of,
It can be based on 4D light field, rebuild the shape of captured target, realize and carry out optical field imaging stereoscopic display to captured target, obtain
The depth image of high quality is obtained, and guarantees the quality of imaging, facilitates the popularization and application range for expanding optical field imaging, promotes light
Field imaging applications development, is conducive to the product use feeling for improving user, is of great practical significance.
The above is only a preferred embodiment of the present invention, it is noted that for the ordinary skill people of the art
For member, various improvements and modifications may be made without departing from the principle of the present invention, these improvements and modifications are also answered
It is considered as protection scope of the present invention.
Claims (7)
1. A depth image processing method based on light fields, characterized by comprising the steps of:
Step 1: shooting with a light field acquisition device to obtain an initial 4D light field color image and an initial depth image of a default photographic subject;
Step 2: preprocessing the obtained initial 4D light field color image and initial depth image of the default photographic subject to obtain an initial 3D mesh model of the default photographic subject and a corresponding initial normal field;
Step 3: analyzing and calculating, according to the initial color image and the initial normal field of the default photographic subject, the surface reflectance of the default photographic subject;
Step 4: modeling, according to the initial normal field and surface reflectance corresponding to the default photographic subject, the light field image corresponding to the default photographic subject, and obtaining the illumination model that the default photographic subject has and the illumination parameters that the illumination model has;
Step 5: optimizing, according to the surface reflectance of the default photographic subject and the illumination parameters that the illumination model has, the initial normal field corresponding to the default photographic subject;
Step 6: performing, according to the normal field obtained by optimization, depth enhancement on the initial depth image of the default photographic subject to obtain a depth-enhanced initial depth image;
Step 7: projecting into 3D space according to the depth-enhanced initial depth image, and reconstructing the 3D mesh model of the default photographic subject.
2. The method according to claim 1, characterized in that the second step includes the following sub-steps:
establishing a mask on the initial 4D light field color image and initial depth image to remove background interference therein;
preprocessing the depth image and projecting it into 3D space to obtain the initial 3D mesh model of the default photographic subject;
obtaining, based on the initial 3D mesh model of the default photographic subject, the initial normal field corresponding to the default photographic subject.
3. The method according to claim 1, characterized in that the third step includes the following sub-steps:
processing the initial color image of the default photographic subject to obtain a corresponding chromaticity map;
performing threshold segmentation on the chromaticity map to extract the edge-point information that the chromaticity map has;
dividing all surface regions contained in the chromaticity map by reflectance according to the edge-point information or the chromaticity values that the chromaticity map has, and establishing different labels for surface regions with different reflectance;
for each surface region with different reflectance, separately computing its chroma mean, and judging whether it is an ambiguous pixel region by judging whether the chroma difference between the chroma mean and the default chroma value reaches a preset threshold; if so, defining it as an ambiguous pixel region and, based on Euclidean distance, filtering out the surface regions that are ambiguous pixel regions;
calculating the reflectance of all surface regions in the filtered chromaticity map to finally obtain the surface reflectance of the default photographic subject.
4. The method according to claim 3, characterized in that the operation of dividing all surface regions of the chromaticity map by reflectance according to the edge-point information that the chromaticity map has specifically includes the following step:
for any two pixels in the chromaticity map, judging whether an edge point exists on the line between them; if so, defining them as belonging to surface regions with different reflectance and setting different labels.
5. The method according to claim 3, characterized in that the operation of dividing all surface regions contained in the chromaticity map by reflectance according to the chromaticity values that the chromaticity map has specifically includes the following step:
for any two surface regions in the chromaticity map, judging whether the chroma difference between them reaches a preset value; if so, defining them as surface regions with different reflectance and marking different labels.
6. The method according to claim 4, characterized in that, in the fourth step, according to the initial normal field and surface reflectance corresponding to the default photographic subject, the light field image of the default photographic subject is modeled with a preset quadratic function of normal direction and reflectance, obtaining the illumination model that the default photographic subject has and the illumination parameters that the illumination model has;
the formula of the quadratic function is:
I = s(η) = ηᵀAη + bᵀη + c;
η_{x,y} = ρ_{x,y} · n_{x,y};
wherein η_{x,y} is the product of the reflectance ρ_{x,y} and the unit normal n_{x,y}, A, b, c are the illumination parameters of the illumination model, and the illumination parameters are computed by a linear least-squares optimization algorithm.
7. The method according to any one of claims 1 to 6, characterized in that the fifth step includes the following sub-steps:
optimizing the initial normal field with a preset energy function according to the surface reflectance of the default photographic subject and the illumination parameters that the illumination model has, the energy function including a color image brightness constraint, a local normal smoothness constraint, a normal prior constraint, and a unit-vector constraint;
using the nonlinear least-squares Levenberg-Marquardt (LM) optimization algorithm to solve the optimization of the preset energy function and obtain the optimized normal field.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610541262.8A CN106228507B (en) | 2016-07-11 | 2016-07-11 | A kind of depth image processing method based on light field |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106228507A CN106228507A (en) | 2016-12-14 |
CN106228507B true CN106228507B (en) | 2019-06-25 |
Family
ID=57519550
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610541262.8A Active CN106228507B (en) | 2016-07-11 | 2016-07-11 | A kind of depth image processing method based on light field |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106228507B (en) |
Families Citing this family (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780705B (en) * | 2016-12-20 | 2020-10-16 | 南阳师范学院 | Depth map robust smooth filtering method suitable for DIBR preprocessing process |
EP3343506A1 (en) * | 2016-12-28 | 2018-07-04 | Thomson Licensing | Method and device for joint segmentation and 3d reconstruction of a scene |
CN107228625B (en) * | 2017-06-01 | 2023-04-18 | 深度创新科技(深圳)有限公司 | Three-dimensional reconstruction method, device and equipment |
CN109427086A (en) * | 2017-08-22 | 2019-03-05 | 上海荆虹电子科技有限公司 | 3-dimensional image creation device and method |
US10776995B2 (en) * | 2017-10-17 | 2020-09-15 | Nvidia Corporation | Light fields as better backgrounds in rendering |
CN108805921B (en) * | 2018-04-09 | 2021-07-06 | 奥比中光科技集团股份有限公司 | Image acquisition system and method |
CN109146934A (en) * | 2018-06-04 | 2019-01-04 | 成都通甲优博科技有限责任公司 | A kind of face three-dimensional rebuilding method and system based on binocular solid and photometric stereo |
CN109087347B (en) * | 2018-08-15 | 2021-08-17 | 浙江光珀智能科技有限公司 | Image processing method and device |
CN109166176B (en) * | 2018-08-23 | 2020-07-07 | 百度在线网络技术(北京)有限公司 | Three-dimensional face image generation method and device |
CN111080689B (en) * | 2018-10-22 | 2023-04-14 | 杭州海康威视数字技术股份有限公司 | Method and device for determining face depth map |
EP3671645A1 (en) * | 2018-12-20 | 2020-06-24 | Carl Zeiss Vision International GmbH | Method and device for creating a 3d reconstruction of an object |
CN110417990B (en) * | 2019-03-25 | 2020-07-24 | 浙江麦知网络科技有限公司 | APP starting system based on target analysis |
CN109974625B (en) * | 2019-04-08 | 2021-02-09 | 四川大学 | Color object structured light three-dimensional measurement method based on hue optimization gray scale |
CN110471061A (en) * | 2019-07-16 | 2019-11-19 | 青岛擎鹰信息科技有限责任公司 | A kind of emulation mode and its system for realizing airborne synthetic aperture radar imaging |
CN110455815B (en) * | 2019-09-05 | 2023-03-24 | 西安多维机器视觉检测技术有限公司 | Method and system for detecting appearance defects of electronic components |
CN110686652B (en) * | 2019-09-16 | 2021-07-06 | 武汉科技大学 | Depth measurement method based on combination of depth learning and structured light |
CN111147745B (en) * | 2019-12-30 | 2021-11-30 | 维沃移动通信有限公司 | Shooting method, shooting device, electronic equipment and storage medium |
CN111207762B (en) * | 2019-12-31 | 2021-12-07 | 深圳一清创新科技有限公司 | Map generation method and device, computer equipment and storage medium |
CN111343444B (en) * | 2020-02-10 | 2021-09-17 | 清华大学 | Three-dimensional image generation method and device |
CN113538552B (en) * | 2020-02-17 | 2024-03-22 | 天目爱视(北京)科技有限公司 | 3D information synthetic image matching method based on image sorting |
CN113052970B (en) * | 2021-04-09 | 2023-10-13 | 杭州群核信息技术有限公司 | Design method, device and system for light intensity and color of lamplight and storage medium |
CN113436325B (en) * | 2021-07-30 | 2023-07-28 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN113989473B (en) * | 2021-12-23 | 2022-08-12 | 北京天图万境科技有限公司 | Method and device for relighting |
CN116109520B (en) * | 2023-04-06 | 2023-07-04 | 南京信息工程大学 | Depth image optimization method based on ray tracing algorithm |
CN116447978B (en) * | 2023-06-16 | 2023-10-31 | 先临三维科技股份有限公司 | Hole site information detection method, device, equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104966289A (en) * | 2015-06-12 | 2015-10-07 | 北京工业大学 | Depth estimation method based on 4D light field |
CN105357515A (en) * | 2015-12-18 | 2016-02-24 | 天津中科智能识别产业技术研究院有限公司 | Color and depth imaging method and device based on structured light and light-field imaging |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9524556B2 (en) * | 2014-05-20 | 2016-12-20 | Nokia Technologies Oy | Method, apparatus and computer program product for depth estimation |
2016-07-11: CN application CN201610541262.8A filed, patent CN106228507B granted (active)
Non-Patent Citations (1)
Title |
---|
Light field imaging technology and its application in computer vision; Zhang Chi et al.; Journal of Image and Graphics; 2016-03-31; Vol. 21, No. 3; pp. 263-281
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |
| CP03 | Change of name, title or address | Address after: 300457 unit 1001, block 1, msd-g1, TEDA, No.57, 2nd Street, Binhai New Area Economic and Technological Development Zone, Tianjin; Patentee after: Tianjin Zhongke intelligent identification Co.,Ltd.; Address before: 300457 No. 57, Second Avenue, Economic and Technological Development Zone, Binhai New Area, Tianjin; Patentee before: TIANJIN ZHONGKE INTELLIGENT IDENTIFICATION INDUSTRY TECHNOLOGY RESEARCH INSTITUTE Co.,Ltd. |