CN106228507A - Depth image processing method based on a light field - Google Patents
Depth image processing method based on a light field
- Publication number
- CN106228507A CN106228507A CN201610541262.8A CN201610541262A CN106228507A CN 106228507 A CN106228507 A CN 106228507A CN 201610541262 A CN201610541262 A CN 201610541262A CN 106228507 A CN106228507 A CN 106228507A
- Authority
- CN
- China
- Prior art keywords
- photographic subjects
- image
- normal direction
- initial
- field
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 23
- 238000005286 illumination Methods 0.000 claims abstract description 60
- 238000002310 reflectometry Methods 0.000 claims abstract description 41
- 230000003287 optical effect Effects 0.000 claims abstract description 27
- 230000002708 enhancing effect Effects 0.000 claims abstract description 6
- 238000010586 diagram Methods 0.000 claims description 56
- 238000000034 method Methods 0.000 claims description 20
- 238000002372 labelling Methods 0.000 claims description 16
- 238000005457 optimization Methods 0.000 claims description 11
- 238000001914 filtration Methods 0.000 claims description 10
- 238000012887 quadratic function Methods 0.000 claims description 6
- 230000011218 segmentation Effects 0.000 claims description 6
- 239000000284 extract Substances 0.000 claims description 5
- 238000003384 imaging method Methods 0.000 abstract description 42
- 238000005516 engineering process Methods 0.000 description 10
- 230000004438 eyesight Effects 0.000 description 8
- 238000011161 development Methods 0.000 description 6
- 230000018109 developmental process Effects 0.000 description 6
- 230000008569 process Effects 0.000 description 6
- 230000000007 visual effect Effects 0.000 description 6
- 238000001514 detection method Methods 0.000 description 5
- 230000008859 change Effects 0.000 description 4
- 238000000354 decomposition reaction Methods 0.000 description 3
- 230000000903 blocking effect Effects 0.000 description 2
- 238000000605 extraction Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 230000008901 benefit Effects 0.000 description 1
- 230000002146 bilateral effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000007796 conventional method Methods 0.000 description 1
- 230000008878 coupling Effects 0.000 description 1
- 238000010168 coupling process Methods 0.000 description 1
- 238000005859 coupling reaction Methods 0.000 description 1
- 238000003708 edge detection Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 238000009499 grossing Methods 0.000 description 1
- 230000002452 interceptive effect Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 230000013011 mating Effects 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 230000000149 penetrating effect Effects 0.000 description 1
- 230000005855 radiation Effects 0.000 description 1
- 238000012958 reprocessing Methods 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000017105 transposition Effects 0.000 description 1
Classifications
-
- G06T3/06—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
Abstract
The invention discloses a depth image processing method based on a light field, comprising the steps of: capturing an initial 4D light-field color image and an initial depth image of a target object with a light-field acquisition device; preprocessing them to obtain an initial 3D mesh model of the target object and the corresponding initial normal field; analyzing and computing the surface reflectance of the target object; modeling the light-field image of the target object from its initial normal field and surface reflectance to obtain an illumination model and its illumination parameters; optimizing the initial normal field of the target object using its surface reflectance and the illumination parameters of the illumination model; and, after applying depth enhancement to the initial depth image according to the optimized normal field, reconstructing the 3D mesh model of the target object. Based on a 4D light field, the present invention can reconstruct the shape of the captured target, realize light-field stereoscopic display of the captured target, and obtain a high-quality depth image.
Description
Technical field
The present invention relates to the technical fields of light-field imaging, image processing, and computer vision, and in particular to a depth image processing method based on a light field.
Background technology
At present, with the development of science and technology, three-dimensional scene information provides more possibilities for computer vision applications such as image segmentation, target detection, and object tracking. Compared with a two-dimensional image, a depth image carries three-dimensional feature information of an object, i.e. depth information, so depth images are widely used as a general representation of three-dimensional scene information. Consequently, using imaging devices that capture color and depth information simultaneously to detect and recognize three-dimensional objects will become a new focus of the computer vision field, and the acquisition of depth images is a key technology therein.
In computer vision systems, methods for obtaining depth images fall into two classes: passive and active. Passive methods mainly rely on ambient illumination for imaging; the conventional approach is binocular stereo vision. Light-field imaging, an emerging passive imaging mode, has also received increasing attention for depth estimation. Light-field imaging is an important branch of computational imaging. A light field is a radiance field that simultaneously contains the positional and directional information of light in space; compared with traditional imaging, which records only two-dimensional data, light-field imaging captures far richer image information. Light-field imaging technology therefore offers many new directions for computational imaging.
Light-field imaging uses its special imaging structure to obtain four-dimensional light-field data, which contains not only brightness information but also the directional information of light rays. With its powerful post-processing capability, it is widely used in stereoscopic display, extended depth of field, depth estimation, and other fields. Light-field imaging mainly takes three forms: microlens array, camera array, and mask. Among these, the microlens-array form, which obtains light-field data through a microlens array placed between the main lens and the sensor, is currently the most common light-field imaging mode.
In addition, with the rapid development of depth cameras, high-precision 3D shape modeling has become both more practical and more challenging. However, active stereo imaging techniques (such as laser scanning, structured light, and Kinect) are generally expensive, low in resolution, and limited to indoor environments, while passive stereo imaging techniques (such as binocular stereo vision and multi-view stereo reconstruction, MVS) have high algorithmic complexity and long running times. 3D shape modeling therefore struggles to achieve high resolution, high precision, real-time performance, practicality, and universality all at once. The appearance of commercial light-field cameras (Lytro, Raytrix) brings new development opportunities for 3D stereoscopic display and shape modeling.
At present, the commercial Lytro light-field camera has relatively low spatial resolution. Typically, the parameters set during shooting are matched against a corresponding white image to decode the microlens image and obtain the 4D light-field data, which is then processed by depth estimation, refocusing, stereoscopic display, and other algorithms. As a passive imaging technology, the light-field camera performs depth estimation from multiple 2D images, so the accuracy of the computed depth map is relatively low. This differs from active depth acquisition technologies such as Kinect: the depth map obtained by Kinect is globally smooth with small depth-value deviations, whereas depth values estimated from a 4D light field describe fine texture detail well but cannot produce depth estimates for texture-free, repetitively textured, or weakly textured surfaces, and noisy depth values deviate greatly from the true values.
Shape from shading (SFS), multi-view stereo reconstruction (MVS), and photometric stereo (PS) are three classical passive stereo imaging techniques. SFS reconstructs shape from the shading cues of a single luminance image; however, in some scenes it cannot determine whether a change in object brightness is caused by geometry or by differing reflectance properties, so SFS algorithms in practice usually assume a Lambertian reflector, uniform reflectance, a distant point light source, and other unified imaging conditions. MVS reconstructs shape from multiple calibrated 2D images shot from different viewpoints: features are extracted and matched across the images of adjacent views to generate an initial depth map or a sparse 3D point cloud, which is then optimized into a high-precision shape model. MVS algorithms are therefore computationally complex and time-consuming; feature extraction and matching are very sensitive to texture, occlusion, illumination, and changes in reflectance properties, and most such algorithms cannot be applied to all scenes. PS algorithms require multiple light sources arranged under a controlled indoor lighting environment: multiple images are shot, the light-source directions are computed accurately, and the surface normal field is calculated from the brightness variations across the images to model the shape.
Therefore, given the special structure of light-field imaging and the 4D data it acquires, traditional passive stereo imaging techniques cannot be applied directly for depth estimation.
There is thus an urgent need for a technology that, based on a 4D light field, can reconstruct the shape of the captured target, realize light-field stereoscopic display of the captured target, obtain a high-quality depth image while guaranteeing imaging quality, and help broaden the application scope of light-field imaging.
Summary of the invention
In view of this, an object of the present invention is to provide a depth image processing method based on a light field which, from a 4D light field, can reconstruct the shape of the captured target, realize light-field stereoscopic display of the captured target, and obtain a high-quality depth image while guaranteeing imaging quality. It helps broaden the application scope of light-field imaging, promotes the development of light-field imaging applications, improves the user experience of related products, and is of great practical significance.
To this end, the invention provides a depth image processing method based on a light field, comprising the steps of:
First step: capture an initial 4D light-field color image and an initial depth image of the target object with a light-field acquisition device;
Second step: preprocess the acquired initial 4D light-field color image and initial depth image to obtain an initial 3D mesh model of the target object and the corresponding initial normal field;
Third step: from the initial color image and initial normal field of the target object, analyze and compute the surface reflectance of the target object;
Fourth step: according to the initial normal field and surface reflectance of the target object, model its corresponding light-field image to obtain the illumination model of the target object and the illumination parameters of that model;
Fifth step: according to the surface reflectance of the target object and the illumination parameters of the illumination model, optimize the initial normal field of the target object;
Sixth step: according to the optimized normal field, apply depth enhancement to the initial depth image of the target object to obtain a depth-enhanced depth image;
Seventh step: project the depth-enhanced depth image into 3D space and reconstruct the 3D mesh model of the target object.
The second step comprises the following sub-steps:
build a mask over the initial 4D light-field color image and initial depth image and remove the background interference therein;
preprocess the depth image and project it into 3D space to obtain the initial 3D mesh model of the target object;
based on the initial 3D mesh model of the target object, obtain the corresponding initial normal field.
The third step comprises the following sub-steps:
process the initial color image of the target object to obtain the corresponding chromaticity map;
apply threshold segmentation to the chromaticity map and extract its edge-point information;
according to the edge-point information or chromaticity values of the chromaticity map, divide all surface regions of the chromaticity map by reflectance, assigning different labels to surface regions with different reflectance;
for each surface region with a distinct reflectance, compute its chromaticity mean, and by judging whether the chromaticity difference between this mean and a preset chromaticity value reaches a preset threshold, decide whether the region is an ambiguous pixel region; if so, mark it as an ambiguous pixel region and filter it out based on Euclidean distance;
compute the reflectance of all surface regions of the filtered chromaticity map to finally obtain the surface reflectance of the target object.
The operation of dividing all surface regions of the chromaticity map by reflectance according to its edge-point information specifically comprises the following step:
for any two pixels in the chromaticity map, judge whether an edge point lies on the line segment between them; if so, define them as belonging to surface regions with different reflectance and assign different labels.
The operation of dividing all surface regions of the chromaticity map by reflectance according to its chromaticity values specifically comprises the following step:
for any two surface regions in the chromaticity map, judge whether the chromaticity difference between them reaches a preset value; if so, define them as surface regions with different reflectance and assign different labels.
In the fourth step, according to the initial normal field and surface reflectance of the target object, the light-field image of the target object is modeled with a preset quadratic function of normal direction and reflectance to obtain the illumination model of the target object and the illumination parameters of that model.
The formula of the quadratic function is:
I = s(η) = η^T A η + b^T η + c;
η_{x,y} = ρ_{x,y} · n_{x,y};
where η_{x,y} is the product of the reflectance ρ_{x,y} and the unit normal n_{x,y}, and A, b, c are the illumination parameters of the illumination model, computed by a linear least-squares optimization algorithm.
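Because I = η^T A η + b^T η + c is linear in the entries of A, b, and c, the illumination parameters can indeed be recovered by ordinary linear least squares. A minimal sketch (the 10-parameter symmetric-A layout is an assumption consistent with the formula; column ordering and helper names are illustrative):

```python
import numpy as np

def fit_quadratic_illumination(eta, intensity):
    """Fit I = eta^T A eta + b^T eta + c by linear least squares.

    eta: (N, 3) albedo-scaled normals; intensity: (N,) observed brightness.
    Returns (A, b, c) with A a symmetric 3x3 matrix (10 unknowns in total).
    """
    x, y, z = eta[:, 0], eta[:, 1], eta[:, 2]
    # Columns for the symmetric A: x^2, y^2, z^2, 2xy, 2xz, 2yz; then b; then c.
    M = np.column_stack([x * x, y * y, z * z, 2 * x * y, 2 * x * z, 2 * y * z,
                         x, y, z, np.ones_like(x)])
    p, *_ = np.linalg.lstsq(M, intensity, rcond=None)
    A = np.array([[p[0], p[3], p[4]],
                  [p[3], p[1], p[5]],
                  [p[4], p[5], p[2]]])
    return A, p[6:9], p[9]
```

With noiseless samples the parameters are recovered exactly, since the system is linear and overdetermined.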
The fifth step comprises the following sub-steps:
according to the surface reflectance of the target object and the illumination parameters of the illumination model, optimize the initial normal field with a preset energy function comprising a color-image brightness constraint, a local normal smoothness constraint, a normal prior constraint, and a unit-vector constraint;
solve the preset energy function with the nonlinear least-squares Levenberg-Marquardt (LM) optimization algorithm to obtain the optimized normal field.
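The four energy terms can be stacked into a residual vector and minimized with LM. The sketch below optimizes a single pixel's normal with a tiny self-contained LM solver; the term weights, the finite-difference Jacobian, and the toy lighting parameters are all illustrative assumptions, not the patent's actual settings:

```python
import numpy as np

def shading(eta, A, b, c):
    """Second-order lighting model I = eta^T A eta + b^T eta + c."""
    return eta @ A @ eta + b @ eta + c

def residuals(n, I_obs, rho, n_avg, n_prior, A, b, c, w=(1.0, 0.5, 0.2, 1.0)):
    """Stack the four constraints named in the text for one pixel:
    brightness, local smoothness, normal prior, and unit-vector."""
    return np.concatenate([
        [w[0] * (I_obs - shading(rho * n, A, b, c))],  # brightness constraint
        w[1] * (n - n_avg),                            # smoothness vs. neighbor average
        w[2] * (n - n_prior),                          # prior from the initial normal field
        [w[3] * (np.linalg.norm(n) - 1.0)],            # unit-vector constraint
    ])

def levenberg_marquardt(fun, x0, iters=100, lam=1e-3):
    """Minimal LM solver with a forward-difference Jacobian."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        r = fun(x)
        J = np.empty((r.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = 1e-6
            J[:, j] = (fun(x + dx) - r) / 1e-6
        H = J.T @ J + lam * np.eye(x.size)             # damped normal equations
        step = np.linalg.solve(H, -J.T @ r)
        if np.linalg.norm(fun(x + step)) < np.linalg.norm(r):
            x, lam = x + step, lam * 0.5               # accept step, trust more
        else:
            lam *= 10.0                                # reject step, damp harder
    return x
```

In a full implementation the unknown vector would hold every pixel's normal and the smoothness term would couple neighboring pixels, but the residual structure is the same.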
As can be seen from the technical solution above, compared with the prior art, the invention provides a depth image processing method based on a light field which, from a 4D light field, can reconstruct the shape of the captured target, realize light-field stereoscopic display of the captured target, obtain a high-quality depth image, and guarantee imaging quality. It helps broaden the application scope of light-field imaging, promotes the development of light-field imaging applications, improves the user experience of related products, and is of great practical significance for production.
Brief description of the drawings
Fig. 1 is a flow chart of the depth image processing method based on a light field provided by the present invention;
Fig. 2 is the initial color image of the target object in the method provided by the present invention;
Fig. 3 is the initial depth image of the target object in the method provided by the present invention;
Fig. 4 is a schematic diagram of the initial 3D mesh model of the target object, obtained by smoothing and denoising the depth image;
Fig. 5 is a close-up view of the initial 3D mesh model of the target object shown in Fig. 4;
Fig. 6 is a schematic diagram of the normal field of the target object, obtained from its initial 3D mesh model;
Fig. 7 is the normal map of the normal field of the target object shown in Fig. 6;
Fig. 8 is the chromaticity map obtained by processing the initial color image of the target object;
Fig. 9 shows the illumination model of the target object;
Fig. 10 is a schematic diagram of the optimized normal field of the target object;
Fig. 11 is the normal map of the optimized normal field of the target object;
Fig. 12 is a schematic diagram of the finally obtained 3D mesh model of the target object;
Fig. 13 is an enlarged view of part I in Fig. 12;
Fig. 14 is an enlarged view of part II in Fig. 12.
Detailed description of the invention
To help those skilled in the art better understand the solution of the present invention, the invention is described in further detail below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flow chart of the depth image processing method based on a light field provided by the present invention. Referring to Fig. 1, the method comprises the following steps:
Step S101: capture an initial 4D light-field color image and an initial depth image of the target object with a light-field acquisition device.
Referring to Fig. 2 and Fig. 3, these show, respectively, the initial 4D light-field color image and the initial depth image of the target object captured by a light-field acquisition device such as a color image sensor.
It should be noted that the current commercial hand-held light-field cameras are mainly the Lytro and Raytrix cameras. Lytro cameras include the Lytro 1.0 and Lytro Illum; Raytrix cameras include the R5, R12, R29, R42, and other models. They can be used for light-field image acquisition of real scenes, depth estimation, refocusing, stereoscopic imaging, and so on. Alternatively, an ordinary camera mounted on a mechanical arm can simulate the light-field imaging mode through small movements to perform light-field acquisition.
Step S102: preprocess the acquired initial 4D light-field color image and initial depth image to obtain an initial 3D mesh model of the target object and the corresponding initial normal field.
In the present invention, step S102 specifically comprises the following sub-steps:
Step S1021: build a mask over the initial 4D light-field color image and initial depth image and remove the background interference therein (this can also be done manually by the user).
In a specific implementation, saliency detection and segmentation of the image can be performed based on color differences, so that a mask is built over the target object (i.e. over the initial 4D light-field color image and initial depth image), the background information is deleted, and only the target object is operated on.
Step S1022: smooth and denoise the initial depth image and project it into 3D space to obtain the initial 3D mesh model of the target object (as shown in Fig. 4 and Fig. 5).
In a specific implementation, the initial depth image is smoothed and denoised by mean filtering and bilateral filtering.
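The bilateral-filtering stage above can be sketched as follows. This is a minimal NumPy implementation of a standard bilateral filter on a depth map; the window radius and the Gaussian sigmas are illustrative, not values from the patent:

```python
import numpy as np

def bilateral_filter_depth(depth, radius=2, sigma_s=2.0, sigma_r=8.0):
    """Edge-preserving denoising of a depth map with Gaussian spatial
    and range (depth-difference) kernels."""
    h, w = depth.shape
    out = np.empty((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))  # spatial weights
    pad = np.pad(depth, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weights: pixels at a very different depth barely contribute,
            # which is what preserves depth discontinuities.
            rng_w = np.exp(-(win - depth[i, j])**2 / (2.0 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out
```

A flat region is smoothed toward its mean while a sharp depth edge stays sharp, which is why bilateral filtering suits depth maps better than mean filtering alone.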
In a specific implementation, it should be noted that a depth image is generally considered 2.5D: the depth value z of the three-dimensional coordinate (x, y, z) is projected into two-dimensional space and expressed as a gray value from 0 to 255. If the parameters of the camera (e.g. the Lytro Illum light-field camera) are known, the depth information can be projected into three-dimensional space according to the projection model of the camera to obtain the (x, y, z) coordinates of the target; if the camera parameters cannot be obtained, the depth values are scaled to spatial z values by a preset ratio according to the size of the object and the size of the image, approximately expressing the 3D shape of the target object.
It should be noted that for the Lytro Illum and other light-field cameras, the camera parameters need to be obtained by camera calibration: following the traditional camera calibration method, multiple checkerboard images (10-20) are shot at different angles and the intrinsic and extrinsic parameters of the camera are computed.
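Given calibrated intrinsics, the projection of the depth map into 3D space reduces to inverting the pinhole model. A sketch, where fx, fy, cx, cy would come from the checkerboard calibration above:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a (h, w) depth map into a (h, w, 3) grid of 3D points
    using the pinhole projection model: X = (u - cx) * z / fx, etc."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]          # pixel coordinates
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.stack([X, Y, depth.astype(float)], axis=-1)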
Step S1023: based on the initial 3D mesh model of the target object, obtain the corresponding initial normal field (i.e. the initial surface normals; as shown in Fig. 6 and Fig. 7, where Fig. 7 represents the normal map formed by mapping normal vectors to color values).
In the present invention, it should be noted that in a 3D mesh model, each spatial point p(X, Y, Z) has a tangent plane to the mesh surface at that point; the directed vector perpendicular to this tangent plane is called the normal vector, expressed as n. By computing the tangent plane at each point p over all the meshes, the normal vector of that point is obtained, finally generating a normal field that can express the 3D shape of the target object.
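For a depth map back-projected into a regular (h, w, 3) point grid, the tangent plane at each point is spanned by the two grid tangent directions, and the normal is their cross product. A minimal sketch of this normal-field construction (finite-difference tangents are one common choice, not necessarily the patent's exact method):

```python
import numpy as np

def normal_field(points):
    """Per-point unit normals for a (h, w, 3) point grid: the cross
    product of the two tangent directions of the surface."""
    tu = np.gradient(points, axis=1)   # tangent along image x
    tv = np.gradient(points, axis=0)   # tangent along image y
    n = np.cross(tu, tv)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)  # unit length
    return n
```

For a planar patch the result is a constant normal, as expected; visualizing (n + 1) / 2 as RGB gives exactly the kind of normal map shown in Fig. 7.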
Step S103: from the initial color image and initial normal field of the target object, analyze and compute the surface reflectance of the target object (after most ambiguous points are eliminated, the reflectance can accurately express the reflectance properties of the surface pixels of the target object).
In the present invention, step S103 specifically comprises the following sub-steps:
Step S1031: process the initial color image of the target object (as shown in Fig. 2) to obtain the corresponding chromaticity map (as shown in Fig. 8).
In the present invention, it should be noted that, as can be found from the chromaticity map, the chromaticity values of pixels in shadow regions caused by occlusion and inter-reflection are ambiguous and cannot correctly express the reflectance properties of the target object. By clustering the chromaticity map with the existing K-means algorithm, the ambiguous pixel regions can be found, and the brightness-variation regions caused by occlusion and inter-reflection (i.e. in the chromaticity map corresponding to the initial color image) can be obtained.
Step S1032: apply threshold segmentation to the chromaticity map and extract its edge-point information (this is because the chromaticity values of shadow regions caused by occlusion and inter-reflection are ambiguous and cannot express the reflectance properties of the object).
In the present invention, it should be noted that the threshold segmentation of the chromaticity map is specifically: use an edge-detection operator (e.g. an existing edge-detection algorithm such as Canny or Sobel) to detect edges in the chromaticity map and extract the edge pixels; dilating the edge points can also be applied as an optimization, so as to extract all edge pixels as far as possible.
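The two operations of this sub-step can be sketched as follows: a chromaticity map (color normalized by total intensity, which largely removes shading) and a Sobel edge map. Sobel is one of the operators the text names; the threshold value is an illustrative assumption:

```python
import numpy as np

def chromaticity(rgb):
    """Per-pixel chromaticity: each channel divided by R+G+B."""
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb / np.where(s == 0, 1, s)   # guard against black pixels

def sobel_edges(gray, thresh=0.5):
    """Binary edge map from the Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(gray, 1, mode='edge')
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (kx * win).sum()
            gy[i, j] = (ky * win).sum()
    return np.hypot(gx, gy) > thresh
```

Running the edge detector on each chromaticity channel (or on their magnitude) yields the edge-point information used in the next sub-step.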
Step S1033: according to the edge-point information or chromaticity values of the chromaticity map, divide all surface regions of the chromaticity map by reflectance, and assign different labels to surface regions with different reflectance (i.e. different reflectance properties).
For the present invention, dividing all surface regions of the chromaticity map by reflectance according to its chromaticity values specifically comprises the following step:
for any two surface regions in the chromaticity map, judge whether the chromaticity difference between them reaches a preset value; if so, define them as surface regions with different reflectance (i.e. different reflectance properties) and assign different labels.
It should be noted that a label can be any marking that distinguishes two surface regions, for example alphabetic labels such as A and B, numeric labels such as 1 and 2, or of course other labels.
It should also be noted that, in a specific implementation, dividing all surface regions of the chromaticity map by reflectance according to its edge-point information specifically comprises the following step:
for any two pixels p and q in the chromaticity map, check whether another edge point (i.e. an edge point matching the chromaticity map's edge-point information) lies on the line segment between them; if so, define them as belonging to surface regions with different reflectance (i.e. different reflectance properties) and assign different labels, because this likewise shows that they belong to surface regions with different reflectance.
In the present invention, all surface regions included in the chromaticity map are thus divided by reflectance according to either the edge-point information or the chroma values of the chromaticity map; accordingly, for any two surface regions in the chromaticity map, if an edge point lies on the line between any pixel p of one region and any pixel q of the other, the two surface regions to which they belong have different reflection properties.
Step S1034: for each surface region having different reflectance (i.e., different reflection properties), compute its chroma mean, and judge whether the chroma difference between this chroma mean and a preset chroma value reaches a predetermined threshold, so as to judge whether it is an ambiguous pixel region (i.e., contains ambiguous pixels); if so, define it as an ambiguous pixel region and, based on Euclidean distance, filter to eliminate the surface region that is an ambiguous pixel region;
It should be noted that, for each surface region having different reflectance, its chroma mean Ch̄ is computed; a point p is then an ambiguous point if |Ch_p − Ch̄| > τ.
In the present invention, it should be noted that Ch̄ is the mean chroma value of the region, Ch_p is the chroma value of any point p, and τ is a cutoff value; a user of the present invention can set the value of τ in advance by experiment.
In a specific implementation, for the present invention, the filtering based on Euclidean distance and chroma difference is computed as follows:

I'_p = (1/γ) Σ_{q∈Ω_p} ω_d(p, q) · ω_c(p, q) · I_q;

wherein Ω_p is the local neighbourhood of pixel p, I_p is the colour intensity value of pixel p, γ is a normalisation coefficient, ω_d is the spatial-distance weight between pixel p and its neighbourhood pixel q (i.e., based on their Euclidean distance), ω_c is the chroma-difference weight between pixel p and its neighbourhood pixel q, d_p and d_q are the spatial positions (coordinate values) of pixels p and q, and Ch_p and Ch_q are the chroma values of pixels p and q.
As for the chroma difference between the chroma mean and the preset chroma value: if the chroma difference of a surface region exceeds the preset chroma threshold, the surface region is judged to contain ambiguous pixels caused by shadows and mutual reflections.
In a specific implementation, for the present invention, filtering eliminates the ambiguous pixels caused by shadows and mutual reflections: using the chroma values of the local neighbourhood pixels, a mean filter computes the correct chroma value of each ambiguous point, thereby removing the influence of shadows and mutual reflections.
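The weighted mean filtering above can be sketched as follows. The Gaussian forms of the two weights are an assumption for illustration; the patent only names a spatial (Euclidean-distance) weight and a chroma-difference weight, and the parameters `radius`, `sigma_d`, `sigma_c` are hypothetical.

```python
import numpy as np

def refine_chroma(chroma, ambiguous, radius=2, sigma_d=1.5, sigma_c=0.1):
    """Replace the chroma of ambiguous pixels by a weighted mean over the
    local neighbourhood, weighting neighbours by spatial distance and chroma
    similarity (Gaussian weights assumed)."""
    h, w = chroma.shape
    out = chroma.copy()
    ys, xs = np.nonzero(ambiguous)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch = chroma[y0:y1, x0:x1]
        yy, xx = np.mgrid[y0:y1, x0:x1]
        keep = ~ambiguous[y0:y1, x0:x1]          # average over reliable pixels only
        w_d = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_d ** 2))
        w_c = np.exp(-((patch - chroma[y, x]) ** 2) / (2 * sigma_c ** 2))
        wts = w_d * w_c * keep
        if wts.sum() > 0:
            out[y, x] = (wts * patch).sum() / wts.sum()   # gamma = sum of weights
    return out
```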
Step S1035: compute the reflectance of all surface regions in the filtered chromaticity map, finally obtaining the surface reflectance of the preset photographic subject.
It should be noted that, for the present invention, a known, preset algorithm (Qifeng Chen and Vladlen Koltun, "A simple model for intrinsic image decomposition with depth cues," in Computer Vision (ICCV), IEEE International Conference on, 2013) may specifically be used to compute the reflectance of all surface regions in the filtered chromaticity map, thereby finally obtaining the surface reflectance of the preset photographic subject.
In the present invention, it should be noted that the input data are the light-field centre-view sub-image and its corresponding initial depth map: step S102 obtains the initial normal field from the initial depth map, and step S103 obtains an accurate chromaticity map of the target surface from the centre-view sub-image. Based on the initial normal field and the chromaticity map, the image is decomposed into its intrinsic attributes by the preset algorithm (Qifeng Chen and Vladlen Koltun, "A simple model for intrinsic image decomposition with depth cues," in Computer Vision (ICCV), IEEE International Conference on, 2013): the reflection attribute A_p of each surface pixel is extracted by building an energy function comprising a data term and regularisation terms and solving it by linear least squares, which finally yields the reflectance of all surface regions in the filtered chromaticity map. The regularisation term is:

E_reg = ω_A·E_A + ω_D·E_D + ω_N·E_N + ω_C·E_C;

wherein I_p is the colour intensity value of pixel p, A_p is the reflectance, D_p is the direct irradiance, N_p is the indirect irradiance, C_p is the illumination colour, and ω_A, ω_D, ω_N, ω_C are the weights of the corresponding intrinsic attributes; the regularisation energy functions E_A, E_D, E_N, E_C of the corresponding attributes are defined in terms of the following quantities: α_{p,q} is the chroma-difference weight of the local neighbourhood pixels, a_p is the reflectance of pixel p, d_p is the spatial position (coordinate value) of pixel p, Ch_p is the chroma value of pixel p, and n_p is the normal vector of pixel p.
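Each term of such an energy is quadratic, so the joint minimisation reduces to one stacked linear least-squares solve. A generic sketch follows (the specific data and regularisation matrices of the cited intrinsic-decomposition model are not reproduced here; each block `(w, A_k, b_k)` stands for one weighted term w·‖A_k·x − b_k‖²):

```python
import numpy as np

def solve_quadratic_energy(blocks):
    """Minimise  sum_k w_k * ||A_k x - b_k||^2  by stacking the weighted
    blocks into a single linear least-squares problem."""
    A = np.vstack([np.sqrt(w) * Ak for w, Ak, bk in blocks])
    b = np.concatenate([np.sqrt(w) * bk for w, Ak, bk in blocks])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

This stacking trick is standard: weighting each residual block by √w makes the stacked squared norm equal the weighted energy.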
Step S104: model the light-field image corresponding to the preset photographic subject according to the initial normal field and the surface reflectance of the preset photographic subject, obtaining the illumination model of the preset photographic subject (as shown in Figure 9) and the illumination parameters of this illumination model;
In the present invention, in a specific implementation, illumination modelling of the preset photographic subject is performed according to its initial normal field and surface reflectance. Because light-field images are captured under continuously varying natural illumination, measuring the illumination attributes directly is very difficult; therefore, a preset quadratic function of the normal direction and the reflectance is used to model the light-field image (i.e., the light-field luminance image) corresponding to the preset photographic subject, obtaining the illumination model of the preset photographic subject and the illumination parameters of this model. The formula of the quadratic function is:

I = s(η) = η^T·A·η + b^T·η + c;
η_{x,y} = ρ_{x,y} · n_{x,y};

wherein η_{x,y} is the product of the reflectance ρ_{x,y} and the unit normal n_{x,y}, and A, b, c are the illumination parameters of the illumination model. The illumination parameters are computed by a linear least-squares optimisation algorithm, and a globally smooth surface of the preset photographic subject can be obtained based on this global illumination model (as shown in Figure 9).
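Because I = η^T·A·η + b^T·η + c is linear in the unknown parameters (A, b, c), the fit reduces to ordinary linear least squares. A sketch under the assumption that η is a 3-vector per pixel and A is symmetric (10 unknowns in total):

```python
import numpy as np

def fit_illumination(eta, intensity):
    """Fit I = eta^T A eta + b^T eta + c (A symmetric 3x3) by linear least
    squares; eta is (N, 3) = reflectance * unit normal, intensity is (N,)."""
    x, y, z = eta.T
    # Design matrix: 6 quadratic monomials, 3 linear terms, 1 constant.
    M = np.column_stack([x * x, y * y, z * z,
                         2 * x * y, 2 * x * z, 2 * y * z,
                         x, y, z, np.ones_like(x)])
    p, *_ = np.linalg.lstsq(M, intensity, rcond=None)
    A = np.array([[p[0], p[3], p[4]],
                  [p[3], p[1], p[5]],
                  [p[4], p[5], p[2]]])
    return A, p[6:9], p[9]

def shade(eta, A, b, c):
    """Evaluate the fitted illumination model s(eta) per sample."""
    return np.einsum('ni,ij,nj->n', eta, A, eta) + eta @ b + c
```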
Step S105: optimise the initial normal field of the preset photographic subject according to its surface reflectance and the illumination parameters of its illumination model, recovering the geometric detail of the object surface;
It should be noted that the illumination parameters are added when optimising the normal field, so that the surface shape of the finally built three-dimensional 3D shape image of the preset photographic subject is smooth.
In the present invention, in a specific implementation, the initial normal field of the preset photographic subject is optimised according to its surface reflectance and the illumination parameters of its illumination model. It should be noted that the surface reflectance and the illumination model of the preset photographic subject are computed and optimised based on the initial normal field, but the initial normal field is generated from the initial depth map and contains much noise and many ambiguous values. The surface of the preset photographic subject contains much high-frequency local geometric detail, the normal of each point is unique, and an accurate normal field is an indispensable factor for three-dimensional reconstruction; therefore, to recover the local geometric detail of the target surface, the normal field must be reconstructed by optimisation. According to the surface reflectance of the preset photographic subject and the illumination parameters of its illumination model, a minimised energy function is built from constraint terms characterising illumination consistency, local smoothness, the initial prior, and unit length, and the initial normal field of the preset photographic subject is thereby optimised.
For the present invention, step S105 specifically includes the following steps:
Step S1051: according to the surface reflectance of the preset photographic subject and the illumination parameters of its illumination model, optimise the normal of each pixel of the surface of the preset photographic subject using a preset energy function, i.e., optimise the initial normal field (as shown in Figures 10 and 11);
Step S1052: optimise and solve the preset energy function using the nonlinear least-squares LM (Levenberg-Marquardt) algorithm, obtaining the optimised normal field (as shown in Figures 12 and 13).
For the present invention, it should be noted that light-field images are captured under natural illumination, where measuring the illumination attributes is very difficult and the lighting environment changes. The present invention therefore models the light-field luminance image with a quadratic shading function of the normal direction n and the reflectance ρ; solves the estimated global illumination parameters with the least-squares LM optimisation algorithm; and, in particular, optimises the normal of each pixel of the object surface by minimising an energy function that takes the reflectance ρ and the illumination model parameters as input.
Wherein the preset energy function E(n) includes an image-brightness constraint E_i(n), a local normal smoothness constraint E_sh(n), an initial normal constraint E_r(n), and a unit-vector constraint E_u(n); the preset energy function is computed as follows:

E(n) = λ_i·E_i(n) + λ_sh·E_sh(n) + λ_r·E_r(n) + λ_u·E_u(n);

It should be noted that, herein, I_p is the colour intensity value of pixel p, s(η_p) is the brightness value output by the illumination model for pixel p, and E_i(n) is the consistency constraint between the image brightness output by the illumination model and the true brightness value; n_p is the normal vector of pixel p, and E_sh(n) is the smoothness constraint on the local neighbourhood of the surface of the preset photographic subject; E_r(n) is the consistency constraint between the optimised normal and the initial normal vector of pixel p; and E_u(n) is the constraint that the optimised normal vector must be a unit vector.
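The four-term energy can be made concrete on a single pixel. The sketch below evaluates E(n) and reduces it by numerical gradient descent; this is an illustrative stand-in for the Levenberg-Marquardt solver named in the patent, and the weights, the toy shading function, and the step size are assumptions.

```python
import numpy as np

def energy(n, n0, neighbours, shade_fn, I_obs, lam):
    """Four-term energy for one pixel's normal n: image-brightness,
    local smoothness, initial-normal, and unit-length terms."""
    li, lsh, lr, lu = lam
    e_i = (shade_fn(n) - I_obs) ** 2                      # E_i: brightness consistency
    e_sh = sum(np.sum((n - m) ** 2) for m in neighbours)  # E_sh: local smoothness
    e_r = np.sum((n - n0) ** 2)                           # E_r: stay near initial normal
    e_u = (n @ n - 1.0) ** 2                              # E_u: unit length
    return li * e_i + lsh * e_sh + lr * e_r + lu * e_u

def refine_normal(n0, neighbours, shade_fn, I_obs, lam, steps=200, lr=0.05):
    """Minimise E(n) by numerical gradient descent (a simple stand-in
    for the nonlinear least-squares LM solver)."""
    n = n0.astype(float).copy()
    for _ in range(steps):
        g = np.zeros(3)
        for k in range(3):                                # central differences
            d = np.zeros(3)
            d[k] = 1e-5
            g[k] = (energy(n + d, n0, neighbours, shade_fn, I_obs, lam)
                    - energy(n - d, n0, neighbours, shade_fn, I_obs, lam)) / 2e-5
        n -= lr * g
    return n
```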
Step S106: depth-enhance the initial depth image of the preset photographic subject according to the optimised normal field, obtaining the final depth-enhanced initial depth image (i.e., the present invention enhances the initial depth map using the high-precision optimised normals);
In the present invention, in a specific implementation, the initial depth map can be enhanced based on the optimised normals to obtain a high-quality depth map, after which geometric three-dimensional mesh reconstruction is carried out. Depth enhancement uses a preset algorithm (Diego Nehab, Szymon Rusinkiewicz, James Davis, and Ravi Ramamoorthi, "Efficiently combining positions and normals for precise 3D geometry," in ACM Transactions on Graphics (TOG), 2005); the specific algorithm is:
The spatial coordinates and the local information of the normal field are combined for enhancement, and the energy function is solved by weighted least squares to obtain a high-accuracy depth map. The formula is defined by two terms: E_p, the energy function of the spatial coordinates, wherein P(x, y) is the three-dimensional spatial coordinate of image-plane pixel (x, y), Z(x, y) is the depth value, f_x and f_y are the focal lengths of the camera, P_i is the optimised spatial coordinate, and the measured spatial coordinate is obtained from the initial depth map; and E_n, the energy function of the normal field, wherein T_x and T_y are the surface tangents at pixel (x, y) of the preset photographic subject, constrained to be orthogonal to the optimised normal vector corresponding to the spatial coordinate P_i.
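The position-plus-normal combination can be illustrated on a 1-D depth profile, where the normal term becomes a constraint on the slope. This is a deliberately simplified sketch of the Nehab-style weighted least squares (the patent works on 2-D grids with tangent constraints); the weights `w_p` and `w_n` are assumed for illustration.

```python
import numpy as np

def enhance_depth_1d(z0, g, w_p=0.1, w_n=1.0):
    """Weighted least squares on a 1-D depth profile: keep z close to the
    measured depth z0 (position term E_p) while matching the slopes g
    derived from the optimised normals (normal term E_n)."""
    n = len(z0)
    # Position term: w_p * ||z - z0||^2  -> identity rows
    A_p = np.sqrt(w_p) * np.eye(n)
    b_p = np.sqrt(w_p) * z0
    # Normal term: w_n * ||D z - g||^2, D = forward-difference operator
    D = np.diff(np.eye(n), axis=0)          # shape (n-1, n)
    A_n = np.sqrt(w_n) * D
    b_n = np.sqrt(w_n) * g
    A = np.vstack([A_p, A_n])
    b = np.concatenate([b_p, b_n])
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z
```

The position term anchors the absolute depth level while the normal term smooths and sharpens the local slopes, which is the intuition behind the depth enhancement of step S106.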
Step S107: project the depth-enhanced initial depth image into 3D space, rebuilding the 3D mesh model of the preset photographic subject (as shown in Figure 14). The present invention can therefore achieve light-field stereoscopic imaging and display of the preset photographic subject, obtain a high-quality depth image, and guarantee the imaging quality, which helps expand the scope of popularisation and application of light-field imaging.
It should be noted that, for the present invention, the known algorithm (Diego Nehab, Szymon Rusinkiewicz, James Davis, and Ravi Ramamoorthi, "Efficiently combining positions and normals for precise 3D geometry," in ACM Transactions on Graphics (TOG), 2005) can specifically be used to depth-enhance the initial depth image of the preset photographic subject according to the optimised normal field, thereby obtaining the final, high-quality depth-enhanced initial depth image.
In the present invention, it should be noted that the depth-enhanced initial depth image is projected into 3D space according to a two-dimensional-to-three-dimensional projection model, wherein (x, y) are the image-plane coordinates of the preset photographic subject, (X, Y, Z) are the coordinates of the surface of the preset photographic subject in 3D space, the focal lengths and centre coordinates of the camera are the intrinsic parameters, and R and T are, respectively, the rotation and translation matrices of the projective transformation.
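The formula image for the projection model is not reproduced in the text above; the sketch below is the standard pinhole back-projection it implies. The intrinsic values f_x, f_y, c_x, c_y and the assumption that R and T map camera coordinates to world coordinates are hypothetical choices for illustration.

```python
import numpy as np

def backproject(x, y, Z, fx, fy, cx, cy):
    """Lift image pixel (x, y) with depth Z into 3-D camera space using the
    pinhole model: X = (x - cx) * Z / fx,  Y = (y - cy) * Z / fy."""
    X = (x - cx) * Z / fx
    Y = (y - cy) * Z / fy
    return np.array([X, Y, Z])

def to_world(P_cam, R, T):
    """Apply the projective-transformation rotation R and translation T
    (camera-to-world convention assumed here)."""
    return R @ P_cam + T
```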
In summary, compared with the prior art, the present invention provides a depth image processing method based on the light field: from the 4D light field it can rebuild the shape of the captured target, achieve light-field stereoscopic imaging and display of the captured target, obtain a high-quality depth image, and guarantee the imaging quality, which helps expand the scope of popularisation and application of light-field imaging, promotes the development of light-field imaging applications, improves the user's product experience, and is of great practical significance.
The above is only the preferred embodiment of the present invention. It should be noted that, for a person of ordinary skill in the art, some improvements and modifications can also be made without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (7)
1. A depth image processing method based on a light field, characterized in that it includes the steps of:
a first step: shooting with a light-field acquisition device to obtain an initial 4D light-field colour image and an initial depth image of a preset photographic subject;
a second step: preprocessing the obtained initial 4D light-field colour image and initial depth image of the preset photographic subject, obtaining an initial 3D mesh model of the preset photographic subject and a corresponding initial normal field;
a third step: analysing and computing, from the initial colour image and the initial normal field of the preset photographic subject, the surface reflectance of the preset photographic subject;
a fourth step: modelling the light-field image corresponding to the preset photographic subject according to the initial normal field and the surface reflectance of the preset photographic subject, obtaining the illumination model of the preset photographic subject and the illumination parameters of this illumination model;
a fifth step: optimising the initial normal field of the preset photographic subject according to the surface reflectance and the illumination parameters of the illumination model of the preset photographic subject;
a sixth step: depth-enhancing the initial depth image of the preset photographic subject according to the optimised normal field, obtaining a depth-enhanced initial depth image;
a seventh step: projecting the depth-enhanced initial depth image into 3D space, rebuilding the 3D mesh model of the preset photographic subject.
2. the method for claim 1, it is characterised in that described second step includes following sub-step:
Described initial 4D light field coloured image and ID image are set up mask, removes ambient interferences therein;
Depth image is carried out pretreatment, projects in 3d space, it is thus achieved that preset the initial 3D grid model of photographic subjects;
Initial 3D grid model based on described default photographic subjects, it is thus achieved that preset the initial normal direction field that photographic subjects is corresponding.
3. the method for claim 1, it is characterised in that described 3rd step includes following sub-step:
The initial color image of described default photographic subjects is processed, it is thus achieved that corresponding chromaticity diagram;
To described chromaticity diagram by threshold segmentation, extract the marginal points information that described chromaticity diagram has;
The marginal points information having according to described chromaticity diagram or chromatic value, all surfaces region including described chromaticity diagram is entered
Row reflectance divides, and the region, surface with different reflectivity is set up different labellings;
To each region, surface with different reflectivity, calculate respectively and obtain its colourity average, and equal by judging this colourity
Whether the colourity difference between value and default chromatic value reaches predetermined threshold value, judges whether it is ambiguity pixel region, if
It is then to be defined as ambiguity pixel region, and based on Euclidean distance, filtering eliminates the region, surface as ambiguity pixel region;
The reflectance in all surface region in chromaticity diagram after calculating after filtering, the final surface obtaining described default photographic subjects
Reflectance.
4. The method as claimed in claim 3, characterized in that the operation of dividing all surface regions of the chromaticity map by reflectance according to the edge-point information of the chromaticity map specifically includes the following step:
for any two pixels in the chromaticity map, judging whether an edge point exists on the line between them; if so, defining them as belonging to surface regions having different reflectance, and setting different labels.
5. The method as claimed in claim 3, characterized in that the operation of dividing all surface regions included in the chromaticity map by reflectance according to the chroma values of the chromaticity map specifically includes the following step:
for any two surface regions in the chromaticity map, judging whether the chroma difference between them reaches a preset value; if so, defining them as surface regions having different reflectance, and setting different labels.
6. The method as claimed in claim 4, characterized in that, in the fourth step, the light-field image of the preset photographic subject is modelled, according to the initial normal field and the surface reflectance of the preset photographic subject, using a preset quadratic function of the normal direction and the reflectance, obtaining the illumination model of the preset photographic subject and the illumination parameters of this illumination model;
the formula of the quadratic function is:
I = s(η) = η^T·A·η + b^T·η + c;
η_{x,y} = ρ_{x,y} · n_{x,y};
wherein η_{x,y} is the product of the reflectance ρ_{x,y} and the unit normal n_{x,y}, A, b, c are the illumination parameters of the illumination model, and the illumination parameters are computed by the linear least-squares LM optimisation algorithm.
7. The method as claimed in any one of claims 1 to 6, characterized in that the fifth step includes the following sub-steps:
optimising the initial normal field, according to the surface reflectance of the preset photographic subject and the illumination parameters of the illumination model, using a preset energy function including a colour-image brightness constraint, a local normal smoothness constraint, a normal prior constraint, and a unit-vector constraint;
optimising and solving the preset energy function using the nonlinear least-squares LM optimisation algorithm, obtaining the optimised normal field.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610541262.8A CN106228507B (en) | 2016-07-11 | 2016-07-11 | A kind of depth image processing method based on light field |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106228507A true CN106228507A (en) | 2016-12-14 |
CN106228507B CN106228507B (en) | 2019-06-25 |
Family
ID=57519550
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610541262.8A Active CN106228507B (en) | 2016-07-11 | 2016-07-11 | A kind of depth image processing method based on light field |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106228507B (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106780705A (en) * | 2016-12-20 | 2017-05-31 | 南阳师范学院 | Suitable for the depth map robust smooth filtering method of DIBR preprocessing process |
CN107228625A (en) * | 2017-06-01 | 2017-10-03 | 深度创新科技(深圳)有限公司 | Three-dimensional rebuilding method, device and equipment |
CN108805921A (en) * | 2018-04-09 | 2018-11-13 | 深圳奥比中光科技有限公司 | Image-taking system and method |
CN109087347A (en) * | 2018-08-15 | 2018-12-25 | 杭州光珀智能科技有限公司 | A kind of image processing method and device |
CN109146934A (en) * | 2018-06-04 | 2019-01-04 | 成都通甲优博科技有限责任公司 | A kind of face three-dimensional rebuilding method and system based on binocular solid and photometric stereo |
CN109166176A (en) * | 2018-08-23 | 2019-01-08 | 百度在线网络技术(北京)有限公司 | The generation method and device of three-dimensional face images |
CN109427086A (en) * | 2017-08-22 | 2019-03-05 | 上海荆虹电子科技有限公司 | 3-dimensional image creation device and method |
CN109685882A (en) * | 2017-10-17 | 2019-04-26 | 辉达公司 | Using light field as better background in rendering |
CN109974625A (en) * | 2019-04-08 | 2019-07-05 | 四川大学 | A kind of color body structural light three-dimensional measurement method based on form and aspect optimization gray scale |
CN110121733A (en) * | 2016-12-28 | 2019-08-13 | 交互数字Ce专利控股公司 | The method and apparatus of joint segmentation and 3D reconstruct for scene |
CN110417990A (en) * | 2019-03-25 | 2019-11-05 | 李萍 | APP activation system based on target analysis |
CN110455815A (en) * | 2019-09-05 | 2019-11-15 | 西安多维机器视觉检测技术有限公司 | A kind of method and system of electronic component open defect detection |
CN110471061A (en) * | 2019-07-16 | 2019-11-19 | 青岛擎鹰信息科技有限责任公司 | A kind of emulation mode and its system for realizing airborne synthetic aperture radar imaging |
CN110686652A (en) * | 2019-09-16 | 2020-01-14 | 武汉科技大学 | Depth measurement method based on combination of depth learning and structured light |
CN111080689A (en) * | 2018-10-22 | 2020-04-28 | 杭州海康威视数字技术股份有限公司 | Method and device for determining face depth map |
CN111147745A (en) * | 2019-12-30 | 2020-05-12 | 维沃移动通信有限公司 | Shooting method, shooting device, electronic equipment and storage medium |
CN111207762A (en) * | 2019-12-31 | 2020-05-29 | 深圳一清创新科技有限公司 | Map generation method and device, computer equipment and storage medium |
CN111325780A (en) * | 2020-02-17 | 2020-06-23 | 天目爱视(北京)科技有限公司 | 3D model rapid construction method based on image screening |
CN111343444A (en) * | 2020-02-10 | 2020-06-26 | 清华大学 | Three-dimensional image generation method and device |
CN111602177A (en) * | 2018-12-20 | 2020-08-28 | 卡尔蔡司光学国际有限公司 | Method and apparatus for generating a 3D reconstruction of an object |
CN113052970A (en) * | 2021-04-09 | 2021-06-29 | 杭州群核信息技术有限公司 | Neural network-based light intensity and color design method, device and system and storage medium |
CN113436325A (en) * | 2021-07-30 | 2021-09-24 | 北京达佳互联信息技术有限公司 | Image processing method and device, electronic equipment and storage medium |
CN113989473A (en) * | 2021-12-23 | 2022-01-28 | 北京天图万境科技有限公司 | Method and device for relighting |
CN116109520A (en) * | 2023-04-06 | 2023-05-12 | 南京信息工程大学 | Depth image optimization method based on ray tracing algorithm |
CN116447978A (en) * | 2023-06-16 | 2023-07-18 | 先临三维科技股份有限公司 | Hole site information detection method, device, equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104966289A (en) * | 2015-06-12 | 2015-10-07 | 北京工业大学 | Depth estimation method based on 4D light field |
US20150339824A1 (en) * | 2014-05-20 | 2015-11-26 | Nokia Corporation | Method, apparatus and computer program product for depth estimation |
CN105357515A (en) * | 2015-12-18 | 2016-02-24 | 天津中科智能识别产业技术研究院有限公司 | Color and depth imaging method and device based on structured light and light-field imaging |
Non-Patent Citations (1)
Title |
---|
Zhang Chi et al.: "Light field imaging technology and its applications in computer vision", Journal of Image and Graphics *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106228507A (en) | A kind of depth image processing method based on light field | |
Furukawa et al. | Accurate, dense, and robust multiview stereopsis | |
Nishida et al. | Procedural modeling of a building from a single image | |
CN103839277B (en) | Mobile augmented reality registration method for outdoor large-scale natural scenes | |
CN104330074B (en) | Intelligent surveying and mapping platform and realization method thereof | |
Zhang et al. | A UAV-based panoramic oblique photogrammetry (POP) approach using spherical projection | |
Sajadi et al. | Autocalibration of multiprojector cave-like immersive environments | |
CN110728671B (en) | Dense reconstruction method of texture-free scene based on vision | |
Varol et al. | Monocular 3D reconstruction of locally textured surfaces | |
Starck et al. | The multiple-camera 3-d production studio | |
Zhu et al. | Video-based outdoor human reconstruction | |
CN105913444B (en) | Livestock body contour reconstruction method and body condition scoring method based on soft laser ranging | |
CN109493384A (en) | Camera position and orientation estimation method, system, equipment and storage medium | |
CN104200476B (en) | Method for solving camera intrinsic parameters using circular motion in a two-mirror device | |
Lin et al. | Vision system for fast 3-D model reconstruction | |
CN108961151B (en) | Method for converting a large three-dimensional scene captured by a dome-screen camera into sectional views | |
CN107610219A (en) | Geometry-cue-aware coarse-to-dense pixel-level point cloud densification method for three-dimensional scene reconstruction | |
Vidanapathirana et al. | Plan2scene: Converting floorplans to 3d scenes | |
Lee et al. | Interactive 3D building modeling using a hierarchical representation | |
Ran et al. | High-precision human body acquisition via multi-view binocular stereopsis | |
Coorg | Pose imagery and automated three-dimensional modeling of urban environments | |
Gava et al. | Dense scene reconstruction from spherical light fields | |
Luo et al. | Sparse rgb-d images create a real thing: a flexible voxel based 3d reconstruction pipeline for single object | |
CN111598939B (en) | Human body circumference measuring method based on multi-vision system | |
Zhang et al. | A Robust Multi‐View System for High‐Fidelity Human Body Shape Reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP03 | Change of name, title or address |
Address after: 300457 Unit 1001, Block 1, MSD-G1, TEDA, No. 57, 2nd Street, Binhai New Area Economic and Technological Development Zone, Tianjin
Patentee after: Tianjin Zhongke Intelligent Identification Co., Ltd.
Address before: 300457 No. 57, Second Avenue, Economic and Technological Development Zone, Binhai New Area, Tianjin
Patentee before: Tianjin Zhongke Intelligent Identification Industry Technology Research Institute Co., Ltd.