CN103841327A - Four-dimensional light field decoding preprocessing method based on original image
- Publication number: CN103841327A (application CN201410067394.2A; granted as CN103841327B)
- Authority: CN (China)
- Legal status: Granted (the status listed is an assumption by Google and is not a legal conclusion)
Classification
- Image Processing (AREA)
Abstract
The invention discloses a four-dimensional light field decoding preprocessing method based on the original image. The method comprises: collecting an original image of a scene with a light field imaging device; calibrating the original image to obtain its center coordinate set; resampling the original image according to the calibration information to obtain a sub-aperture image array; removing vignetting from the edge sub-aperture images in the array to obtain a devignetted sub-aperture image array; and completing the four-dimensional light field decoding with the devignetted array to obtain a parameterized representation of the four-dimensional light field. In the two key preprocessing steps, calibration and vignetting removal, the method breaks the dependence of traditional preprocessing on a white image. It improves the flexibility of light field imaging in practice, can widen the range of applications of light field imaging, and plays a positive role in promoting its adoption and development.
Description
Technical field
The invention belongs to the technical fields of light field imaging, image processing, and computer vision, and relates to a four-dimensional light field decoding preprocessing method, in particular to an adaptive decoding preprocessing method that does not rely on a white image.
Background art
Light field imaging uses a special imaging arrangement to capture four-dimensional light field data, significantly widening the class of data that image capture can acquire. It therefore brings more information to image reconstruction and is currently applied in fields such as extended depth of field and depth estimation. Light field imaging mainly takes three forms: microlens array, camera array, and mask-based or other designs. Among these, the microlens array form, which obtains light field data through a microlens array placed between the main lens and the sensor, is currently the most common.
Decoding four-dimensional light field information from the raw image collected by a microlens-array light field camera is the first step of light field data processing, and the preprocessing for this decoding determines whether the decoding is accurate. Four-dimensional light field decoding preprocessing mainly comprises calibration and vignetting removal. Calibration precisely locates the imaging center of each microlens; this calibration information is then used to decode the captured raw light field image accurately. Because the microlens array is never perfectly registered with the sensor pixels (there is always some offset and rotation), the decoding process depends on accurate calibration of the light field camera. Moreover, the calibration is not fixed: it changes with lens zoom and similar adjustments. Calibration is therefore both the key to light field data processing and the bottleneck limiting its flexibility. Vignetting removal is likewise a necessary step of the preprocessing: it eliminates the effect of the microlens partially blocking the light and improves imaging quality. The mainstream solution to both problems is to capture a white image with identical acquisition parameters as a calibration template: calibration is completed by locating the brightness maxima of the microlens images in the white image, and vignetting is removed by dividing each pixel of the raw image by the corresponding pixel of the white image.
The current commercial Lytro light field camera stores about 50 white images for different main-lens parameter settings and, during shooting, selects the matching white image for decoding according to the parameters; the user experience is good, but the lens cannot be changed. Raytrix cameras allow lens changes, but every time the lens is changed or zoomed, the user must capture a corresponding white image with a light-uniformizing device for preprocessing, which is far less convenient than a traditional SLR. Although these methods achieve good results in four-dimensional light field decoding preprocessing, they remain limited by the white image: whenever the lens is changed or zoomed, a new white image must be captured before any further data processing. This greatly reduces the flexibility of light field cameras and limits their widespread use.
Summary of the invention
To address these problems of the prior art and break the dependence of four-dimensional light field decoding preprocessing on a white image, the invention provides a white-image-free, adaptive four-dimensional light field decoding preprocessing method based on the original image, improving the flexibility of light field imaging applications.
To achieve these goals, the invention provides a four-dimensional light field decoding preprocessing method based on the original image, comprising the following steps:
Step S1: collect the original image of the scene with a light field imaging device;
Step S2: calibrate the original image to obtain its center coordinate set g_c(x);
Step S3: resample the original image using the calibration information from step S2 to obtain a sub-aperture image array;
Step S4: remove vignetting from the edge sub-aperture images in the array to obtain a devignetted sub-aperture image array;
Step S5: complete the four-dimensional light field decoding with the devignetted sub-aperture image array to obtain a parameterized representation of the four-dimensional light field.
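The five steps above can be sketched as a driver function. This is an illustrative outline only; the function names, callable interfaces, and array shapes are assumptions, not part of the patent.

```python
import numpy as np

def decode_preprocess(raw_image, calibrate, resample, devignette, decode):
    """Hypothetical driver for the five-step pipeline (S1-S5).

    `raw_image` is the raw light-field capture (S1); the four callables
    stand in for the calibration (S2), resampling (S3), devignetting (S4),
    and 4-D decoding (S5) stages described in the text.
    """
    centers = calibrate(raw_image)                # S2: center coordinate set g_c(x)
    subapertures = resample(raw_image, centers)   # S3: sub-aperture image array
    subapertures = devignette(subapertures)       # S4: remove edge vignetting
    return decode(subapertures)                   # S5: parameterized 4-D light field
```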
Based on the characteristics of microlens-array light field imaging, the invention models the calibration problem as an image feature registration problem and converts the devignetting problem into an illumination correction problem. By flexibly applying existing mature algorithms, it proposes a four-dimensional light field decoding preprocessing method based on the original image that solves the key calibration and devignetting problems of the preprocessing, breaks the dependence of traditional preprocessing on a white image, improves the flexibility of light field imaging applications, is expected to widen the scope of light field imaging applications, and promotes their development.
Brief description of the drawings
Fig. 1 illustrates white-image calibration: Fig. 1(a) is a white image, Fig. 1(b) a partial enlargement of Fig. 1(a), and Fig. 1(c) the calibration result;
Fig. 2 is the flow chart of the traditional four-dimensional light field decoding preprocessing method;
Fig. 3 is the flow chart of the four-dimensional light field decoding preprocessing method based on the original image provided by the invention;
Fig. 4 shows a raw image collected by a light field camera: Fig. 4(a) is the collected original image and Fig. 4(b) a partial enlargement of Fig. 4(a);
Fig. 5 illustrates the vignetting effect in light field imaging: Fig. 5(a) is a sub-aperture image array with vignetting, Fig. 5(b) the central sub-aperture image of Fig. 5(a), and Fig. 5(c) an edge sub-aperture image of Fig. 5(a);
Fig. 6 is the sub-aperture image array after vignetting removal;
Fig. 7 is the flow chart of the calibration method based on the original image provided by the invention;
Fig. 8 illustrates the calibration parameters of light field imaging;
Fig. 9 illustrates the method for determining the initial keypoint set provided by the invention: Fig. 9(a) is the original image, Fig. 9(b) the image to be calibrated, Fig. 9(c) the calibration region, and Fig. 9(d) the initial keypoint set;
Fig. 10 is the flow chart of the devignetting method provided by the invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the invention clearer, the invention is described in more detail below with reference to specific embodiments and the accompanying drawings.
Four-dimensional light field decoding is the first step of light field processing, and the calibration and devignetting operations of the preprocessing before decoding determine the quality of the decoding; current preprocessing is entirely based on a white image. Fig. 1(a) is a white image and Fig. 1(b) a partial enlargement of it; calibration locates the imaging center of each microlens, as shown in Fig. 1(c). Fig. 2 is the flow chart of the traditional four-dimensional light field decoding preprocessing method, which comprises the following steps:
Step S1: the light field imaging device collects the original image of the scene;
Step S2: the device collects a white image with the same imaging parameters as in step S1;
Step S3: calibration is completed by locating the brightness maxima of the microlens images in the white image;
Step S4: vignetting is removed by dividing the original image by the corresponding pixel values of the white image;
Step S5: the devignetted scene image from step S4 is resampled using the calibration information from step S3 to obtain the sub-aperture images; the resampling method follows D. G. Dansereau, O. Pizarro, and S. B. Williams, "Decoding, calibration and rectification for lenselet-based plenoptic cameras," in Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on, IEEE, 2013;
Step S6: the resampling result is used to complete the four-dimensional light field decoding and obtain a parameterized representation of the four-dimensional light field.
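The white-image devignetting of the traditional step S4 is a per-pixel division. A minimal sketch follows; the epsilon guard against dark white-image pixels is an implementation choice of this sketch, not something specified in the text.

```python
import numpy as np

def devignette_with_white(raw, white, eps=1e-6):
    """Classic white-image devignetting (step S4 of the traditional flow):
    divide each raw pixel by the corresponding white-image pixel.
    `eps` guards against division by zero (an assumption of this sketch)."""
    white = np.asarray(white, dtype=np.float64)
    return np.asarray(raw, dtype=np.float64) / np.maximum(white, eps)
```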
Although the traditional white-image-based preprocessing methods above achieve good results, their application is limited by the white image: after a lens change or zoom of the imaging device, a new white image must be captured for preprocessing, which greatly reduces the flexibility of light field imaging applications.
Fig. 3 is the flow chart of the four-dimensional light field decoding preprocessing method based on the original image provided by the invention; the method breaks the dependence of traditional preprocessing on the white image. As shown in Fig. 3, the method comprises the following steps:
Step S1: collect the original image of the scene with a light field imaging device;
Fig. 4(a) shows the original image collected according to one embodiment of the invention.
Step S2: calibrate the original image to obtain its center coordinate set g_c(x);
Fig. 4(b) is a partial enlargement of the original image in Fig. 4(a). This step calibrates using the edge information of the microlens images in the original image: the dark gaps between the microlens images (the white disks in Fig. 4(b)) are used for calibration.
The calibration in step S2 uses the original-image-based calibration method proposed by the invention; as shown in Fig. 7, step S2 further comprises the following steps:
Step S21: establish the initial coordinate set g(x) of the microlens array centers of the original image from the sensor size of the light field imaging device and the number of microlenses, g(x) = (g_v(x), g_h(x), 1)^T, where x is the microlens index (an integer from 1 to N), N is the number of microlenses, and g_v(x), g_h(x) are the vertical and horizontal coordinates of the microlens array centers; and set the initial horizontal and vertical components of the center coordinate offset to o_h = 0 and o_v = 0. The meaning of o_h and o_v is shown in Fig. 8, where point A is the estimated center and point B the true center.
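Step S21 can be sketched as laying out an ideal grid of centers in homogeneous coordinates. The uniform pitch derived from sensor size and microlens counts, and the half-pitch border, are assumptions of this sketch; the patent only states that the set is built from the sensor size and microlens number.

```python
import numpy as np

def initial_centers(sensor_h, sensor_w, n_rows, n_cols):
    """Step S21 sketch: an ideal microlens-center grid from the sensor size
    and microlens counts, in homogeneous form (g_v, g_h, 1)^T per center.
    Uniform pitch and zero initial offset (o_h = o_v = 0) are assumed."""
    pitch_v = sensor_h / n_rows
    pitch_h = sensor_w / n_cols
    v = (np.arange(n_rows) + 0.5) * pitch_v        # vertical coordinates g_v(x)
    h = (np.arange(n_cols) + 0.5) * pitch_h        # horizontal coordinates g_h(x)
    vv, hh = np.meshgrid(v, h, indexing="ij")
    ones = np.ones_like(vv)
    return np.stack([vv.ravel(), hh.ravel(), ones.ravel()])  # shape (3, N)
```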
Step S22: enhance the dark regions in the gaps between microlens images in the original image (the near-zero-brightness region I in Fig. 4(b)) to establish the initial keypoint set L.
Step S22 uses the initial keypoint set localization method provided by the invention, shown in Fig. 9, and further comprises the following steps:
Step S221: invert the original image (Fig. 9(a)) and apply Gaussian filtering to obtain the image to be calibrated (Fig. 9(b));
Step S222: after the inversion and filtering, the dark calibration regions of the original image (Fig. 4(b)) become white regions whose brightness approaches the maximum (Fig. 9(c)); the initial keypoint set L (Fig. 9(d)) is located in these white regions by finding local extrema.
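Steps S221 and S222 can be sketched directly with standard image filters: invert so the dark gaps become bright, smooth, then keep local maxima. The values of `sigma`, `window`, and `min_response` are tuning assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def initial_keypoints(raw, sigma=1.0, window=5, min_response=0.5):
    """Steps S221-S222 sketch: invert the raw image so the dark gaps between
    microlens images become bright, smooth with a Gaussian, then keep pixels
    that are local maxima above a response threshold."""
    img = np.asarray(raw, dtype=np.float64)
    inverted = img.max() - img                       # dark gaps -> bright
    smoothed = gaussian_filter(inverted, sigma)      # suppress sensor noise
    local_max = maximum_filter(smoothed, size=window)
    keys = (smoothed == local_max) & (smoothed >= min_response * smoothed.max())
    return np.argwhere(keys)                         # (row, col) keypoint set L
```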
Step S23: within the range of the initial keypoint set L, estimate the nearest-neighbor center point set g_n(x) using the prior of the initial center coordinate set g(x);
Step S24: compute the effective mask m(x) of the nearest-neighbor center point set g_n(x);
This step may use RANSAC (RANdom SAmple Consensus) of the prior art to compute the effective mask m(x) of the nearest-neighbor center point set; see Fischler, M. A. and Bolles, R. C., "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography," Communications of the ACM, 24(6): 381-395, 1981. Other methods may of course be used; no restriction is imposed on the method for computing the effective mask.
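A minimal RANSAC in the spirit of Fischler and Bolles can serve as a sketch of step S24. The model choice here, a single global 2-D translation between predicted and detected centers, is an assumption of this sketch; the patent does not fix the model.

```python
import numpy as np

def ransac_mask(predicted, detected, thresh=2.0, iters=100, seed=0):
    """Step S24 sketch: RANSAC inlier mask m(x). Model: one 2-D translation
    mapping predicted grid centers to detected nearest-neighbor centers;
    points whose residual exceeds `thresh` pixels are masked out.
    Both arrays have shape (N, 2)."""
    rng = np.random.default_rng(seed)
    predicted = np.asarray(predicted, float)
    detected = np.asarray(detected, float)
    best_mask = np.zeros(len(predicted), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(predicted))              # minimal sample: 1 pair
        shift = detected[i] - predicted[i]
        resid = np.linalg.norm(detected - (predicted + shift), axis=1)
        mask = resid < thresh
        if mask.sum() > best_mask.sum():              # keep largest consensus
            best_mask = mask
    return best_mask                                  # effective mask m(x)
```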
Step S25: according to the nearest-neighbor center point set g_n(x) and the effective mask m(x), filter out the valid center point set g_m(x) from g_n(x), and use the neighborhood relations between valid center points to compute the horizontal and vertical step values s_h and s_v of the microlens centers. The meaning of s_h and s_v is shown in Fig. 8, where points C, D, E are vertically and horizontally adjacent microlens centers.
The horizontal and vertical step values s_h and s_v are computed as:
s_h = average(|g_m.h(x+1) - g_m.h(x)|),
s_v = average(|g_m.v(x+1) - g_m.v(x)|),
where g_m.h(x) and g_m.v(x) are the horizontal and vertical coordinate components of a point C in the valid center point set g_m(x), and g_m.h(x+1) and g_m.v(x+1) are, respectively, the horizontal coordinate component of its horizontal neighbor D and the vertical coordinate component of its vertical neighbor E.
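The two averages above reduce to a mean of absolute neighbor differences. The sketch below assumes the coordinate lists are ordered so that index x+1 is the relevant horizontal (for s_h) or vertical (for s_v) neighbor.

```python
import numpy as np

def grid_steps(gm_h, gm_v):
    """Step S25 sketch: microlens pitch as the mean absolute difference of
    neighboring valid center coordinates,
    s_h = average(|g_m.h(x+1) - g_m.h(x)|), and likewise for s_v."""
    s_h = np.mean(np.abs(np.diff(np.asarray(gm_h, float))))
    s_v = np.mean(np.abs(np.diff(np.asarray(gm_v, float))))
    return s_h, s_v
```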
Step S26: using the transition matrix t = t(s_v, s_h, o_v, o_h) and the initial center coordinate set g(x), compute the pre-estimated center coordinate set g_t(x) by the formula g_t = g·t;
Step S27: based on the pre-estimated center coordinate set g_t(x) and the nearest-neighbor center point set g_n(x), compute the horizontal and vertical offsets o_h and o_v of the microlens centers as:
o_v = average(g_t.v(x) - g_n.v(x)),
o_h = average(g_t.h(x) - g_n.h(x)),
where g_t.h(x), g_t.v(x) are the horizontal and vertical coordinate components of the pre-estimated center coordinate set g_t(x), and g_n.h(x), g_n.v(x) are the horizontal and vertical coordinate components of the nearest-neighbor center point set g_n(x).
Step S28: using the step values computed in step S25 and the offsets computed in step S27, build a new transition matrix t = t(s_v, s_h, o_v, o_h) and, from the initial center coordinate set g(x), compute the final center coordinate set g_c(x) by the formula g_c = g·t.
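Steps S26 and S28 apply the same kind of transition matrix. The patent does not spell out the form of t, so this sketch assumes a scale-plus-offset affine map acting on the homogeneous grid (and applies it as t @ g rather than the row-vector product g·t; the two conventions are transposes of each other).

```python
import numpy as np

def final_centers(grid, s_v, s_h, o_v, o_h):
    """Steps S26/S28 sketch: apply t(s_v, s_h, o_v, o_h) to the homogeneous
    grid g of shape (3, N) (rows: vertical coord, horizontal coord, 1).
    The scale-plus-offset form of t is an assumption of this sketch."""
    t = np.array([[s_v, 0.0, o_v],
                  [0.0, s_h, o_h],
                  [0.0, 0.0, 1.0]])
    return t @ np.asarray(grid, float)   # rows: g_c.v(x), g_c.h(x), 1
```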
Step S3: resample the original image using the calibration information obtained in step S2 to obtain the sub-aperture image array.
The resampling method in this step may be the same as that of step S5 shown in Fig. 2; of course, other resampling methods may also be used, and no restriction is imposed here.
Step S4: remove vignetting from the edge sub-aperture images in the sub-aperture image array, obtaining a devignetted array with uniform illumination for the subsequent four-dimensional light field decoding.
Fig. 5(a) is a sub-aperture image array with vignetting; its edge images are dark and unevenly lit. Fig. 5(b) is the central sub-aperture image of Fig. 5(a), with even illumination, and Fig. 5(c) an edge sub-aperture image of Fig. 5(a), with uneven and low illumination. Fig. 6 is the sub-aperture image array after vignetting removal; its overall illumination is evenly distributed.
Vignetting is caused by the microlens aperture partially blocking light that has passed through the main-lens aperture; in the image it appears as a gradual darkening from the center toward the edges across the sub-aperture images resampled from the original image. A sub-aperture image comprises two components: an illumination component, which reflects the lighting of the image and in which the vignetting effect appears, and a detail component, which is determined by the scene content and is independent of the lighting. Vignetting can therefore be removed by processing these two components of the sub-aperture images. As shown in Fig. 10, step S4 further comprises the following steps:
Step S41: separate the illumination components A1 and B1 from an edge sub-aperture image A in the sub-aperture image array and the central sub-aperture image B of the array, respectively.
This step may use the WLS (weighted least squares) filter to separate the illumination component; see Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski, "Edge-preserving decompositions for multi-scale tone and detail manipulation," ACM Transactions on Graphics (TOG), volume 27, page 67, ACM, 2008.
Step S42: compute the detail components A2 and B2 of the edge sub-aperture image A and the central sub-aperture image B from the original images and the illumination components A1 and B1 obtained in step S41:
A2 = A - A1,
B2 = B - B1;
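Steps S41 and S42 can be sketched as one decomposition. The patent uses a WLS edge-preserving filter for the illumination separation; for brevity this sketch substitutes a large-sigma Gaussian as the smoother, which is an assumption and not the method of the cited reference.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_illumination(image, sigma=8.0):
    """Steps S41-S42 sketch: split a sub-aperture image into an illumination
    component (low-frequency lighting, where vignetting lives) and a detail
    component. A Gaussian stands in for the WLS filter (an assumption).
    Returns (illumination A1, detail A2) with A = A1 + A2."""
    img = np.asarray(image, dtype=np.float64)
    illum = gaussian_filter(img, sigma)   # smooth lighting estimate A1
    detail = img - illum                  # A2 = A - A1
    return illum, detail
```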
Step S43: use the illumination components A1 and B1 to obtain the illumination-corrected component A1' of the edge sub-aperture image A.
This step may use the guided image filter to obtain the corrected illumination component A1' of the edge sub-aperture image A; see K. He, J. Sun, and X. Tang, "Guided image filtering," in Computer Vision - ECCV 2010, pages 1-14, Springer, 2010. In the guided image filter, the illumination component A1 of the edge sub-aperture image A is the filter input, and the illumination component B1 of the central sub-aperture image B is the guidance image.
Step S44: synthesize the illumination-corrected component A1' obtained in step S43 with the detail component A2 of the edge sub-aperture image A obtained in step S42 to produce the final devignetted edge sub-aperture image A', and thus obtain the devignetted sub-aperture image array.
In this step, the devignetted edge sub-aperture image A' is synthesized as:
A' = A1' + A2.
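Steps S43 and S44 can be sketched with a standard box-filter implementation of the guided image filter of He et al., followed by the synthesis A' = A1' + A2. The radius and eps values are tuning assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Step S43 sketch: guided image filter (He et al., 2010).
    `src` is the edge illumination A1, `guide` the central illumination B1;
    the output is the corrected illumination A1'."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    a = cov_gs / (var_g + eps)            # per-window linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def devignette_edge(a1, a2, b1):
    """Step S44: synthesize the devignetted edge image A' = A1' + A2,
    where A1' = guided_filter(B1, A1)."""
    return guided_filter(np.asarray(b1, float), np.asarray(a1, float)) + a2
```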
Step S5: use the devignetted sub-aperture image array to complete the four-dimensional light field decoding and obtain a parameterized representation of the four-dimensional light field, completing the preprocessing for four-dimensional light field decoding.
The four-dimensional light field decoding method used in this step belongs to the prior art and is not repeated here.
The specific embodiments above further describe the objects, technical solutions, and beneficial effects of the invention. It should be understood that they are merely specific embodiments of the invention and do not limit it; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the invention shall fall within its scope of protection.
Claims (9)
1. A four-dimensional light field decoding preprocessing method based on an original image, characterized in that the method comprises the following steps:
Step S1: collecting the original image of a scene with a light field imaging device;
Step S2: calibrating the original image to obtain its center coordinate set g_c(x);
Step S3: resampling the original image using the calibration information from step S2 to obtain a sub-aperture image array;
Step S4: removing vignetting from the edge sub-aperture images in the sub-aperture image array to obtain a devignetted sub-aperture image array;
Step S5: completing the four-dimensional light field decoding with the devignetted sub-aperture image array to obtain a parameterized representation of the four-dimensional light field.
2. The method according to claim 1, characterized in that step S2 further comprises:
Step S21: establishing the initial coordinate set g(x) of the microlens array centers of the original image from the sensor size of the light field imaging device and the number of microlenses, g(x) = (g_v(x), g_h(x), 1)^T, where x is the microlens index (an integer from 1 to N), N is the number of microlenses, and g_v(x), g_h(x) are the vertical and horizontal coordinates of the microlens array centers; and setting the initial horizontal and vertical components of the center coordinate offset to o_h = 0 and o_v = 0;
Step S22: enhancing the dark regions in the gaps between microlens images in the original image to establish the initial keypoint set L;
Step S23: within the range of the initial keypoint set L, estimating the nearest-neighbor center point set g_n(x) using the prior of the initial center coordinate set g(x);
Step S24: computing the effective mask m(x) of the nearest-neighbor center point set g_n(x);
Step S25: according to g_n(x) and m(x), filtering out the valid center point set g_m(x) from g_n(x), and computing the horizontal and vertical step values s_h and s_v of the microlens centers from the neighborhood relations between valid center points;
Step S26: using the transition matrix t = t(s_v, s_h, o_v, o_h) and the initial center coordinate set g(x), computing the pre-estimated center coordinate set g_t(x) by the formula g_t = g·t;
Step S27: computing the horizontal and vertical offsets o_h and o_v of the microlens centers based on g_t(x) and g_n(x);
Step S28: building a new transition matrix t = t(s_v, s_h, o_v, o_h) from the step values of step S25 and the offsets of step S27, and computing the final center coordinate set g_c(x) from the initial center coordinate set g(x) by the formula g_c = g·t.
3. The method according to claim 2, characterized in that step S22 further comprises:
Step S221: inverting and Gaussian-filtering the original image to obtain the image to be calibrated;
Step S222: locating the initial keypoint set L by finding local extrema in the regions of the image to be calibrated whose brightness approaches the maximum.
4. The method according to claim 2, characterized in that in step S25 the horizontal and vertical step values s_h and s_v are computed as:
s_h = average(|g_m.h(x+1) - g_m.h(x)|),
s_v = average(|g_m.v(x+1) - g_m.v(x)|),
where g_m.h(x), g_m.v(x) are the horizontal and vertical coordinate components of a point C in the valid center point set g_m(x), and g_m.h(x+1), g_m.v(x+1) are, respectively, the horizontal coordinate component of its horizontal neighbor D and the vertical coordinate component of its vertical neighbor E.
5. The method according to claim 2, characterized in that in step S27 the horizontal and vertical offsets o_h and o_v of the microlens centers are computed as:
o_v = average(g_t.v(x) - g_n.v(x)),
o_h = average(g_t.h(x) - g_n.h(x)),
where g_t.h(x), g_t.v(x) are the horizontal and vertical coordinate components of the pre-estimated center coordinate set g_t(x), and g_n.h(x), g_n.v(x) are the horizontal and vertical coordinate components of the nearest-neighbor center point set g_n(x).
6. The method according to claim 1, characterized in that step S4 further comprises:
Step S41: separating the illumination components A1 and B1 from an edge sub-aperture image A in the sub-aperture image array and the central sub-aperture image B of the array, respectively;
Step S42: computing the detail components A2 and B2 of the edge sub-aperture image A and the central sub-aperture image B from the original images and the illumination components A1 and B1 obtained in step S41;
Step S43: obtaining the illumination-corrected component A1' of the edge sub-aperture image A from the illumination components A1 and B1;
Step S44: synthesizing the corrected illumination component A1' from step S43 with the detail component A2 from step S42 to obtain the devignetted edge sub-aperture image A', and thus the devignetted sub-aperture image array.
7. The method according to claim 6, characterized in that in step S42 the detail components A2 and B2 are computed as:
A2 = A - A1,
B2 = B - B1.
8. The method according to claim 6, characterized in that in step S43 a guided image filter is used to obtain the illumination-corrected component A1' of the edge sub-aperture image A, wherein the illumination component A1 of the edge sub-aperture image A is the input of the guided image filter and the illumination component B1 of the central sub-aperture image B is the guidance image.
9. The method according to claim 6, characterized in that in said step S44 the vignetting-free edge sub-aperture image A' is synthesized using the following formula:
A′ = A1′ + A2.
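Claims 6 through 9 together describe a vignetting-removal pipeline: split each sub-aperture view into an illumination and a detail component, calibrate the edge view's illumination against the central (vignetting-free) view with a guided image filter, and recombine. A minimal Python sketch follows; the Gaussian low-pass used to estimate the illumination components in step S41 and the box-filter form of the guided filter are assumptions, since the claims do not fix those implementation details, and `radius`, `eps`, and `sigma` are illustrative parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Box-filter guided filter: smooth `src` so that it follows the
    local linear structure of `guide` (step S43, with B1 as guide)."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    var_I = uniform_filter(guide * guide, size) - mean_I * mean_I
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)       # per-window linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def remove_vignetting(A, B, sigma=15):
    """Steps S41-S44 for one edge view A and the central view B.
    The Gaussian low-pass as illumination estimator is an assumption;
    B2 = B - B1 is analogous to A2 but not needed for the synthesis."""
    A1 = gaussian_filter(A, sigma)   # S41: illumination of edge view
    B1 = gaussian_filter(B, sigma)   # S41: illumination of central view
    A2 = A - A1                      # S42: detail component of A
    A1c = guided_filter(B1, A1)      # S43: calibrated illumination A1'
    return A1c + A2                  # S44: A' = A1' + A2
```

With a constant guide and constant input, the guided filter reproduces the input exactly, which is a convenient sanity check on the implementation.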
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410067394.2A CN103841327B (en) | 2014-02-26 | 2014-02-26 | Four-dimensional light field decoding preprocessing method based on original image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410067394.2A CN103841327B (en) | 2014-02-26 | 2014-02-26 | Four-dimensional light field decoding preprocessing method based on original image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103841327A true CN103841327A (en) | 2014-06-04 |
CN103841327B CN103841327B (en) | 2017-04-26 |
Family
ID=50804424
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410067394.2A Active CN103841327B (en) | 2014-02-26 | 2014-02-26 | Four-dimensional light field decoding preprocessing method based on original image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103841327B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106303228A (en) * | 2016-08-04 | 2017-01-04 | 深圳市未来媒体技术研究院 | Rendering method and system for a focus-type light-field camera |
CN106910224A (en) * | 2017-02-27 | 2017-06-30 | 清华大学 | Image sensor array calibration method in wide visual field high-resolution micro-imaging |
CN111968049A (en) * | 2020-08-06 | 2020-11-20 | 中国科学院光电技术研究所 | Light field image hot pixel point removing method based on side window guide filtering |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101308012A (en) * | 2008-05-29 | 2008-11-19 | 上海交通大学 | Double monocular white light three-dimensional measuring systems calibration method |
US20110298933A1 (en) * | 2010-06-04 | 2011-12-08 | Apple Inc. | Dual processing of raw image data |
2014
- 2014-02-26 CN CN201410067394.2A patent/CN103841327B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101308012A (en) * | 2008-05-29 | 2008-11-19 | 上海交通大学 | Double monocular white light three-dimensional measuring systems calibration method |
US20110298933A1 (en) * | 2010-06-04 | 2011-12-08 | Apple Inc. | Dual processing of raw image data |
Non-Patent Citations (3)
Title |
---|
DONGHYEON CHO ; MINHAENG LEE ; SUNYEONG KIM ; YU-WING TAI: "Modeling the Calibration Pipeline of the Lytro Camera for High Quality Light-Field Image Reconstruction", 《2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION》 * |
TODOR GEORGIEV;ZHAN YU;ANDREW L;SERGIO G: "Lytro camera technology: theory, algorithms, performance analysis", 《HTTP://PROCEEDINGS.SPIEDIGITALLIBRARY.ORG/PROCEEDING.ASPX?ARTICLEID=1662493》 * |
ZHU XIANCHANG: "Research on focal length and consistency measurement techniques of microlens arrays", 《WWW.IOE.AC.CN, Graduate University of the Chinese Academy of Sciences》 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106303228A (en) * | 2016-08-04 | 2017-01-04 | 深圳市未来媒体技术研究院 | Rendering method and system for a focus-type light-field camera |
CN106303228B (en) * | 2016-08-04 | 2019-09-13 | 深圳市未来媒体技术研究院 | Rendering method and system for a focus-type light-field camera |
CN106910224A (en) * | 2017-02-27 | 2017-06-30 | 清华大学 | Image sensor array calibration method in wide visual field high-resolution micro-imaging |
CN106910224B (en) * | 2017-02-27 | 2019-11-22 | 清华大学 | Image sensor array calibration method in wide visual field high-resolution micro-imaging |
CN111968049A (en) * | 2020-08-06 | 2020-11-20 | 中国科学院光电技术研究所 | Light field image hot pixel point removing method based on side window guide filtering |
CN111968049B (en) * | 2020-08-06 | 2022-11-11 | 中国科学院光电技术研究所 | Light field image hot pixel point removing method based on side window guide filtering |
Also Published As
Publication number | Publication date |
---|---|
CN103841327B (en) | 2017-04-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107705333B (en) | Space positioning method and device based on binocular camera | |
CN110276734B (en) | Image distortion correction method and device | |
CN103325112B (en) | Quick detection method for moving targets in dynamic scenes | |
CN107316326B (en) | Edge-based disparity map calculation method and device applied to binocular stereo vision | |
TW200840365A (en) | Motion-blur degraded image restoration method | |
Lyudvichenko et al. | A semiautomatic saliency model and its application to video compression | |
CN107564091A (en) | Three-dimensional reconstruction method and device based on quick corresponding point search | |
CN109064418A (en) | Non-local-means denoising method for images corrupted by non-uniform noise | |
CN111383204A (en) | Video image fusion method, fusion device, panoramic monitoring system and storage medium | |
RU2419880C2 (en) | Method and apparatus for calculating and filtering disparity map based on stereo images | |
CN104794727A (en) | Symmetry based fast calibration method of PSF (Point Spread Function) for single-lens imaging calculation | |
CN108492263A (en) | Camera lens distortion correction method | |
Fan et al. | Multiscale cross-connected dehazing network with scene depth fusion | |
CN116958419A (en) | Binocular stereoscopic vision three-dimensional reconstruction system and method based on wavefront coding | |
Javidnia et al. | Accurate depth map estimation from small motions | |
CN103841327A (en) | Four-dimensional light field decoding preprocessing method based on original image | |
Xiong et al. | An efficient underwater image enhancement model with extensive Beer-Lambert law | |
CN115314635A (en) | Model training method and device for determining defocus amount | |
CN111582036A (en) | Cross-view-angle person identification method based on shape and posture under wearable device | |
CN116580169B (en) | Digital man driving method and device, electronic equipment and storage medium | |
CN103873773B (en) | Primary-auxiliary synergy double light path design-based omnidirectional imaging method | |
TWI805282B (en) | Methods and apparatuses of depth estimation from focus information | |
CN108665448B (en) | Obstacle detection method based on binocular vision | |
CN113888614B (en) | Depth recovery method, electronic device, and computer-readable storage medium | |
CN109544611B (en) | Binocular vision stereo matching method and system based on bit characteristics |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |