CN101426085A - Imaging arrangements and methods therefor - Google Patents

Imaging arrangements and methods therefor

Info

Publication number
CN101426085A
Authority
CN
China
Prior art keywords
light, image, example embodiment, frame, main lens
Legal status
Granted
Application number
CNA2008101691410A
Other languages
Chinese (zh)
Other versions
CN101426085B (en)
Inventor
Yi-Ren Ng
Patrick M. Hanrahan
Marc S. Levoy
Mark A. Horowitz
Current Assignee
Leland Stanford Junior University
Original Assignee
Leland Stanford Junior University
Application filed by Leland Stanford Junior University
Publication of CN101426085A
Application granted
Publication of CN101426085B
Status: Expired - Fee Related

Landscapes

  • Studio Devices (AREA)
  • Image Input (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

Image data is processed to facilitate focusing and/or optical correction. According to an example embodiment of the present invention, an imaging arrangement collects light data corresponding to light passing through a particular focal plane. The light data is collected using an approach that facilitates determination of the direction from which various portions of the light incident upon a portion of the focal plane emanate. Using this directional information together with the values of the light detected by photosensors, an image represented by the light is selectively focused and/or corrected.

Description

Imaging arrangements and methods therefor
The present application is a divisional of Chinese patent application No. 200580039822.X, entitled "Imaging arrangements and methods therefor," which is the Chinese national-stage entry of international application No. PCT/US2005/035189, filed September 30, 2005.
Related patent documents
This patent document claims priority, under 35 U.S.C. § 119(e), to U.S. Provisional Patent Application Ser. No. 60/615,179 filed on October 1, 2004, and to U.S. Provisional Patent Application Ser. No. 60/647,492 filed on January 27, 2005, both of which are incorporated herein by reference in their entirety.
Technical field
The present invention relates generally to imaging applications, and more particularly to processing image data to focus and/or correct images.
Background
Imaging applications such as those involving cameras, video cameras, microscopes, and telescopes are generally limited in the amount of information they collect about the light they gather. That is, most imaging devices do not record most of the information about the distribution of light entering the device. For example, conventional cameras such as digital still cameras and video cameras do not record most of the information about the light distribution entering from the outside world. In these devices, the collected light typically cannot be processed after capture in a variety of ways, such as focusing at different depths (distances from the imaging device), correcting lens aberrations, or manipulating the viewing angle.
For still-imaging applications, a typical imaging device capturing a particular scene generally focuses on a target or object in the scene, with other parts of the scene falling outside the focus. Similar issues exist for video-imaging applications, where the image acquisition used to capture scenes in video applications is likewise limited to a particular focus.
Many imaging applications suffer from aberrations of the devices (lenses) used to collect light. Such aberrations may include, for example, spherical aberration, chromatic aberration, distortion, curvature of field, oblique astigmatism, and coma. Correcting for aberrations typically involves the use of corrective optics, which tend to add bulk, expense, and weight to the imaging device. In some applications benefiting from small-scale optics, such as camera phones and security cameras, the physical limitations associated with the application make the inclusion of additional optics undesirable.
The difficulties discussed above have presented challenges to imaging applications, including those involving the acquisition and alteration of digital images.
Summary of the invention
The present invention is directed to overcoming the above-mentioned challenges and others related to imaging devices and their implementations. The present invention is exemplified in a number of implementations and applications, some of which are summarized below.
According to an example embodiment of the present invention, light is detected together with directional information characterizing the detected light. The directional information is used with the values of the detected light to generate a virtual image corresponding to a refocused image and/or a corrected image.
According to another example embodiment of the present invention, a scene containing two or more targets at different focal depths is imaged, with the portions of the scene corresponding to the respective targets imaged at different focal planes. Light from the scene is focused onto a physical focal plane and detected, together with information characterizing the direction from which the light arrives at particular locations on the physical focal plane. For at least one target located at a depth of field that is not focused at the physical focal plane, a virtual focal plane that is different from the physical focal plane is determined. Using the detected light and its directional characteristics, portions of the light corresponding to a focused image of the at least one target at the virtual focal plane are collected and added together to form a virtual focused image of that target.
According to yet another example embodiment of the present invention, a scene is digitally imaged. Light from the scene that is transferred to different locations on a focal plane is detected, and the angle of incidence of the light detected at those locations is determined. The depth of field of the portion of the scene from which the detected light originated is determined and used, together with the determined angles of incidence, to digitally re-sort the detected light. Depending on the application, the re-sorting involves refocusing and/or correcting for lens aberrations.
The above summary of the present invention is not intended to describe each embodiment or every implementation of the present invention. The figures and the detailed description that follow more particularly exemplify these embodiments.
Description of drawings
A more complete understanding of the present invention may be gained from the following detailed description of various embodiments, considered in connection with the accompanying drawings, in which:
FIG. 1 shows a light capturing and processing arrangement, according to an example embodiment of the present invention;
FIG. 2 shows an optical imaging arrangement, according to another example embodiment of the present invention;
FIG. 3 is a flow diagram for image processing, according to another example embodiment of the present invention;
FIG. 4 is a flow diagram for generating a preview image, according to another example embodiment of the present invention;
FIG. 5 is a flow diagram for processing and compressing image data, according to another example embodiment of the present invention;
FIG. 6 is a flow diagram for image synthesis, according to another example embodiment of the present invention;
FIG. 7 is a flow diagram for image refocusing, according to another example embodiment of the present invention;
FIG. 8 is a flow diagram for extending the depth of field in an image, according to another example embodiment of the present invention;
FIG. 9 is a flow diagram for another approach to extending the depth of field in an image, according to another example embodiment of the present invention;
FIG. 10 illustrates an example approach to separating light rays, according to another example embodiment of the present invention;
FIG. 11 illustrates an approach to mapping sensor pixel locations to rays in the L(u, v, s, t) space relative to captured image data, according to another example embodiment of the present invention;
FIG. 12 shows several images refocused at different depths, according to another example embodiment of the present invention;
FIG. 13A illustrates a 2D imaging configuration, according to another example embodiment of the present invention;
FIG. 13B illustrates the summation of a cone of rays from a 3D point for a single pixel, according to another example embodiment of the present invention;
FIGS. 14A-14C illustrate an approach to computing images with different depths of field, according to another example embodiment of the present invention;
FIG. 15 illustrates an approach to tracing rays from a 3D point on a virtual film plane, according to another example embodiment of the present invention;
FIG. 16 illustrates an approach to obtaining the values of rays, according to another example embodiment of the present invention;
FIG. 17A shows an ideal 512 x 512 photograph, in connection with another example embodiment of the present invention;
FIG. 17B shows an image produced with an f/2 biconvex spherical lens, in connection with another example embodiment of the present invention;
FIG. 17C shows an image computed using an image-correction approach, according to another example embodiment of the present invention;
FIGS. 18A-18C illustrate the tracing of rays in a color imaging system, according to another example embodiment of the present invention;
FIGS. 19A-19F illustrate an approach to implementing a mosaic array, according to another example embodiment of the present invention;
FIG. 20 is a flow diagram of a computational approach for refocusing in the Fourier domain, according to another example embodiment of the present invention;
FIG. 21A illustrates a triangle filtering approach, according to another example embodiment of the present invention;
FIG. 21B illustrates the Fourier transform of the triangle filtering approach, according to another example embodiment of the present invention;
FIG. 22 is a flow diagram of an approach to refocusing in the frequency domain, according to another example embodiment of the present invention;
FIG. 23 illustrates a set of rays passing through a desired focal point, according to another example embodiment of the present invention;
FIGS. 24A-24B illustrate different views of a portion of a microlens array, according to another example embodiment of the present invention;
FIG. 24C illustrates an image appearing on a photosensor, in connection with another example embodiment of the present invention;
FIG. 25 illustrates an example embodiment of the present invention in which a virtual image is computed as it would appear on virtual film;
FIG. 26 illustrates an approach to manipulating a virtual lens plane, according to another example embodiment of the present invention;
FIG. 27 illustrates that virtual film may take any shape, according to another example embodiment of the present invention;
FIG. 28 shows an imaging arrangement, according to another example embodiment of the present invention;
FIG. 29 is a flow diagram for precomputing a database of weights associating each output image pixel with each photosensor value, according to another example embodiment of the present invention;
FIG. 30 is a flow diagram for computing an output image using the database of weights, according to another example embodiment of the present invention;
FIGS. 31A-31D illustrate various scalar functions selectively implemented as aperture functions, in connection with another example embodiment of the present invention;
FIG. 32 illustrates an aperture function that varies pixel by pixel, according to another example embodiment of the present invention;
FIG. 33 is a flow diagram for a user interactively selecting a region of an output image, editing the image portion, and saving the output image, according to another example embodiment of the present invention;
FIG. 34 is a flow diagram for extending the depth of field in an image, according to another example embodiment of the present invention; and
FIG. 35 is a flow diagram for computing a refocused image from received photosensor data, according to another example embodiment of the present invention.
While the invention is amenable to various modifications and alternative forms, specifics thereof are shown by way of example in the drawings and will be described in detail. It should be understood, however, that the intention is not to limit the invention to the particular embodiments described. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention.
Detailed description
The present invention is believed to be useful for a variety of different types of devices, and has been found to be particularly suited to electronic imaging devices and applications. While the present invention is not necessarily limited to such applications, various aspects of the invention may be appreciated through a discussion of various examples in this context.
According to an example embodiment of the present invention, a four-dimensional (4D) light field (e.g., the light traveling along each ray in a region such as free space) is detected using an approach involving the determination of the amount and direction of light arriving at a sensor located at a focal plane. The two-dimensional position of light at the focal plane is detected, together with information characterizing the direction from which the light arrives at particular locations on the plane. With this approach, the directional distribution of light arriving at different locations on the sensor is determined and used to form an image. In various discussions herein, the component or components implemented for sensing and/or measuring the light field are referred to as a "light ray sensor" or simply a "ray sensor."
In one application, an approach similar to that above is implemented using an imaging system having optics and sensors that sample the space of light rays incident upon an imaging plane, together with computational functionality that renders images from the set of measured rays in different ways. Depending on the implementation, the optics, sensors, and computational functionality are implemented in combination or separately. For example, a camera having a main lens (optics) that focuses an image onto a photosensor array (sensors) located at the imaging plane can be used to sample the ray space. Output from the photosensor array is used with the computational functionality (e.g., on a processor internal and/or external to the camera) to render images, such as by computing photographs that are focused at different depths or that have different depths of field, and/or by computationally correcting lens aberrations to produce higher-quality images.
In another example embodiment, the optics and sensor components of an imaging system direct light onto sensor elements such that each sensor element senses a set of rays, including rays emanating from specific directions. In many applications, this set of rays is a bundle of rays that is localized in both space and direction. For many applications, this bundle converges toward a single geometric ray of light as the optics and sensor resolutions increase. Accordingly, various portions of the description herein refer to the values sensed by the sensor elements simply as "light rays" or "rays," even though they are generally not limited to geometric rays.
Turning now to the figures, FIG. 28 shows an imaging arrangement 2890, according to another example embodiment of the present invention. The imaging arrangement 2890 includes a main lens 2810 and a light ray sensor that measures the values of rays arriving at different locations on the sensor from different incident directions. In this context, measuring the value of a ray may be implemented by detecting the light arriving at different locations on the sensor and generating numerical values for characteristics of that light, such as its intensity.
FIG. 1 shows an imaging system 100, according to another embodiment of the present invention. The imaging system 100 includes an imaging arrangement 190 having a main lens 110, a microlens array 120, and a photosensor array 130. In this case, the microlens array 120 and the photosensor array 130 implement a light ray sensor. Although FIG. 1 shows a particular main lens 110 (a single element) and a particular microlens array 120, those skilled in the art will recognize that a variety of lenses and/or microlens arrays (currently available or to be developed) can be selectively implemented in a similar manner, for example by replacing the main lens and/or microlens array shown.
Rays of light from a single point on an object 105 in an imaged scene may arrive at a single convergence point on the focal plane of the microlens array 120. For example, this occurs when the imaged point on the object 105 is located at a distance "d" from the main lens that is optically conjugate to the separation "s" between the main lens and the microlens array, as shown in the figure. The microlens 122 at this convergence point separates these rays of light according to their direction, creating a focused image of the aperture of the main lens 110 on the photosensors underneath the microlens.
The photosensor array 130 detects light incident upon it and generates an output that is processed by one or more of a variety of components. In this application, the output light data is passed to sensor data processing circuitry 140, which uses the data, together with positional information about each photosensor providing the data, to generate an image of the scene (e.g., including the objects 105, 106, and 107). The sensor data processing circuitry 140 is implemented, for example, with a computer or other processing circuitry realized in a common component (e.g., a chip) or in different components. In one implementation, a portion of the sensor data processing circuitry 140 is implemented in the imaging arrangement 190, with another portion implemented in an external computer. Using the detected light (e.g., characteristics of the detected light) together with the known incident directions of the light arriving at the microlens array (computed using the known location of each photosensor), the sensor data processing circuitry 140 selectively refocuses and/or corrects the light data in forming an image. Various approaches to processing detected light data are described in detail below, with and without reference to other figures. These approaches may be selectively implemented in a manner consistent with the sensor data processing circuitry 140 described above.
The different portions of the imaging system 100 are selectively implemented in common or separate physical arrangements, depending on the particular application. For example, for a variety of applications, the microlens array 120 and the photosensor array 130 are combined into a common arrangement 180. In some applications, the microlens array 120 and the photosensor array 130 are coupled together on a common chip or other circuit arrangement. When implemented with a hand-held device such as a camera-like device, the main lens 110, microlens array 120, and photosensor array 130 are selectively combined into an integrated imaging arrangement 190 within the hand-held device. Furthermore, certain applications involve implementing some or all of the sensor data processing circuitry 140 in a common circuit arrangement with the photosensor array 130 (e.g., on a common chip).
In some applications, the imaging system 100 includes a preview arrangement 150 for presenting a preview image to a user capturing an image. The preview arrangement is communicatively coupled to receive image data from the photosensor array 130. A preview processor 160 processes this image data to generate a preview image that is displayed on a preview screen 170. In some applications, the preview processor 160 is implemented on a common chip and/or in common circuitry with the image sensor 180. In applications in which the sensor data processing circuitry 140 is implemented with the photosensor array 130 as described above, the preview processor 160 is selectively implemented with the sensor data processing circuitry 140, with some or all of the image data collected by the photosensor array 130 used to generate the preview image.
A preview image can be generated using relatively less computational function and/or less data than is used to generate a final image. For example, when implemented with a hand-held imaging device such as a camera or mobile telephone, a preview image that involves no refocusing or lens correction may be sufficient. As such, it may be desirable to implement relatively inexpensive and/or small processing circuitry to generate the preview image. In these applications, the preview processor generates an image at relatively low computational cost and/or using less data, for example by using the first extended-depth-of-field computational approach described herein.
Depending on the application, the imaging system 100 may be implemented in a variety of ways. For example, although the microlens array 120 is shown with several distinguishable microlenses by way of example, such an array is generally implemented with a multitude (e.g., thousands or millions) of microlenses. The photosensor array 130 generally has a relatively finer pitch than the microlens array 120, with several photosensors for each microlens in the microlens array 120. In addition, the microlenses in the microlens array 120 and the photosensors in the photosensor array 130 are generally positioned such that light propagating via each microlens to the photosensor array does not overlap with light propagating via adjacent microlenses.
In various applications, the main lens 110 is translated along its optical axis (as shown in FIG. 1, in a horizontal direction) to focus on a subject of interest at a desired depth "d" between the main lens and an example imaging subject 105. By way of example, rays of light from a single point on the subject 105 are shown for purposes of discussion. These rays are brought to a single convergence point at microlens 122 on the focal plane of the microlens array 120. The microlens 122 separates these rays based on direction, creating a focused image of the aperture of the main lens 110 on the set of pixels 132 in the array of pixels underneath the microlens. FIG. 10 illustrates an example approach to separating light rays such that all rays emanating from a point on a main lens 1010 and arriving anywhere on the surface of the same microlens (e.g., 1022) are directed by that microlens to converge at the same point on a photosensor (e.g., 1023). The approach shown in FIG. 10 may, for example, be implemented in connection with FIG. 1 (i.e., with the main lens 1010 implementing the main lens 110, the microlens array 1020 implementing the microlens array 120, and the photosensor array 1030 implementing the photosensor array 130).
The image formed under a particular microlens in the microlens array 120 dictates the directional resolution of the system for that location on the imaging plane. In some applications, directional resolution is enhanced by sharpening the microlens images, facilitated by focusing the microlenses on the principal plane of the main lens. In some applications, the microlenses are at least two orders of magnitude smaller than the separation between the microlens array and the main lens 110. In these applications, the main lens 110 is effectively located at the microlenses' optical infinity; to focus the microlenses, the photosensor array 130 is located in a plane at the microlenses' focal depth.
The separation "s" between the main lens 110 and the microlens array 120 is selected to achieve a sharp image within the microlens depth of field. In many applications, this separation is accurate to within about Δx_p·(f_m/Δx_m), where Δx_p is the width of a sensor pixel, f_m is the focal depth of the microlenses, and Δx_m is the width of the microlenses. In one particular application, Δx_p is about 9 microns, f_m is about 500 microns, and Δx_m is about 125 microns, with the separation between the microlens array 120 and the photosensor array 130 accurate to about 36 microns.
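As a quick check, plugging the quoted values into this tolerance expression reproduces the stated figure:
$$\Delta x_p \cdot \frac{f_m}{\Delta x_m} = 9\,\mu\mathrm{m} \times \frac{500\,\mu\mathrm{m}}{125\,\mu\mathrm{m}} = 36\,\mu\mathrm{m}$$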
The microlens array 120 is implemented using one or more of a variety of microlenses and arrangements thereof. In one example embodiment, a plane of microlenses with potentially spatially varying properties is implemented as the microlens array 120. For example, the microlens array may include lenses that are homogeneous and/or inhomogeneous, square or non-square in extent, regularly or irregularly distributed, and arranged in a pattern that may be repeating or non-repeating, with portions that are optionally masked. The microlenses themselves may be convex, non-convex, or have an arbitrary profile to effect a desired physical direction of light, and the profile may vary from microlens to microlens across the plane. Different distributions and lens profiles may be selectively combined. These various embodiments provide sampling patterns that are higher spatially (and correspondingly lower angularly) in some regions of the array, and higher angularly (and correspondingly lower spatially) in other regions. One use of such data is to facilitate interpolation that matches a desired spatial and angular resolution in the 4D space.
FIG. 24A illustrates a view of a portion of a microlens array (with the line of sight perpendicular to its plane), according to another example embodiment of the present invention. The microlenses are square and regularly distributed in the array.
FIG. 24B illustrates a view of a portion of a microlens array, according to another example embodiment of the present invention, in which the distribution of microlenses on the plane is irregular or non-repeating and the microlenses are of arbitrary shape.
FIG. 24C illustrates the image appearing on the photosensor, in connection with another example embodiment of the present invention, using microlenses with convex profiles distributed as shown in FIG. 24A and a main lens having a circular aperture.
In other example embodiments, a regular mosaic of larger and smaller microlenses is used. In one implementation, the resulting photosensor data is interpolated to provide a uniform sampling at the maximum spatial and angular resolutions of the microlenses in the mosaic.
FIGS. 19A-19F illustrate an approach to implementing a mosaic array such as that described above, in connection with one or more example embodiments of the present invention. FIG. 19A is a top-down view showing example relative sizes and an arrangement of a number of microlenses. FIG. 19B is a view of the image shapes formed on the photosensor array after projection through each microlens in FIG. 19A. FIG. 19C is a cross-sectional view of the array in FIG. 19A, showing that the microlenses have the same f-number and that their focal points lie in the same plane. This requires that the smaller microlenses be placed closer to the focal plane than the larger ones. As a result, the main-lens images appearing under the respective microlenses abut without overlapping, and are focused on the photosensor placed in the plane containing the focal points.
FIG. 19D illustrates a cross-sectional view of the mosaic microlens arrangement shown in FIGS. 19A and 19C, implemented in a complete imaging arrangement including a main lens 1910, a microlens array 1920, and a photosensor arrangement 1930, according to another example embodiment of the present invention. Note that although the figure shows several microlenses with a number of pixels under each, the actual numbers of microlenses and pixels may be chosen in different ways, such as by determining the resolution requirements of the given application and choosing appropriate numbers for each.
FIG. 19E is a Cartesian ray-space diagram for rays originating at u on the main lens 1910 and terminating at s on the microlens array 1920 (although the ray space is 4D, it is illustrated in 2D for clarity). The sets of rays integrated by the photosensors (labeled A-P) shown in FIG. 19D are illustrated as Cartesian boxes in FIG. 19E. In the full 4D ray space, each photosensor integrates a 4D box of rays. Compared to the boxes for photosensors under the smaller microlenses, the 4D boxes for photosensors under the larger microlenses have half the width (twice the resolution) along the directional (u, v) axes and twice the width (half the resolution) along the spatial (x, y) axes.
In another example embodiment, the photosensor values are interpolated onto a regular grid so that the resolution in every dimension matches the maximum resolution attained in any of them. FIG. 19F illustrates this approach, in which the boxes representing the photosensor values are subdivided and values are interpolated from neighboring boxes. In the 2D ray space shown, each box is divided in two; in the 4D space, each box would be divided in four (divided in two along each of its two longer sides). In some embodiments, the interpolated values are computed by analyzing neighboring values. In another embodiment, the interpolation is implemented as a weighted sum of the values of the original, undivided boxes in a neighborhood of the desired value.
In some applications, the weighting is implemented in a manner that depends on a decision function based on the values in the neighborhood. For example, the weighting may interpolate along the axis of the 4D function space that is least likely to contain an edge. The likelihood of an edge near a value can be estimated from the magnitude of the gradient of the function values at those locations, and from estimates of the Laplacian of the function.
In another example embodiment, each microlens (e.g., in the array 1920 in FIG. 19D or similar) is tilted inwards so that its optical axis is centered on the aperture of the main lens. This approach reduces aberrations in the images formed under the microlenses toward the edges of the array.
Referring again to FIG. 1, the aperture sizes of the main lens 110 and of the microlenses in the microlens array 120 (e.g., the effective sizes of the openings in the lenses) are also selected to suit the particular application in which the imaging arrangement 100 is implemented. In many applications, the relative aperture sizes are selected so that the collected images are as large as possible without overlapping (i.e., without light overlapping onto adjacent photosensors). This approach is facilitated by matching the f-numbers (focal ratios, i.e., the ratios of aperture to effective focal length) of the main lens and the microlenses. In this case, the effective f-number of the main lens 110 is the ratio of the separation "s" between the main lens 110 and the microlens array 120 to the diameter of the main lens aperture. In applications in which the principal plane of the main lens 110 is translated relative to the plane at which the microlens array 120 is located, the aperture of the main lens is optionally changed to maintain this ratio and, thereby, the size of the images formed under each microlens in the microlens array. In some applications, different main-lens aperture shapes, such as a square aperture, are used to achieve a desired (e.g., efficient) packing of the array of images on the photosensor surface under the microlens array.
The following discussion, relating to one or more example embodiments of the present invention, applies generally to imaging arrangements such as the imaging arrangement 100 of FIG. 1. Consider the two-plane light field "L" inside the imaging arrangement 100, where L(u, v, s, t) denotes the light traveling along the ray that intersects the main lens 110 at (u, v) and the plane of the microlens array 120 at (s, t). Assuming ideal microlenses in the microlens array 120 and ideal photosensors (e.g., pixels) aligned to a grid in the photosensor array 130, all the light that arrives at a photosensor also passes through its square parent microlens in the microlens array 120 and through the photosensor's conjugate square on the main lens 110. These two square regions, on the main lens 110 and on the microlens, specify a small four-dimensional box in the light field, and the photosensor measures the total amount of light in the set of rays represented by this box. Accordingly, each photosensor detects such a four-dimensional box in the light field, and the light field detected by the photosensor array 130 is a box-filtered, rectilinear sampling of L(u, v, s, t).
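Stated as a formula (a restatement of the preceding paragraph, with E denoting a photosensor value and the integration taken over the 4D box just described):
$$E = \iiiint_{\mathrm{box}} L(u, v, s, t)\, du\, dv\, ds\, dt$$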
FIG. 11 illustrates an approach to mapping sensor pixel locations to rays in the L(u, v, s, t) space relative to captured image data, according to another example embodiment of the present invention. The approach shown in FIG. 11 may, for example, be used in connection with FIG. 1 and the discussion herein, with each photosensor in the photosensor array 130 corresponding to a sensor pixel. The image 1170 in the lower right corner is a downsampled version of the raw data read from the ray sensor (photosensor array) 1130, and contains a circled image 1150 formed under one circular microlens. The image 1180 in the lower left corner is a magnified view of the circled portion of the raw data, showing the microlens image 1150 with one of its photosensor values 1140 circled within the microlens. Since this circular image 1150 is an image of the lens aperture, the location of the chosen pixel within the disk provides the (u, v) coordinates of the origin of the ray 1110 on the main lens. The location of the microlens image 1150 provides the (x, y) coordinates of the ray 1120 in the sensor image 1170.
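To make this mapping concrete, the following minimal sketch (Python/NumPy) turns a raw sensor image into a 4D array indexed as L[u, v, s, t]. It assumes a hypothetical ideal layout, square microlens images of exactly n_u x n_v pixels aligned to the pixel grid, which the text does not require:

```python
import numpy as np

def raw_to_lightfield(raw, n_u, n_v):
    """Reshape a raw photosensor image into a 4D light field L[u, v, s, t]."""
    n_t = raw.shape[0] // n_v   # number of microlenses vertically
    n_s = raw.shape[1] // n_u   # number of microlenses horizontally
    # Row index = t * n_v + v, column index = s * n_u + u:
    lf = raw.reshape(n_t, n_v, n_s, n_u)   # axes (t, v, s, u)
    return lf.transpose(3, 1, 2, 0)        # axes (u, v, s, t)
```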
Although the mapping of sensor elements to rays is discussed with respect to this figure (and other example embodiments), the value associated with each sensor element may be selectively represented as the value of the set of light rays that the optics transmit to that particular sensor element. Thus, in the context of FIG. 1, each photosensor in the photosensor array can be implemented so as to provide a value representing the set of light rays transmitted via the main lens 110 and the microlens array 120 to that photosensor. That is, each photosensor generates an output in response to the light incident upon it, and the position of each photosensor relative to the microlens array 120 is used to provide directional information about the incident light.
In one example embodiment, the resolution of the microlens array 120 is selected to match the desired resolution of the final images in the particular application. The resolution of the photosensor array 130 is selected so that each microlens covers as many photosensors as required to match the desired directional resolution of the application, or the finest photosensor resolution that can be attained. In this manner, the resolution of the imaging system 100 (and of the other systems discussed herein) is selectively tailored to the particular application, considering factors such as the type of imaging, cost, complexity, and the available equipment for reaching a specified resolution.
Once image data has been captured via the optics and sensors (e.g., using the imaging arrangement 190 in FIG. 1), a variety of computational functions and arrangements can be implemented to selectively process the image data. In one example embodiment of the present invention, distinct sets of photosensors capture the light arriving from each microlens separately, and information about the captured light is passed to a computational component such as a processor. Images of the scene are computed from the measured sets of rays.
In the context of FIG. 1, the sensor data processing circuitry 140 is implemented to process the image data and compute an image of the scene including the objects 105, 106, and 107. In some applications, the preview arrangement 150 is also implemented with the preview processor 160 to produce a preview image that is displayed on the preview screen 170. The preview processor 160 is selectively implemented with the sensor data processing circuitry 140, with the preview image generated, for example, in a manner consistent with the approaches discussed below.
In another embodiment, for each pixel of the image output from the sensor arrangement, the computational component weights and sums a subset of the measured rays. In addition, the computational component may analyze and combine a set of images computed in the manner described above, for example using image-compositing approaches. While the present invention is not necessarily limited to such applications, various aspects of the invention may be appreciated through a discussion of some specific example embodiments of this computational component.
In connection with various example embodiments, processing the image data involves refocusing at least a portion of the captured image. In some embodiments, output images are generated as photographs focused on desired elements of a particular scene. In some embodiments, the computed photograph is focused at a certain desired depth in the world (scene), with defocus blur increasing away from the desired depth, just as in a conventional photograph. Different focal depths may be selected to bring different targets in the scene into focus.
FIG. 12 shows several images 1200-1240 computed from a single measured light field and refocused at different depths, according to another example embodiment of the present invention. The approach shown in FIG. 12 may be implemented, for example, using an imaging arrangement such as that shown in FIG. 1.
FIGS. 13A and 13B illustrate a refocusing approach according to another example embodiment of the present invention. This approach may be implemented, for example, by a computational/processing component of an imaging system, such as the sensor data processing circuitry 140 in FIG. 1. Each output pixel of the imaging arrangement (e.g., 1301) corresponds to a three-dimensional (3D) point (e.g., 1302) on a virtual film plane 1310. The virtual film plane 1310 is located behind the main lens 1330, with the plane 1310 optically conjugate to the desired focal plane in the world (not shown). That is, the virtual film plane 1310 is located at the position where a film plane would desirably be placed to capture a simple two-dimensional (2D) image (e.g., this position may be compared to the position at which a conventional camera holding photographic film would be located to capture a 2D image). By separating rays by direction (e.g., using the microlens array 120 of FIG. 1), the light arriving at the virtual film plane 1310 can be selectively computed. Thus, the value of the output pixel 1301 can be computed by summing the cone of rays 1320 converging on the corresponding 3D point 1302. The values of these rays are derived from the data collected by the ray sensor 1350. For ease of viewing, FIG. 13A shows the imaging configuration in 2D. In FIG. 13B, with a chosen world focal depth closer to the main lens, the cone of rays 1330 from the 3D point 1340 is summed for the same pixel 1301.
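In the two-plane parameterization introduced above, this summation is conventionally written as the following imaging integral (standard light-field notation rather than this text's; here F is the main-lens-to-microlens separation, αF the depth of the virtual film plane, and (s', t') a point on that plane):
$$E_{\alpha F}(s', t') = \frac{1}{\alpha^{2} F^{2}} \iint L\!\left(u,\, v,\; u + \frac{s' - u}{\alpha},\; v + \frac{t' - v}{\alpha}\right) du\, dv$$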
In some embodiments, the desired ray values do not correspond exactly to the discrete sample locations captured by the ray sensor. In such embodiments, the ray values may be estimated as a weighted sum of selected nearby sample locations. In some implementations, this weighting corresponds to a four-dimensional filter kernel that reconstructs a continuous four-dimensional light field from the discrete sensor samples. In some implementations, this four-dimensional filter is implemented as a four-dimensional tent function, corresponding to quadrilinear interpolation of the 16 nearest samples in the space.
FIG. 35 is a flow diagram for computing a refocused image from received photosensor data, according to another example embodiment of the present invention. At block 3520, a set of sub-aperture images is extracted from the photosensor data 3510, where each sub-aperture image consists of a single pixel taken from under each microlens image, with the pixel at the same relative position under its microlens. At block 3530, the set of sub-aperture images is combined to produce the final output image. The sub-aperture images may optionally be translated relative to one another and composited so as to focus on a desired plane.
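This block diagram corresponds to the familiar shift-and-add refocusing scheme. A minimal sketch follows (Python/NumPy/SciPy), with the light field laid out as in the raw_to_lightfield sketch above; the function names and the alpha parameterization are illustrative conventions, not the patent's notation:

```python
import numpy as np
from scipy.ndimage import shift as translate

def refocus(lf, alpha):
    """Shift-and-add refocusing from sub-aperture images (a sketch).

    alpha = 1 leaves the focus at the captured plane; other values refocus
    nearer or farther. The small per-image magnification change implied by
    exact refocusing is ignored, as is common in simple shift-and-add code.
    """
    n_u, n_v = lf.shape[:2]
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros(lf.shape[2:])
    for u in range(n_u):
        for v in range(n_v):
            sub = lf[u, v]  # sub-aperture image: one pixel per microlens
            # Translate each sub-aperture image in proportion to its offset
            # from the aperture center, then accumulate.
            ds = (u - cu) * (1.0 - 1.0 / alpha)
            dt = (v - cv) * (1.0 - 1.0 / alpha)
            out += translate(sub, (ds, dt), order=1, mode='nearest')
    return out / (n_u * n_v)
```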
In another example embodiment, the darkening associated with pixels near the edges of the output image is ameliorated. For pixels close to the edge of the output image, some of the required rays may not have been captured in the measured light field (they may fall beyond the spatial or directional bounds of the imaging arrangement, such as the microlens array 120 and photosensor array 130 of FIG. 1). In applications where this darkening is undesirable, the pixel values are normalized by dividing the value associated with each pixel (e.g., as captured by the photosensor array) by the fraction of the required rays that are actually found in the measured light field.
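A one-function sketch of that normalization (both array names are hypothetical; found_fraction holds, per pixel, the fraction of desired rays actually present):

```python
import numpy as np

def normalize_edge_darkening(image, found_fraction):
    """Divide each pixel by the fraction of its rays found in the light field."""
    out = np.zeros_like(image, dtype=float)
    np.divide(image, found_fraction, out=out, where=found_fraction > 0)
    return out
```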
As noted above, different computational approaches are selected for different applications. Several such approaches are set forth below. In some instances reference is made to the figures, while in others the approaches are described generally. In each of these instances, the particular approach may be implemented using a computational component such as the sensor data processing circuitry 140 shown in FIG. 1.
In another example embodiment, the image formation for each output pixel of a particular imaging arrangement corresponds to a virtual camera model, in which a virtual film is arbitrarily and/or selectively rotated or deformed, and a virtual main-lens aperture correspondingly moves and changes its size as desired. As an example, FIG. 25 illustrates an example embodiment in which the virtual image is computed as it would appear on virtual film 2560 if it were allowed to form behind a virtual lens aperture 2520 of arbitrary size on a virtual lens plane 2530 that does not coincide with the physical main lens plane 2510. The pixel value corresponding to a point 2550 on the virtual film 2560 is computed by summing the rays that pass through the virtual aperture 2520 and converge on the point 2550, each ray being identified by its intersection point and incident direction on the ray sensor 2570.
FIG. 26 illustrates an approach to manipulating a virtual lens plane, according to another example embodiment of the present invention. The virtual lens plane 2630 and/or the virtual film plane 2660 are selectively tilted relative to the physical main lens or another reference. Images computed using this approach have a resulting world focal plane that is not parallel to the imaging plane.
In another example embodiment, as illustrated in FIG. 27, the virtual film 2560 need not be planar and may take any shape.
Various approaches involve the selective implementation of different apertures. In one example embodiment, the aperture on the virtual lens plane is an ordinary circular opening, while in other example embodiments the aperture is generally non-circular and/or implemented as multiple distinct regions of arbitrary shape. In these and other embodiments, the notion of an "aperture" may be generalized and, in some applications, corresponds to processing the light data in a manner corresponding to the light that would be received through a selected "virtual" aperture.
In various embodiments, the aperture approach is implemented via a predetermined but arbitrary function on the virtual lens plane. FIGS. 31A-31D illustrate different scalar functions selectively implemented as aperture functions, in conjunction with one or more example embodiments. These functions include, for example, smoothly varying values (as illustrated in FIG. 31B), multiple distinct regions (as illustrated in FIG. 31A), and negative values (as illustrated in FIG. 31D). To compute the value of a point on the virtual film, all of the rays that pass through different points on the virtual lens and converge on that point on the virtual film are weighted by the aperture function and summed. In various other embodiments, the resulting value is computed by an arbitrary computational function that depends on the ray values. For example, the computational function need not correspond to a weighting by an aperture function, and may include discontinuous program branches that depend on the values of test functions computed from the ray values.
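A sketch of the weighted sum just described (Python; the aperture_fn argument and the normalization of (u, v) to [-0.5, 0.5] are hypothetical conventions chosen for illustration):

```python
import numpy as np

def render_pixel(lf, s, t, aperture_fn):
    """One output pixel as an aperture-function-weighted sum over direction.

    aperture_fn is an arbitrary scalar function on the virtual lens plane
    (cf. FIGS. 31A-31D); it may be negative or piecewise, so normalizing by
    the weight sum is only meaningful when that sum is nonzero.
    """
    n_u, n_v = lf.shape[:2]
    total, weight = 0.0, 0.0
    for u in range(n_u):
        for v in range(n_v):
            w = aperture_fn(u / (n_u - 1) - 0.5, v / (n_v - 1) - 0.5)
            total += w * lf[u, v, s, t]
            weight += w
    return total / weight if weight != 0.0 else total

# Example: a conventional stopped-down circular aperture.
circular = lambda u, v: 1.0 if u * u + v * v <= 0.25 ** 2 else 0.0
```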
In other example embodiments, because these techniques can be implemented in combination with the other embodiments described herein, the method of computing each output pixel can be chosen independently. For example, in one example embodiment, parameters including the orientation of the virtual lens plane and the size of the aperture vary continuously from output pixel to output pixel. In another example, shown in FIG. 32, the aperture function used to integrate the rays for each output pixel varies pixel by pixel. In the output image 3200, pixel 3201 uses aperture function 3210 and pixel 3251 uses aperture function 3250.
In another example embodiment, the aperture function varies pixel by pixel. In one embodiment, the function is chosen to mask out rays from undesired parts of a particular scene, such as an undesired target in the foreground.
In another example embodiment, the aperture parameters are selected interactively by a user, and the light data is processed according to the selections. FIG. 33 is a flow diagram illustrating one such example embodiment. In a first block 3310, the process receives data from the ray sensor. At block 3320, the user selects a region of the output image; at block 3330, the user selects an image-formation method; and at block 3340, the user alters the parameters of the chosen method, visually checking the computed image of the scene (e.g., on a computer monitor) at block 3350. Block 3360 checks whether the user has finished editing the image portion, returning to block 3330 if not. Block 3370 checks whether the user has finished selecting image portions to edit, returning to block 3320 if not. If editing is complete, block 3380 saves the final edited image.
In another example embodiment, an image with an extended depth of field is computed by focusing on more than one target at a time. In one implementation, the depth of field of the output image is extended by imaging in the manner of a conventional camera whose main lens aperture is stopped down (reduced in size). For each output pixel, the evaluation uses only those rays that would converge on the output pixel through an aperture (on the virtual lens plane) smaller than the aperture used in sensing the light.
In one implementation involving the example system 100 shown in FIG. 1, the depth of field is extended by extracting the photosensor value under each microlens image, with each extracted photosensor located at the same relative position within its microlens image. For FIG. 1, this extension of the depth of field produces an image in which not only the object 105 is in focus (due to the correspondence between the distances "d" and "s") but objects at other depths, such as the objects 106 and 107, which might otherwise exhibit defocus blur, are also in focus. This approach to extending the depth of field, optionally combined with downsampling of the resulting image, is computationally efficient. The approach is selectively implemented in noise-tolerant applications and in applications in which the produced image is used, for example, for preview purposes (e.g., for display on the preview screen 170 of FIG. 1). FIG. 4, discussed below, further involves an approach to generating preview images.
FIGS. 14A and 14B illustrate an approach to computing images with different depths of field, in connection with one or more example embodiments. FIG. 14A shows an image, and a close-up, computed by refocusing; note that the face in the close-up is blurred due to the relatively shallow depth of field. The middle row of FIG. 14B shows the final image computed using the extended depth-of-field approach described in the preceding paragraph.
FIG. 8 is a flow diagram of another computational approach to extending the depth of field in an image, according to another example embodiment of the present invention. At block 810, a set of images refocused at all focal depths of a particular scene is computed. At block 820, for each pixel, the image in the set having the highest local contrast at that pixel is determined. At block 830, the pixels with the highest local contrast are assembled into a final virtual image. With this approach, a relatively large number of pixels is used for each microlens in the microlens array (e.g., relative to selecting a single pixel (photosensor)), achieving a desirable signal-to-noise ratio (SNR). Referring to FIG. 14C, the example image shown was produced using an approach similar to that described in connection with FIG. 8, and exhibits relatively low image noise.
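A compact sketch of the per-pixel selection in FIG. 8 (Python/NumPy/SciPy; using local variance as the "local contrast" measure is an assumption, since the text does not fix a particular measure):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def extended_dof(focal_stack, window=5):
    """Merge a stack of refocused images (n_depths, H, W) by local contrast."""
    stack = np.asarray(focal_stack, dtype=float)
    mean = uniform_filter(stack, size=(1, window, window))
    mean_sq = uniform_filter(stack * stack, size=(1, window, window))
    contrast = mean_sq - mean * mean            # local variance per depth
    best = np.argmax(contrast, axis=0)          # sharpest depth per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```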
In an alternative embodiment, the minimal set of refocused images to be computed is determined as follows, in terms of the distance between the virtual film plane of each refocused image and the principal plane of the main lens. The minimum distance is set at the focal depth of the main lens, and the maximum distance is set at the conjugate depth of the closest target in the scene. The separation between adjacent virtual film planes is no more than Δx_m·(f/A), where Δx_m is the width of the microlenses, f is the separation between the main lens and the microlens array, and A is the width of the lens aperture.
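For illustration only, using the 125-micron microlens width quoted earlier and assuming a hypothetical f/4 main lens (f/A = 4), the virtual film planes would be spaced no more than:
$$\Delta x_m \cdot \frac{f}{A} = 125\,\mu\mathrm{m} \times 4 = 500\,\mu\mathrm{m}$$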
In another example embodiment, multiple refocused images are combined to produce an extended depth-of-field image, retaining at each final pixel the pixel that is best focused in any of the set of refocused images. In another embodiment, the pixels to be retained are selected so as to enhance local contrast and coherence with neighboring pixels. For general information regarding imaging, and for specific information regarding image-formation approaches involving the enhancement of local contrast, reference may be made to A. Agarwala, M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin, and M. Cohen, "Interactive Digital Photomontage," ACM Transactions on Graphics, Vol. 23, No. 3, pp. 292-300 (2004), which is fully incorporated herein by reference.
In another example embodiment of the present invention, an extended depth-of-field image is computed as follows. For each output image pixel, refocusing computations are performed so as to focus at different depths at that pixel. At each depth, a measure of the homogeneity of the converging rays is computed. The depth producing the (relative) maximum homogeneity is selected, and the pixel value at that depth is retained for the pixel. With this approach, when an image pixel is in focus, all of its rays originate from the same point in the scene and are therefore likely to share a similar color and intensity.
Although homogeneity measures may be defined in different ways, for many applications the following homogeneity measure is used: for each color component of each ray, the variance of the color intensity is computed relative to the corresponding color component of the central ray (the ray arriving at the pixel at the angle closest to the optical axis of the main lens). All of these variances are summed, and the homogeneity is taken as the reciprocal of this sum.
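A direct transcription of that measure (Python/NumPy; the epsilon guarding division by zero is an added assumption):

```python
import numpy as np

def homogeneity(rays, central):
    """Homogeneity of the rays converging on one refocused pixel.

    rays: (N, 3) per-ray RGB values; central: (3,) RGB of the ray arriving
    closest to the main-lens optical axis. Per-channel variance is taken
    about the central ray's value, summed over channels, and inverted.
    """
    deviations = np.asarray(rays, float) - np.asarray(central, float)
    variances = np.mean(deviations ** 2, axis=0)   # one variance per channel
    return 1.0 / (variances.sum() + 1e-12)
```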
FIG. 34 is a flow diagram for extending the depth of field in an image, according to another example embodiment of the present invention. At block 3410, a pixel in the virtual image to be computed is selected. At block 3420, the pixel is refocused at multiple focal depths, and the homogeneity of the rays combined in focusing at each depth is computed. At block 3430, the refocused pixel value associated with the maximum homogeneity of the combined rays is retained as the final output image pixel value. The process continues at block 3440 until all pixels have been processed.
In another example embodiment, the above process is adapted so that the selection of the final pixel value takes into account neighboring pixel values and the combined homogeneity computed for the rays associated with those pixel values.
In another example embodiment of the present invention, the depth of field is extended by focusing each pixel at the depth of the closest object in its direction. FIG. 9 is a flow diagram for extending the depth of field in an image according to one such example embodiment. At block 910, a pixel in the final virtual image to be computed is selected. At block 920, the depth of the closest object is estimated for the ray (or bundle of rays) passing from the selected pixel through the center of the lens into the scene.
At block 930, the value of the selected pixel is computed in an image refocused at the estimated depth. If additional pixels are desirably processed at block 940, another pixel is selected at block 910 and the process continues at block 920 with the newly selected pixel. When no additional pixels remain to be processed at block 940, the computed values of the selected pixels are used to construct the final virtual image.
In some embodiments involving extension of the depth of field, artifacts such as those commonly referred to as "blooming" or "halos" around the edges of objects closer to the lens are ameliorated or eliminated by disregarding rays originating from depths closer than the depth of the desired object. As an example, FIG. 23 illustrates the set of rays passing through a desired focal point 2301 in the world on a target of interest 2310. A portion of these rays is blocked from the main lens by an object 2320; this portion corresponds to the rays 2340 detected at the ray sensor 2350, and these rays are not considered in computing the image value of point 2301. In some embodiments, the resulting pixel value is normalized by dividing by the fraction of the rays that were retained. These embodiments may be used individually or in combination with one another, and in connection with any of the other embodiments involving extension of the depth of field.
As discussed above, the light data is processed to focus and/or correct images, in accordance with various example embodiments. Various approaches involving post-capture correction are described below. In some of these embodiments, aberration is corrected by tracing rays through the actual optics (e.g., a lens or set of lenses) used in capturing the light, and mapping the traced rays to the particular photosensors that captured them. Using the known defects exhibited by the optics and the known locations of the photosensors detecting the light, the light data is re-sorted.
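The re-sorting idea can be sketched in a few lines (Python; all three callables are hypothetical placeholders for the machinery described in the surrounding text):

```python
def corrected_pixel(ideal_rays, trace_through_real_lens, lightfield_value):
    """Aberration correction by re-sorting measured rays (a sketch).

    ideal_rays: the bundle of world rays that perfect optics would bring to
    one virtual film pixel. trace_through_real_lens maps a world ray to the
    (u, v, s, t) sample that the real, aberrated lens actually delivered it
    to. lightfield_value looks that sample up in the measured light field.
    """
    bundle = list(ideal_rays)
    total = sum(lightfield_value(*trace_through_real_lens(r)) for r in bundle)
    return total / len(bundle)
```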
In one correction-type embodiment, for each pixel on the film of the synthetic photograph, the set of world rays that would contribute to that pixel through idealized optics is computed. In one implementation, these rays are computed by tracing rays from the virtual film position back through the ideal optics into the world. Figure 15 illustrates, in connection with such an example embodiment, a method of tracing rays from a 3D point 1501 on the virtual film plane 1510 through an ideal thin main lens 1520 into a bundle of world rays 1530. In some implementations, the desired set of rays 1525 corresponds to the directions through the actual lens that are to be weighted and summed, but may be any set of rays that produces the desired image value.
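A sketch of the backward trace of Figure 15 under idealized assumptions: the virtual film plane sits at z < 0, the thin lens occupies the plane z = 0, and the film point is not at the focal plane. The coordinate conventions and the function name are assumptions, not the patent's notation:

    import numpy as np

    def trace_through_thin_lens(film_point, lens_point, f):
        """Trace one ray from a 3D film point through an ideal thin lens of
        focal length f, crossing the lens plane at lens_point; returns the
        (origin, direction) of the world-side ray. An ideal thin lens sends
        every ray from the film point through its conjugate (image) point."""
        film_point = np.asarray(film_point, dtype=float)
        lens_point = np.asarray(lens_point, dtype=float)
        z_film = -film_point[2]                  # film distance behind lens
        z_img = 1.0 / (1.0 / f - 1.0 / z_film)   # thin-lens equation
        # The ray through the lens center is undeviated, so the conjugate
        # point lies along it at depth z_img on the world side.
        conjugate = -(z_img / z_film) * film_point
        direction = conjugate - lens_point
        direction /= np.linalg.norm(direction)
        return lens_point, direction

Repeating the trace for many sampled lens positions yields the bundle 1530; weighting and summing the values along these rays gives the ideal image value, as described above.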
Figure 16 illustrates, in connection with another example embodiment, a method of obtaining the values of the light traveling along the ideal rays for a particular application. These values are computed by tracing the desired ideal world rays 1630 through the actual main lens 1650 - here a single element with spherical surfaces - onto the photosensor array 1640 that physically directed and measured (detected) the actual world rays. In this embodiment, rays that would ideally converge at a single 3D point (1601) do not converge, reflecting the defect, known as spherical aberration, of lenses with spherical surfaces. The photosensor array 1640 provides the individual values of the aberrated rays (such as 1651) used to correct the spherical aberration.
Figures 17A-17C illustrate example results of a computer simulation of the through-the-lens correction method. The image in Figure 17A is an ideal 512 x 512 photograph (as seen through perfect optics). The image in Figure 17B was produced using an actual f/2 biconvex spherical lens; it exhibits loss of contrast and blurring. The image in Figure 17C is the photograph computed with the image-correction method described above, using an optics-and-sensor arrangement providing 10 x 10 directional (u, v) resolution at each of the 512 x 512 microlenses.
In another example embodiment of the present invention, chromatic aberration of the main lens used to capture the image is corrected. Chromatic aberration is caused by dispersion - the wavelength-dependent variation in how the optics physically redirect light - as rays pass through the optical elements. The incident rays are traced through the actual optics, taking into account the wavelength-dependent refractive indices present in the real optics. In some applications, each color component of the system is traced according to its dominant wavelength.
In another example embodiment, shown in Figure 18A, the red, green and blue components that coexist in a color imaging system are traced separately. The desired green world rays are traced computationally back into the imaging system to produce green rays 1830, determining where, and in what direction, they intersect the color photosensor array 1810. Similarly, Figure 18B illustrates computationally tracing the desired blue world rays 1820, which are refracted over a larger extent than the green rays. Figure 18C illustrates computationally tracing the desired red world rays 1830, which are refracted over a smaller extent than the green rays. The value of each ray is computed from the photosensor values 1810, for example according to the methods described in connection with other example embodiments herein. The light field values of the rays are integrated to compute the corrected image value for each particular film pixel. For some applications, chromatic aberration is further reduced by focusing each color channel on the plane of best focus for its wavelength.
The desired rays generally do not converge exactly on one of the discrete ray samples captured by the photosensors. In some embodiments, the values for these rays are computed as a function of the discrete sample values. In some embodiments, this function corresponds to a weighted sum of the discrete sample values in the desired light field. In some embodiments, the weighted sum corresponds to a 4D convolution of the discrete samples with a predetermined convolution kernel function. In other embodiments, the weighting corresponds to quadrilinear interpolation over the 16 nearest neighbors. In still other embodiments, the weighting corresponds to cubic or bicubic interpolation over the 16 nearest neighbors.
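A sketch of the quadrilinear case: interpolating the discrete 4D light field at a fractional ray coordinate from its 2^4 = 16 nearest samples. The array layout L[s, t, u, v] and the clamping at the array edges are assumptions:

    import numpy as np

    def quadrilinear_sample(L, s, t, u, v):
        """Quadrilinear interpolation of the 4D light field L at the
        fractional coordinates (s, t, u, v) from its 16 nearest samples."""
        coords = np.array([s, t, u, v], dtype=float)
        lo = np.floor(coords).astype(int)
        frac = coords - lo
        value = 0.0
        for corner in range(16):                  # the 16 nearest neighbors
            offs = np.array([(corner >> d) & 1 for d in range(4)])
            w = 1.0
            for d in range(4):                    # per-axis linear weights
                w *= frac[d] if offs[d] else 1.0 - frac[d]
            idx = tuple(np.clip(lo + offs, 0, np.array(L.shape) - 1))
            value += w * L[idx]
        return value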
It should be noted that the example correction processes have been described in terms of ray tracing for conceptual simplicity; the corrections can also be implemented in various other ways. In one embodiment, for each desired output pixel, the set of contributing photosensor values and their relative weights are computed in advance. As described above, these weights are a property of many factors, including the optics, the sensor, the desired set of rays to be weighted and summed for each output pixel, and the desired light field reconstruction filter. The weights are optionally precomputed using ray tracing and stored. The corrected image is formed by weighting and accumulating the appropriate sensed light field values for each output pixel.
Figures 29 and 30 illustrate other example embodiments used in connection with the correction methods described above. Figure 29 is a flow diagram for precomputing a database of the weights associating the photosensors with each output pixel value. In the first two blocks 2910 and 2920, for the desired image formation process, a data set (for example in a database) is received specifying, for each output image pixel, the set of ideal world rays required to produce that output pixel value, and a specification of the actual main lens optics that physically direct light to the photosensors is received. At block 2925, an image pixel is selected. For the output value of this pixel, the associated set of world rays is traced computationally through a virtual representation of the main lens optics onto the photosensors at block 2930. This yields the set of weights to be applied to the photosensor values in computing the output pixel value. These values are stored in the output database at block 2940. Block 2950 checks whether all pixels have been processed; if not, the process returns to block 2925. If all pixels have been processed, the final block 2960 saves the completed database.
Figure 30 is a flow chart of a process that computes an output image using a weight database such as the one computed by the process of Figure 29. In blocks 3010 and 3020, the process receives the database and the set of photosensor values captured with the main lens optics that was used in computing the database. At block 3025, a pixel in the output image is selected so that its final image value can be computed. For the selected pixel, block 3030 uses the database to look up the set of contributing photosensors and their weights. At block 3040, the sensor values provided at 3020 are weighted and summed to form the image pixel value. At block 3050, a check determines whether all image pixels have been processed. If not, the process returns to block 3025; if so, the output image is saved at block 3060.
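In practice, the lookup of block 3030 and the weighted sum of block 3040 amount to one sparse matrix-vector product. A compact sketch, assuming the precomputed database is a list of (pixel index, sensor index, weight) triples as a Figure 29-style ray trace would produce:

    import numpy as np
    from scipy.sparse import csr_matrix

    def apply_weight_database(weights, sensor_values, out_shape):
        """Form the corrected image as a weighted sum of photosensor values
        (blocks 3030-3040, for all output pixels at once)."""
        rows, cols, vals = zip(*weights)
        n_pixels = out_shape[0] * out_shape[1]
        W = csr_matrix((vals, (rows, cols)),
                       shape=(n_pixels, sensor_values.size))
        return (W @ sensor_values).reshape(out_shape)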
In various example embodiments, the light ray data are processed in the frequency domain, using refocusing methods that compute in the Fourier domain. Figure 20 is a flow chart illustrating one such method in connection with another example embodiment. The input to the algorithm is a discrete 4D light field 2010, denoted L(s, t, u, v), representing the ray that starts at (u, v) on the main lens and terminates at (s, t) on the microlens plane (for example, starting at the main lens 110 of Figure 1 and terminating on the plane of the microlens array 120). The first step is to compute the discrete 4D Fourier transform 2020 of the light field. Its value M(k_s, k_t, k_u, k_v) at (k_s, k_t, k_u, k_v) is defined by the following equation:
M(k_s, k_t, k_u, k_v) = \iiiint L(s, t, u, v)\, \exp\!\left(-2\pi\sqrt{-1}\,(s k_s + t k_t + u k_u + v k_v)\right) ds\, dt\, du\, dv \qquad (1)
where exp is the exponential function, exp(x) = e^x. In some embodiments, the discrete light field is sampled on a rectilinear grid in the 4D space, and the Fourier transform is computed with the Fast Fourier Transform (FFT) algorithm.
Next, once for each desired depth of the refocused image, an appropriate 2D slice 2030 of the 4D Fourier transform is extracted and the 2D inverse Fourier transform of the extracted slice is computed; the resulting images are photographs 2040 focused at different depths. For a function G(k_x, k_y), the 2D inverse Fourier transform g(x, y) is defined by the following equation:
g(x, y) = \iint G(k_x, k_y)\, \exp\!\left(2\pi\sqrt{-1}\,(x k_x + y k_y)\right) dk_x\, dk_y \qquad (2)
The values extracted on the 2D slice are determined by the desired focal depth. Considering the conjugate plane (on the image side of the lens) of the desired world focal plane, when the separation between that conjugate plane and the main lens is D, and the separation between the microlens plane and the main lens is F, the value of the extracted 2D slice at coordinates (k_x, k_y) is given by
G(k_x, k_y) = \frac{1}{F^2}\, M\!\left(k_x(1 - D/F),\; k_y(1 - D/F),\; k_x D/F,\; k_y D/F\right) \qquad (3)
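A minimal numpy sketch of equations (1)-(3), using nearest-neighbor slice extraction in place of a proper 4D resampling filter; the axis ordering, the frequency-index mapping across axes of different lengths, and the overall normalization are assumptions:

    import numpy as np

    def refocus_fourier_slice(L, D_over_F):
        """Refocus the discrete light field L[s, t, u, v]: 4D FFT (eq. 1),
        extract the 2D slice of eq. (3), invert it with a 2D inverse FFT
        (eq. 2). Nearest-neighbor rounding stands in for the resampling
        filter; the 1/F^2 factor is folded into one overall scale."""
        M = np.fft.fftn(L)
        ns, nt, nu, nv = L.shape
        kx = np.fft.fftfreq(ns)[:, None]          # cycles per sample
        ky = np.fft.fftfreq(nt)[None, :]
        a = 1.0 - D_over_F
        i_s = np.round(kx * a * ns).astype(int) % ns
        i_t = np.round(ky * a * nt).astype(int) % nt
        i_u = np.round(kx * D_over_F * nu).astype(int) % nu
        i_v = np.round(ky * D_over_F * nv).astype(int) % nv
        G = M[i_s, i_t, i_u, i_v]                 # the 2D slice of eq. (3)
        return np.real(np.fft.ifft2(G))

A production implementation would replace the rounding with the 4D resampling filter, together with the rolloff and aliasing corrections described next.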
Artifacts caused by discretization, resampling and the Fourier transform are selectively reduced using various methods. In general signal-processing terms, when a signal is sampled it is replicated periodically in the dual domain. When the sampled signal is reconstructed by convolution, it is multiplied in the dual domain by the Fourier transform of the convolution filter. Ideally, the central replica is isolated and all other replicas are eliminated. The ideal filter for this purpose is the 4D sinc function, sinc(s) sinc(t) sinc(u) sinc(v), where sinc(x) = sin(πx)/(πx); this function, however, has infinite extent.
In the various methods, filters of finite extent are used for the frequency-domain processing; such filters exhibit defects that can be selectively mitigated. These defects are illustrated below for a concrete 1D filter, in connection with the discussion of how each defect is mitigated. Figure 21A shows the triangular filter that implements 1D linear interpolation (and serves as the basis of the 4D quadrilinear filter). Figure 21B shows the Fourier transform of the triangular filter: it is not of unit value within the band limit (see 2010), gradually decaying to smaller fractional values as the frequency increases. In addition, the filter is not truly band-limited, containing energy (2020) at frequencies beyond the desired band limit.
The first defect above leads to "rolloff" artifacts that darken the borders of computed photographs. The decay of the filter spectrum with increasing frequency means that the spatial light field values modulated by this spectrum also "roll off" to fractional values toward the borders.
The second defect relates to aliasing artifacts in computed photographs, caused by the energy above the band limit. The non-zero energy extending beyond the band limit means that the periodic replicas are not fully eliminated, leading to two kinds of aliasing. First, the replicas that appear parallel to the slicing plane appear as 2D replicas encroaching on the borders of the final photograph. Second, the replicas positioned perpendicular to this plane are projected and summed onto the image plane, creating ghosting and loss of contrast.
In one example embodiment, the rolloff defect described above is corrected by multiplying the input light field by the reciprocal of the Fourier transform of the filter, to cancel the effect introduced during resampling. In the illustrated embodiment, the multiplication is performed in a preprocessing step of the algorithm, before the 4D Fourier transform is taken. Although it corrects the rolloff error, this pre-multiplication can amplify the energy of the light field near its borders, maximizing the aliasing energy that folds back into the desired field of view.
Three methods of suppressing aliasing artifacts - oversampling, superior filtering and zero-padding - are used individually or in combination in the various example embodiments below. Oversampling the extracted 2D slice increases the replication period in the spatial domain. This means that less energy in the tails of the replicas falls within the borders of the final photograph. Increasing the sampling rate in one domain increases the field of view in the other; the aliasing energy from neighboring replicas falls into these peripheral regions, which are cropped away to isolate the original central image of interest.
Another method of reducing aliasing involves using a finite-extent filter that approximates the perfect spectrum of the ideal filter as closely as possible. In one example embodiment, the separable 4D Kaiser-Bessel function kb4(s, t, u, v) = kb(s) kb(t) kb(u) kb(v) is used as the filter, where
kb(x) = \frac{1}{W}\, I_0\!\left(P\sqrt{1 - (2x/W)^2}\right) \qquad (4)
In this equation, I_0 is the standard zeroth-order modified Bessel function of the first kind, W is the width of the desired filter, and P is a parameter that depends on W. In the illustrated embodiment, for W values of 5, 4.5, 4.0, 3.5, 3.0, 2.5, 2.0 and 1.5, the values of P are 7.4302, 6.6291, 5.7567, 4.9107, 4.2054, 3.3800, 2.3934 and 1.9980, respectively. For general information about aliasing, and for specific information about methods of reducing aliasing that may be used in connection with one or more example embodiments of the present invention, reference may be made to J. I. Jackson, C. H. Meyer, D. G. Nishimura and A. Macovski, "Selection of a convolution function for Fourier inversion using gridding," IEEE Transactions on Medical Imaging, vol. 10, no. 3, pp. 473-478, 1991, which is fully incorporated herein by reference. In one implementation, a width W of less than about 2.5 achieves the desired image quality.
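Equation (4) transcribes directly; scipy's i0 provides the zeroth-order modified Bessel function of the first kind, and the (W, P) table above supplies the parameters. A sketch, with the default width W = 2.0 chosen to respect the "less than about 2.5" note above:

    import numpy as np
    from scipy.special import i0

    # Tabulated parameters from the embodiment above: filter width W -> P.
    KB_PARAMS = {5.0: 7.4302, 4.5: 6.6291, 4.0: 5.7567, 3.5: 4.9107,
                 3.0: 4.2054, 2.5: 3.3800, 2.0: 2.3934, 1.5: 1.9980}

    def kb(x, W=2.0):
        """Kaiser-Bessel kernel of equation (4); zero outside |x| <= W/2."""
        P = KB_PARAMS[W]
        x = np.asarray(x, dtype=float)
        inside = np.abs(x) <= W / 2.0
        arg = np.where(inside, 1.0 - (2.0 * x / W) ** 2, 0.0)
        return np.where(inside, i0(P * np.sqrt(arg)) / W, 0.0)

    def kb4(s, t, u, v, W=2.0):
        """Separable 4D filter kb(s) kb(t) kb(u) kb(v)."""
        return kb(s, W) * kb(t, W) * kb(u, W) * kb(v, W)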
In another example embodiment, aliasing is reduced by padding the light field with a small border of zero values before pre-multiplying and taking its Fourier transform. This pushes the borders slightly outward and minimizes the amplification of aliasing energy caused by the rolloff-correction pre-multiplication.
Figure 22 is a flow chart illustrating a method of refocusing in the frequency domain using the various corrections described above, according to another example embodiment of the present invention. At block 2210, a discrete 4D light field is received. A preprocessing stage is executed once per input light field: block 2215 checks whether aliasing reduction is desired, and if so, block 2220 pads the light field with a small zero-valued border (for example, 5% of the width in each dimension). At block 2225, a check determines whether rolloff correction is desired, and if so, the light field is modulated by the reciprocal of the Fourier transform of the resampling filter at block 2230. In the final block of the preprocessing stage, block 2240 computes the 4D Fourier transform of the light field.
A refocusing stage is executed once for each desired focal depth. The process receives the desired focal depth of the refocused image at block 2250, for example as directed by a user. At block 2260, a check determines whether aliasing reduction is desired. If not, block 2270 extracts a 2D slice of the light field's Fourier transform using the desired 4D resampling filter, the trajectory of the slice corresponding to the desired focal depth; block 2275 then computes the 2D inverse Fourier transform of the extracted slice, and the process proceeds to block 2290. If aliasing reduction is desired at block 2260, the process proceeds to block 2280, where the 2D slice is extracted using the desired 4D resampling filter with oversampling (for example, 2x oversampling in each of the two dimensions). At block 2283, the 2D inverse Fourier transform of the slice is computed, and at block 2286 the resulting image is cropped to the original size without oversampling; after block 2286 the process proceeds to block 2290. At block 2290, a check determines whether refocusing is complete. If not, another focal depth is selected at block 2250 and the process proceeds as described above. If refocusing is complete, the process exits at block 2295.
Compared with the explicit summation over rays described above, this frequency-domain algorithm has lower asymptotic computational complexity for repeated refocusing. Suppose the input discrete light field has N samples in each of its four dimensions. Then, for refocusing at each new depth, the computational complexity of the explicit ray-summation algorithm is O(N^4). For refocusing at each new depth, the complexity of the frequency-domain algorithm is O(N^2 log N), dominated by the cost of the 2D inverse Fourier transform. However, the preprocessing step costs O(N^4 log N) for each new light field data set.
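To make the comparison concrete, a worked example under the stated assumptions (N samples per dimension, base-2 logarithms), for N = 256:

    \begin{aligned}
    \text{explicit ray summation, per depth:}\quad & N^4 = 256^4 \approx 4.3\times10^{9}\\
    \text{frequency domain, per depth:}\quad & N^2\log_2 N = 256^2\cdot 8 \approx 5.2\times10^{5}\\
    \text{preprocessing, once per light field:}\quad & N^4\log_2 N \approx 3.4\times10^{10}
    \end{aligned}

so once the one-time preprocessing has been paid, each additional refocused photograph is cheaper by roughly four orders of magnitude.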
In another example embodiment, the captured light is optically filtered. Though not limited to these applications, some examples of such filters are neutral-density filters, color filters and polarizing filters; any existing filter, or filters yet to be developed, can be used to filter the light as desired. In one implementation, rays are filtered in groups or individually, with each group or individual ray filtered differently, by filtering the optics in groups or individually. In another implementation, filtering is achieved using a spatially varying filter attached to the main lens. In one example application, a gradient filter such as a neutral-density gradient filter is used to filter the rays. In another implementation, a spatially varying filter is used in front of one or more of the photosensors, the microlens array or the photosensor array. Referring to Figure 1, as an example, one or more such filters are optionally placed in front of one or more of the main lens 110, the microlens array 120 and the photosensor array 130.
In another example embodiment of the present invention, a computing component such as a processor is programmed to selectively choose the rays that are combined in computing an output pixel, so as to achieve a desired net filtering for that pixel value. As an example, consider a neutral-density gradient filter placed optically at the main lens; each aperture image appearing under a microlens is weighted according to the filter gradient across the extent it spans. In one implementation, the output image is computed by choosing, under each microlens, the photosensor located at the point of the gradient matching the desired level of neutral-density filtering for the output image pixel. For example, to produce an image in which every pixel is filtered to the greatest extent, each pixel value is set to the sensor value that lies, under the corresponding microlens, at the extreme of the gradient corresponding to maximum filtering.
Figure 2 is a data-flow diagram illustrating a method of processing images in connection with other example embodiments of the present invention. An image sensor arrangement 210 captures image data, for example using a microlens/photosensor chip arrangement 212 as shown in Figure 1, in a manner similar to that described above for the microlens array 120 and photosensor array 130. The image sensor arrangement 210 includes integrated processing circuitry 214 that carries out certain processing to prepare the acquired image data for transfer.
The sensor data produced at the image sensor arrangement 210 is passed to a signal processor 220. The signal processor includes one or both of a low-resolution image processor 222 and a compression processor 224, together with a ray-direction processor 226; depending on the application, each of these processors is implemented separately or functionally using a common processor. In addition, each processor shown in Figure 2 is selectively programmed with one or more of the processing functions described in connection with other figures or other paragraphs herein. The signal processor 220 is optionally implemented on a common device or component with the image sensor arrangement 210, for example in common circuitry and/or on a common imaging device.
The low-resolution image processor 222 uses the sensor data received from the image sensor arrangement 210 to generate low-resolution image data, which is sent to a viewfinder display 230. An input device 235, such as a button on a camera or video camera, sends an image-capture request to the signal processor 220, for example requesting capture of a particular image shown in the viewfinder display 230 and/or, where so implemented, initiating video imaging.
In response to the image-capture request or other direction, the signal processor 220 uses the sensor data captured by the image sensor arrangement 210 to generate processed sensor data. In some applications, the compression processor 224 is implemented to generate compressed raw data that is transferred to a data storage arrangement 240 (e.g., memory). The raw data is then selectively processed at the signal processor 220 and/or at an external computer 260 or other processing device, for example with ray-direction processing as implemented with the ray-direction processor 226 described below.
In some applications, the ray-direction processor 226 is implemented to process data received at the signal processor 220, rearranging the sensor data to generate refocused and/or corrected image data. The ray-direction processor 226 uses one or both of the sensor data received from the image sensor arrangement 210 and the raw data sent to the data storage arrangement 240. In these applications, the ray-direction processor 226 uses the ray-mapping characteristics of the particular imaging arrangement (for example, a camera, video camera or mobile telephone) in which the image sensor arrangement 210 is implemented to determine the rearrangement of the rays sensed with the microlens/photosensor chip 212. The image data generated with the ray-direction processor 226 is sent to the data storage arrangement 240 and/or used in various applications, such as streaming image data to a remote location or sending image data over a communications link 250 to a remote location.
In some applications, the integrated processing circuitry 214 includes some or all of the processing functions of the signal processor 220, implemented for example with a CMOS-type processor or another processor of suitable function. For example, the low-resolution image processor 222 is optionally included in the integrated processing circuitry 214, with the low-resolution image data sent directly from the image sensor arrangement 210 to the viewfinder display 230. Similarly, the compression processor 224, or functionality similar to it, is optionally implemented with the integrated processing circuitry 214.
In some applications, the computation of final images can be carried out on the integrated processing circuitry 214 (for example, in certain digital still cameras that output only final images). In other applications, the image sensor arrangement 210 may simply send the raw light data, or a compressed version of these data, to an external computing device such as a desktop computer; the final images are then computed from these data on the external device.
Figure 3 is a flow chart of a method for processing image data according to another example embodiment of the present invention. At block 310, image data is captured at a camera or other imaging device using a main lens, or lens assembly, together with a microlens/photosensor array such as that shown in Figure 1. If a preview image is desired at block 320, a preview image is generated at block 330 using a viewfinder or other type of display. The preview image is displayed, for example on the viewfinder of a camera or video camera, using a subset of the captured image data.
At block 340, the raw data from the photosensor array is processed and compressed for use. At block 350, ray data is extracted from the processed and compressed data. This extraction involves, for example, detecting a beam or set of light rays incident on a particular photosensor in the photosensor array. At block 360, the ray-mapping data for the imaging arrangement with which the image data was captured is retrieved. At block 370, the ray-mapping data and the extracted ray data are used to synthesize a rearranged image. For example, the extraction, mapping and synthesis of blocks 350-370 are selectively implemented by determining the bundle of rays gathered for a particular pixel of the scene and integrating the light energy of that bundle to synthesize the value of the pixel. In some applications, the ray-mapping data is used to trace the rays of each particular pixel through the actual lens used to acquire the image data. For example, by determining an appropriate set of rays to accumulate so as to focus on a selected subject at a particular focal depth, the rays can be rearranged to arrive at a refocused image. Similarly, by determining a ray arrangement appropriate to correct for a condition such as lens aberration in the imaging device, the rays can be rearranged to produce an image that is comparatively unaffected by the aberration or other condition.
Various methods are selectively used to generate preview images for camera-type and other applications. Figure 4 is a flow diagram for generating such a preview image according to another example embodiment of the present invention. The method shown in Figure 4 and described below can be implemented, for example, in connection with the generation of preview images at block 330 of Figure 3.
At block 410, a preview instruction for raw sensor data is received. At block 420, a center pixel is selected from each microlens image in the raw sensor image data. The selected center pixels are collected at block 430 to form a high-depth-of-field image. At block 440, the high-depth-of-field image is downsampled to fit the viewfinder display resolution. Referring to Figure 2, as an example, this downsampling is optionally carried out on one or both of the image sensor arrangement 210 and the signal processor 220. The resulting preview image data is sent to the viewfinder display at block 450, and at block 460 an image is displayed from the preview image data on the viewfinder.
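A sketch of blocks 420-440, assuming the raw sensor data is laid out as a regular grid of n x n-pixel microlens images and that the center-pixel image is at least as large as the viewfinder; the layout and the block-mean downsampling are assumptions:

    import numpy as np

    def preview_image(raw, n, view_h, view_w):
        """Build a quick high-depth-of-field preview: take the center pixel
        under each microlens (blocks 420-430), then block-average down to
        the viewfinder resolution (block 440)."""
        c = n // 2
        centers = raw[c::n, c::n]               # one pixel per microlens
        H, W = centers.shape
        fh, fw = H // view_h, W // view_w       # integer downsample factors
        trimmed = centers[:fh * view_h, :fw * view_w]
        return trimmed.reshape(view_h, fh, view_w, fw).mean(axis=(1, 3))

The center pixels yield a high-depth-of-field image because they correspond to rays passing near the center of the main-lens aperture, i.e., a small effective aperture.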
Figure 5 is a flow chart for processing and compressing image data according to another example embodiment of the present invention. The method shown in Figure 5 and described below can be implemented in connection with the processing and compression of image data at block 340 of Figure 3. When implemented with an arrangement such as that shown in Figure 2, the method of Figure 5 can be carried out at one or both of, for example, the image sensor arrangement 210 and the signal processor 220.
At block 510, raw image data is received from the sensor array. If coloring is desired at block 520, the color-filter array values are demosaicked at block 530 to produce color at the sensors. If adjustment and alignment are desired at block 540, the microlenses are adjusted and aligned with the photosensor array at block 550. If interpolation is desired at block 560, pixel values are interpolated at block 570 to an integer number of pixels associated with each microlens. At block 580, the processed raw image data is compressed and presented for synthesis processing (e.g., to form a refocused and/or corrected image).
Figure 6 is a flow chart for image synthesis according to another example embodiment of the present invention. The method shown in Figure 6 and described below can be implemented in connection with the image synthesis at block 370 of Figure 3, described further above.
At block 610, raw image data is received from the photosensor array. If refocusing is desired at block 620, the image data is refocused at block 630, for example using methods described herein to selectively rearrange the rays represented by the raw image data. If image correction is desired at block 640, the image data is corrected at block 650. In various applications where both refocusing and image correction are desired, the image correction of block 650 is carried out before, or concurrently with, the refocusing of block 630. At block 660, the resulting image is generated using the processed image data, including the refocused and corrected data where available.
Figure 7A is a flow chart for image refocusing using a lens arrangement, according to another example embodiment of the present invention. The method shown in Figure 7A and described below can be implemented, for example, in connection with the refocusing of image data at block 630 of Figure 6.
At block 710, a virtual focal plane is selected for refocusing a portion of the image. At block 720, a virtual image pixel on the virtual focal plane is selected. If correction is desired at block 730 (for example, for lens aberration), the value of the vignetted ray (or set of rays) passing between the selected pixel and each particular lens position is computed at block 740. In one application, this computation is facilitated by determining the conjugate ray that falls on the selected pixel and tracing the path of that ray through the lens arrangement.
At block 750, the ray (or ray-set) values accumulated over the lens positions for the particular focal plane are summed to determine a total value for the selected pixel. In some applications, the accumulation at block 750 is a weighted sum, with some rays (or ray sets) given greater weight than others. If additional pixels remain to be refocused, another pixel is selected at block 720 at block 760 and the process continues until no pixels remain to be refocused. After the pixels have been refocused, the pixel data is assembled at block 770 to produce the refocused virtual image at the virtual focal plane selected at block 710. Some or all of the refocusing method of blocks 720, 730, 740 and 750 is implemented with more specific functions, depending on the application.
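A sketch of the summation of blocks 720-750 for every pixel at once, in the common shift-and-add form of light-field refocusing; the parameter alpha, the integer pixel shifts, the uniform weighting, and the circular wrap at the borders are assumptions rather than the patent's notation:

    import numpy as np

    def refocus_sum(L, alpha):
        """Refocus by accumulating, for each virtual pixel, the rays from
        every lens position (u, v), shifted according to the chosen virtual
        focal plane. L[s, t, u, v] is the discrete light field; alpha = 1
        leaves the focus unchanged."""
        ns, nt, nu, nv = L.shape
        out = np.zeros((ns, nt))
        for u in range(nu):
            for v in range(nv):
                # Shift proportional to the aperture position, which sums
                # each ray onto the pixel it crosses on the virtual plane.
                ds = int(round((1 - 1 / alpha) * (u - nu / 2)))
                dt = int(round((1 - 1 / alpha) * (v - nv / 2)))
                out += np.roll(L[:, :, u, v], (ds, dt), axis=(0, 1))
        return out / (nu * nv)                # uniform weights; block 750
                                              # allows weighted sums instead

Subpixel-accurate shifts would use the quadrilinear or Kaiser-Bessel interpolation described earlier in place of the integer roll.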
The sensor data processing circuitry used to implement one or more of the example embodiments described herein may, depending on the implementation, include one or more microprocessors, application-specific integrated circuits (ASICs), digital signal processors (DSPs) and/or programmable gate arrays (for example, field-programmable gate arrays (FPGAs)). The sensor data processing circuitry may thus be any type or form of circuitry now known or later developed. For example, the sensor data processing circuitry may include a single component or a plurality of components (microprocessors, ASICs and DSPs), whether active and/or passive, that are coupled, adapted and/or programmed to provide and/or perform the desired operations, functions and applications.
In various applications, the sensor data processing circuitry implements or executes one or more applications, routines, programs and/or data structures that carry out the particular methods, tasks or operations described and/or illustrated herein. The functionality of the applications, routines or programs is selectively combined or distributed in some applications. In some applications, the applications, routines or programs are implemented by the sensor (or other) data processing circuitry using one or more of a variety of programming languages, now known or later developed. Such programming languages include, for example, FORTRAN, C, C++, Java and BASIC, whether compiled or uncompiled, optionally employed in connection with one or more aspects of the present invention.
The various embodiments described above are provided by way of illustration only and should not be construed to limit the invention. Based on the above description and illustrations, it will be apparent to those skilled in the art that various changes and modifications may be made to the present invention without strictly following the exemplary embodiments and applications illustrated and described herein. For example, such changes may include implementing the various optical imaging applications and arrangements in different types of applications, increasing or decreasing the number of rays collected per pixel (or other selected image area), or implementing algorithms and/or equations other than the examples described for collecting or processing the image data. Other changes may involve using coordinate representations other than, or in addition to, Cartesian coordinates, such as polar coordinates. Such changes and modifications do not depart from the true spirit and scope of the present invention.

Claims (12)

1. A digital imaging system for synthesizing an image from a set of captured light rays, the system comprising:
a main lens;
a photosensor array for capturing the set of light rays;
a microlens array between the main lens and the photosensor array, the set of light rays being physically directed from the main lens through the microlens array to the photosensor array; and
a data processor that computes a synthesized refocused image by virtually redirecting the set of light rays captured by the photosensor array.
2. The system of claim 1, wherein the data processor computes the image by selectively combining selected rays so as to virtually redirect the set of light rays.
3. The system of claim 1, wherein the data processor computes the image by selectively accumulating selected rays so as to virtually redirect the set of light rays.
4. The system of claim 1, wherein the data processor computes the image by selectively weighting and accumulating rays so as to virtually redirect the set of light rays.
5. The system of claim 1, wherein the data processor computes the image by integrating rays according to their spatial distribution on the photosensor array and according to the physical direction in which the set of light rays arrives at the photosensor array from the main lens through the microlens array, so as to virtually redirect the set of light rays.
6. The digital imaging system of claim 1, wherein the data processor virtually redirects the set of light rays to refocus a portion of the image onto a focal plane different from the plane in which the microlens array is placed.
7. The digital imaging system of claim 1, wherein the data processor virtually redirects the rays to correct for lens aberration.
8. The digital imaging system of claim 1, wherein the data processor virtually redirects the rays to extend the depth of field of the synthesized image.
9. The digital imaging system of claim 1, wherein, for each microlens in the microlens array, the photosensor array includes a plurality of photosensors.
10. The digital imaging system of claim 9, wherein the main lens focuses a two-dimensional image of a scene onto the microlens array, and wherein each microlens in the microlens array is adapted to diverge the light focused onto it by the main lens and to direct the diverging rays to the plurality of photosensors for that microlens.
11. The digital imaging system of claim 1, wherein the data processor uses data from the photosensor array to establish focused sub-images of different portions of the scene shown by the captured rays, by resolving a different depth of focus for each of the sub-images.
12. The digital imaging system of claim 11, wherein the data processor synthesizes a final image by combining the focused sub-images.
CN2008101691410A 2004-10-01 2005-09-30 Imaging arrangements and methods therefor Expired - Fee Related CN101426085B (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US61517904P 2004-10-01 2004-10-01
US60/615,179 2004-10-01
US64749205P 2005-01-27 2005-01-27
US60/647,492 2005-01-27

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CNB200580039822XA Division CN100556076C (en) 2004-10-01 2005-09-30 Imaging device and method thereof

Publications (2)

Publication Number Publication Date
CN101426085A true CN101426085A (en) 2009-05-06
CN101426085B CN101426085B (en) 2012-10-03

Family

ID=38965761

Family Applications (2)

Application Number Title Priority Date Filing Date
CNB200580039822XA Expired - Fee Related CN100556076C (en) 2004-10-01 2005-09-30 Imaging device and method thereof
CN2008101691410A Expired - Fee Related CN101426085B (en) 2004-10-01 2005-09-30 Imaging arrangements and methods therefor

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CNB200580039822XA Expired - Fee Related CN100556076C (en) 2004-10-01 2005-09-30 Imaging device and method thereof

Country Status (1)

Country Link
CN (2) CN100556076C (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102802520A (en) * 2009-06-17 2012-11-28 3形状股份有限公司 Focus Scanning Apparatus
CN103945115A (en) * 2013-01-22 2014-07-23 三星电子株式会社 Photographing device and photographing method for taking picture by using a plurality of microlenses
CN104041004A (en) * 2012-01-13 2014-09-10 佳能株式会社 Image pickup apparatus, control method therefor and image pickup system
CN104469194A (en) * 2013-09-12 2015-03-25 佳能株式会社 Image processing apparatus and control method thereof
CN105378556A (en) * 2013-07-09 2016-03-02 三星电子株式会社 Image generating apparatus and method and non-transitory recordable medium
CN106303208A (en) * 2015-08-31 2017-01-04 北京智谷睿拓技术服务有限公司 Image acquisition control method and device
CN106303210A (en) * 2015-08-31 2017-01-04 北京智谷睿拓技术服务有限公司 Image acquisition control method and device
CN106303209A (en) * 2015-08-31 2017-01-04 北京智谷睿拓技术服务有限公司 Image acquisition control method and device
CN106604017A (en) * 2010-06-16 2017-04-26 株式会社尼康 Image display device
CN109708193A (en) * 2018-06-28 2019-05-03 永康市胜时电机有限公司 Heating device inlet valve aperture control platform
CN110192127A (en) * 2016-12-05 2019-08-30 弗托斯传感与算法公司 Microlens array
US11701208B2 (en) 2014-02-07 2023-07-18 3Shape A/S Detecting tooth shade

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4941332B2 (en) * 2008-01-28 2012-05-30 ソニー株式会社 Imaging device
JP5472584B2 (en) * 2008-11-21 2014-04-16 ソニー株式会社 Imaging device
JP4706882B2 (en) * 2009-02-05 2011-06-22 ソニー株式会社 Imaging device
JP5463718B2 (en) * 2009-04-16 2014-04-09 ソニー株式会社 Imaging device
ATE551841T1 (en) * 2009-04-22 2012-04-15 Raytrix Gmbh DIGITAL IMAGING METHOD FOR SYNTHESIZING AN IMAGE USING DATA RECORDED BY A PLENOPTIC CAMERA
JP5515396B2 (en) * 2009-05-08 2014-06-11 ソニー株式会社 Imaging device
DE102009027372A1 (en) 2009-07-01 2011-01-05 Robert Bosch Gmbh Camera for a vehicle
JP2012205111A (en) * 2011-03-25 2012-10-22 Casio Comput Co Ltd Imaging apparatus
TW201322048A (en) * 2011-11-25 2013-06-01 Cheng-Xuan Wang Field depth change detection system, receiving device, field depth change detecting and linking system
JP5913934B2 (en) 2011-11-30 2016-05-11 キヤノン株式会社 Image processing apparatus, image processing method and program, and imaging apparatus having image processing apparatus
CN103297677B (en) * 2012-02-24 2016-07-06 卡西欧计算机株式会社 Generate video generation device and the image generating method of reconstruct image
JP2013198016A (en) * 2012-03-21 2013-09-30 Casio Comput Co Ltd Imaging apparatus
JP5459337B2 (en) * 2012-03-21 2014-04-02 カシオ計算機株式会社 Imaging apparatus, image processing method, and program
JP5914192B2 (en) * 2012-06-11 2016-05-11 キヤノン株式会社 Imaging apparatus and control method thereof
US9398264B2 (en) 2012-10-19 2016-07-19 Qualcomm Incorporated Multi-camera system using folded optics
CN103417181B (en) * 2013-08-01 2015-12-09 北京航空航天大学 A kind of endoscopic method for light field video camera
US10178373B2 (en) 2013-08-16 2019-01-08 Qualcomm Incorporated Stereo yaw correction using autofocus feedback
JP2015185998A (en) 2014-03-24 2015-10-22 株式会社東芝 Image processing device and imaging apparatus
US9613417B2 (en) * 2015-03-04 2017-04-04 Ricoh Company, Ltd. Calibration of plenoptic imaging systems using fourier transform
WO2016177914A1 (en) * 2015-12-09 2016-11-10 Fotonation Limited Image acquisition system
EP3182697A1 (en) * 2015-12-15 2017-06-21 Thomson Licensing A method and apparatus for correcting vignetting effect caused on an image captured by lightfield cameras
GB201602836D0 (en) * 2016-02-18 2016-04-06 Colordyne Ltd Lighting device with directable beam
CN108868213B (en) * 2018-08-20 2020-05-15 浙江大丰文体设施维保有限公司 Stage disc immediate maintenance analysis mechanism
WO2020194025A1 (en) * 2019-03-22 2020-10-01 Universita' Degli Studi Di Bari Aldo Moro Process and apparatus for the capture of plenoptic images between arbitrary planes

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5672575A (en) * 1979-11-19 1981-06-16 Toshiba Corp Picture input unit
JP3429755B2 (en) * 1990-04-27 2003-07-22 株式会社日立製作所 Depth of field control device for imaging device
US5757423A (en) * 1993-10-22 1998-05-26 Canon Kabushiki Kaisha Image taking apparatus
JPH08107194A (en) * 1994-10-03 1996-04-23 Fuji Photo Optical Co Ltd Solid state image sensor
NO305728B1 (en) * 1997-11-14 1999-07-12 Reidar E Tangen Optoelectronic camera and method of image formatting in the same
US6320979B1 (en) * 1998-10-06 2001-11-20 Canon Kabushiki Kaisha Depth of field enhancement
CN2394240Y (en) * 1999-02-01 2000-08-30 王德胜 TV image magnifier

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11539937B2 (en) 2009-06-17 2022-12-27 3Shape A/S Intraoral scanning apparatus
US10326982B2 (en) 2009-06-17 2019-06-18 3Shape A/S Focus scanning apparatus
CN102802520A (en) * 2009-06-17 2012-11-28 3形状股份有限公司 Focus Scanning Apparatus
US8878905B2 (en) 2009-06-17 2014-11-04 3Shape A/S Focus scanning apparatus
US11368667B2 (en) 2009-06-17 2022-06-21 3Shape A/S Intraoral scanning apparatus
CN102802520B (en) * 2009-06-17 2015-04-01 3形状股份有限公司 Focus Scanning Apparatus
US11076146B1 (en) 2009-06-17 2021-07-27 3Shape A/S Focus scanning apparatus
US11671582B2 (en) 2009-06-17 2023-06-06 3Shape A/S Intraoral scanning apparatus
US10349041B2 (en) 2009-06-17 2019-07-09 3Shape A/S Focus scanning apparatus
US11051002B2 (en) 2009-06-17 2021-06-29 3Shape A/S Focus scanning apparatus
US11831815B2 (en) 2009-06-17 2023-11-28 3Shape A/S Intraoral scanning apparatus
US11622102B2 (en) 2009-06-17 2023-04-04 3Shape A/S Intraoral scanning apparatus
US10595010B2 (en) 2009-06-17 2020-03-17 3Shape A/S Focus scanning apparatus
US10097815B2 (en) 2009-06-17 2018-10-09 3Shape A/S Focus scanning apparatus
US10349042B1 (en) 2009-06-17 2019-07-09 3Shape A/S Focus scanning apparatus
CN106604017B (en) * 2010-06-16 2018-10-16 株式会社尼康 Image display device
CN106604017A (en) * 2010-06-16 2017-04-26 株式会社尼康 Image display device
CN104041004B (en) * 2012-01-13 2018-01-30 佳能株式会社 Picture pick-up device and its control method and camera system
CN104041004A (en) * 2012-01-13 2014-09-10 佳能株式会社 Image pickup apparatus, control method therefor and image pickup system
CN103945115A (en) * 2013-01-22 2014-07-23 三星电子株式会社 Photographing device and photographing method for taking picture by using a plurality of microlenses
CN105378556A (en) * 2013-07-09 2016-03-02 三星电子株式会社 Image generating apparatus and method and non-transitory recordable medium
CN104469194B (en) * 2013-09-12 2018-02-16 佳能株式会社 Image processing apparatus and its control method
CN104469194A (en) * 2013-09-12 2015-03-25 佳能株式会社 Image processing apparatus and control method thereof
US11701208B2 (en) 2014-02-07 2023-07-18 3Shape A/S Detecting tooth shade
US11707347B2 (en) 2014-02-07 2023-07-25 3Shape A/S Detecting tooth shade
US11723759B2 (en) 2014-02-07 2023-08-15 3Shape A/S Detecting tooth shade
CN106303210A (en) * 2015-08-31 2017-01-04 北京智谷睿拓技术服务有限公司 Image acquisition control method and device
CN106303210B (en) * 2015-08-31 2019-07-12 北京智谷睿拓技术服务有限公司 Image Acquisition control method and device
CN106303209B (en) * 2015-08-31 2019-06-21 北京智谷睿拓技术服务有限公司 Image Acquisition control method and device
CN106303208B (en) * 2015-08-31 2019-05-21 北京智谷睿拓技术服务有限公司 Image Acquisition control method and device
CN106303209A (en) * 2015-08-31 2017-01-04 北京智谷睿拓技术服务有限公司 Image acquisition control method and device
CN106303208A (en) * 2015-08-31 2017-01-04 北京智谷睿拓技术服务有限公司 Image acquisition control method and device
CN110192127B (en) * 2016-12-05 2021-07-09 弗托斯传感与算法公司 Microlens array
CN110192127A (en) * 2016-12-05 2019-08-30 弗托斯传感与算法公司 Microlens array
CN109708193A (en) * 2018-06-28 2019-05-03 永康市胜时电机有限公司 Heating device inlet valve aperture control platform

Also Published As

Publication number Publication date
CN101426085B (en) 2012-10-03
CN100556076C (en) 2009-10-28
CN101065955A (en) 2007-10-31

Similar Documents

Publication Publication Date Title
CN100556076C (en) Imaging device and method thereof
US8953064B1 (en) Imaging arrangements and methods therefor
US8493432B2 (en) Digital refocusing for wide-angle images using axial-cone cameras
US10021340B2 (en) Method and an apparatus for generating data representative of a light field
US20150029386A1 (en) Microlens array architecture for avoiding ghosting in projected images
CN100538264C (en) The optical imagery distance measuring equipment of single aperture multiple imaging
US20090128669A1 (en) Correction of optical aberrations
US10762612B2 (en) Method and an apparatus for generating data representative of a pixel beam
TW201707437A (en) Image processing device and image processing method
CN108780574A (en) Device and method for calibrating optical system for collecting
US10909704B2 (en) Apparatus and a method for generating data representing a pixel beam
US20190101765A1 (en) A method and an apparatus for generating data representative of a pixel beam
Neumann Computer vision in the space of light rays: plenoptic video geometry and polydioptric camera design
Hua 3D Lensless Imaging: Theory, Hardware, and Algorithms
Vaughan Computational Imaging Approach to Recovery of Target Coordinates Using Orbital Sensor Data
Georgiev et al. Introduction to the JEI Focal Track Presentations
Zraqou Automated system design for the efficient processing of solar satellite images. Developing novel techniques and software platform for the robust feature detection and the creation of 3D anaglyphs and super-resolution images for solar satellite images.

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent for invention or patent application
CB02 Change of applicant information

Address after: California, United States

Applicant after: The Board of Trustees of the Leland Stanford Junior University

Address before: California, United States

Applicant before: Univ Leland Stanford Junior

COR Change of bibliographic data

Free format text: CORRECT: APPLICANT; FROM: UNIV LELAND STANFORD JUNIOR TO: THE BOARD OF TRUSTEES OF THE LELAND STANFORD JUNIOR UNIVERSITY

C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121003

Termination date: 20190930