CA2711727A1 - Method and camera for the real-time acquisition of visual information from three-dimensional scenes - Google Patents
- Publication number: CA2711727A1
- Authority: CA (Canada)
- Legal status: Granted
Classifications
- G06T 15/205 — 3D image rendering; geometric effects; perspective computation; image-based rendering
- G06T 7/97 — Image analysis; determining parameters from multiple pictures
- G06T 7/557 — Depth or shape recovery from multiple images from light fields, e.g. from plenoptic cameras
- G06T 7/571 — Depth or shape recovery from multiple images from focus
Abstract
The invention relates to a method for calculating the focal stack associated with an object space from the plenoptic function thereof, using a sum transform along the length of constrained planes in discrete hypercubes, which allows the computing time to be considerably reduced. The invention also relates to a method for increasing the resolution of the focal stack obtained. In addition, the invention relates to two methods for the real-time recovery of the depths and moduli and phases of the complex amplitude of the wavefront respectively in each position of the surfaces of a three-dimensional scene and to a system adapted for carrying out the aforementioned methods.
Description
METHOD AND CAMERA FOR THE REAL-TIME ACQUISITION OF VISUAL
INFORMATION FROM THREE-DIMENSIONAL SCENES
Object of the Invention
The present invention relates to a method for calculating the focal stack associated with an object volume, to a method for improving the resolution of the images of the focal stack obtained, to a method for the real-time measurement of distances in three-dimensional scenes and to a method for the real-time tomographic measurement of the complex amplitude of the electromagnetic field associated with a wavefront.
The present invention allows knowing the distance and the complex amplitude of the electromagnetic field in the positions of the surfaces of the scene.
In addition, the invention relates to a camera for the real-time acquisition of the visual information of three-dimensional scenes in a wide range of volumes, characterized by the use of an objective lens and a lenslet array located in the image space of the objective lens, a sensor placed in the focal point of the lenslets (which collects the image formed by the latter) and parallel computation processing means adapted for calculating the focal stack associated with the object volume measured by the camera, and for calculating with regard to the latter the complex amplitude of the electromagnetic field (modulus and phase) and the three-dimensional position of the radiant surfaces at any point of the sensed object space.
This invention can be useful in any area or application in which knowledge of the wavefront is required (terrestrial astronomical observation, ophthalmology, holography, etc.), as well as in any in which metrology is required (real scenes, 3D television, polishing CCDs, automobile mechanics, etc.).
Field of the Art
Optics. Image processing.
Background of the Invention
The present invention relates both to the need to achieve a three-dimensional measurement of the complex amplitude of the wavefront associated with any optical problem in which image quality is essential (e.g. for diagnosis), and to the need to obtain a sufficiently reliable and precise depth map in a wide range of volumes, from a few microns up to several kilometers, and also to the real-time generation of three-dimensional information for 3D television, 3D movies, medicine, etc.
State of the Art
Adaptive optics for current large-diameter telescopes (GRANTECAN, Keck, ...) and future giant telescopes (50 or 100 meters in diameter) focuses on measuring the three-dimensional distribution of the atmospheric phase using a form of tomography called multi-conjugate adaptive optics. The sky does not contain enough natural point sources for one always to be present in the field of view of the object observed by the telescope, which forces the use of artificial point sources: sodium (Na) laser guide stars at 90 km altitude.
In order to correct the entire atmosphere affecting the light beam coming from the object in the sky, and thereby prevent focal anisoplanatism, it is necessary to use several of these artificial stars (at least 5). Each of them requires a very high resolution, high-powered pulsed laser to be generated, which translates into a very expensive technology. In addition, despite such a high cost, multi-conjugate optics is only capable of measuring the atmospheric phase associated with, at most, three horizontal layers of turbulence (with three simultaneously measuring phase sensors), i.e., it scans a very small proportion of the three-dimensional cylinder affecting the image. Furthermore, the phase estimate is recovered through calculations so complicated that they seriously hinder adaptive correction of the optical beam within the stability time of the atmosphere in the visible range (10 ms).
However, the background of the invention is not exclusively focused on the field of astrophysics. In the sector of optics and ophthalmology, the main interest in performing tomography of the human eye lies in medical specialists obtaining a clear image of the retinal fundus of the patient, in order to be able to more readily perform diagnostics. The aqueous humor, the vitreous humor and the crystalline lens of the eye behave like media which aberrate the image that can be obtained of the retinal fundus.
Even though in this case it is not necessary to take measurements as frequently as in the terrestrial atmosphere (every 10 ms), since the deformation is stable, sufficient three-dimensional resolution is required not only to obtain a good image of the retinal fundus, but also to detect the spatial location of possible ocular lesions.
Finally, in another sector such as that of television or cinematographic images, there are challenges relating to three-dimensional television, in which one of the essential problems is the real-time generation of contents, given that the techniques are so complex and laborious that they need human intervention during the process of generating 3D contents that can be shown on existing 3D displays. In this sense, the optimized implementation on parallel computing hardware (GPUs and FPGAs) of the techniques proposed herein allows generating three-dimensional contents in real time.
Approaches are known in the state of the art in the mentioned fields in which lenslets have been placed in the image plane of a converging lens, giving rise to devices and methods for the measurement of image parameters; nevertheless, they do not use said assembly for taking the tomographic measurement of the optical aberration, or for obtaining distances in the scene.
For example, Adelson and Wang ("Single lens stereo with a plenoptic camera", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14, No. 2, 99-106, 1992) use the device to obtain distances with a technique that is completely different from that of the present invention.
The authors Ren Ng et al. ("Light field photography with a hand-held plenoptic camera", Stanford Tech Report CTSR 2005-02) use the Fourier Slice technique associated with lenslets in the image plane only for obtaining focused photographs of real scenes in ranges of a few cubic meters of volume, with quality apparently better than the typical depth-of-field technique. In this case, the proposed technique can produce the focal stack only if it is applied repeatedly, once for each of the distances covering the required volume, involving computational requirements which would make real-time processing impossible.
Throughout the present specification, focal stack of a scene will be interpreted as the image volume that would result from taking a group of conventional photographic images, from the same point of view, but varying the focusing distance.
In relation to the processes of information extraction, it is known that a photographic image can be obtained from the light-field or four-dimensional plenoptic function, $f(u, v, x, y)$, by integrating all the rays reaching each point $(x, y)$ of the sensor, coming from each point $(u, v)$ of the plane of the lens. By means of said integration the plenoptic capture effect is undone. In other words, if the rays have been redirected to different spatial positions when captured with the lenslets, in order to recompose the conventional image it is necessary to regroup them, i.e., to integrate together, for a position $(x, y)$, what came from the different angular values $(u, v)$.
The image obtained by the operator $E_F(x, y) = \iint f(u, v, x, y)\,du\,dv$, or photographic formation integral, reproduces the photographic image that would have been obtained with a conventional sensor focused on the plane at a distance $F$, conjugated through the objective lens of the lenslets-sensor assembly. If it is desired to reimage on a plane at a distance $F' = \alpha \cdot F$, before or after $F$, Ren Ng demonstrates, by the similarity of triangles, that the following must be evaluated:

$$E_{\alpha F}(x, y) = \frac{1}{\alpha^2 F^2} \iint f\!\left(u,\; v,\; u + \frac{x - u}{\alpha},\; v + \frac{y - v}{\alpha}\right) du\,dv.$$
The evaluation of this operator for each possible distance $\alpha$ requires $O(N^4)$ operations and, therefore, for $N$ planes would require $O(N^5)$ operations, where $N$ is the resolution with which each of the dimensions of the plenoptic function is sampled.
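To make this computational burden concrete, the following is a minimal numpy sketch of the direct evaluation for a single refocusing plane; the array layout lf[u, v, x, y], the nearest-neighbour rounding and the omission of the 1/(α²F²) normalization are illustrative assumptions of this sketch, not details fixed by the specification.

```python
import numpy as np

def refocus_direct(lf, alpha):
    """Direct evaluation of the photographic formation integral for one
    refocusing plane.  lf is a light-field sampled as lf[u, v, x, y]
    (N x N x N x N); alpha is the relative refocusing distance F'/F.
    Cost: O(N^4) sums for this single plane, hence O(N^5) for N planes."""
    N = lf.shape[0]
    out = np.zeros((N, N))
    for u in range(N):
        for v in range(N):
            # Sample positions on the sensor for the sheared plane
            # x' = u + (x - u)/alpha, y' = v + (y - v)/alpha,
            # rounded to the nearest stored sample.
            xs = np.rint(u + (np.arange(N) - u) / alpha).astype(int)
            ys = np.rint(v + (np.arange(N) - v) / alpha).astype(int)
            ok_x = (xs >= 0) & (xs < N)
            ok_y = (ys >= 0) & (ys < N)
            out[np.ix_(ok_x, ok_y)] += lf[u, v][np.ix_(xs[ok_x], ys[ok_y])]
    return out  # normalization by 1/(alpha^2 F^2) omitted
```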
Ren Ng also demonstrates that if the 4D Fourier transform of the light-field is calculated, which involves $O(N^4 \log N)$ complex summation and multiplication operations, the different refocusing planes can be obtained by performing a truncated 2D rotation and a 2D inverse Fourier transform of the 4D transform of the light-field, each of them with a computational complexity of $O(N^2) + O(N^2 \log N)$, to be added to the initial cost of the Fourier transform of the sensed 4D function.
A method is therefore necessary which allows reducing the computational cost of the calculation of the focal stack and, accordingly, its calculation time.
The most closely related background for the method of calculating the focal stack of the present invention is the fast (or approximate, according to the author) discrete Radon transform proposed independently by Götz and Druckmüller ("A fast digital Radon transform - an efficient means for evaluating the Hough transform", Pattern Recognition, vol. 29, no. 4, pp. 711-718, 1996) and Brady ("A Fast Discrete Approximation Algorithm for the Radon Transform", SIAM J. Comput., vol. 27, no. 1, pp. 107-119, 1998). It simultaneously evaluates the summation of values along a series of discrete lines, each characterized by a slope and a displacement with respect to the origin, arranged on a two-dimensional data grid, by means of $O(N^2 \log N)$ sums, whereas the direct evaluation of the summation on each line would require $O(N)$ operations and, therefore, the evaluation for $N$ slopes and $N$ displacements would require $O(N^3)$ sums.
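The two-scale idea behind that transform is easy to see in two dimensions. Below is a compact numpy sketch of the positive-slope half of such a fast discrete Radon transform; the circular treatment of offsets and the floor/ceil split of the slope are simplifying choices of this sketch rather than details of the cited papers.

```python
import numpy as np

def fdrt_positive_slopes(img):
    """Approximate discrete Radon transform for rises s in [0, N) over
    the full width, after Gotz-Druckmuller and Brady.  img is N x N
    with N a power of two; returns R[s, d] = sum_u img[u, line_s(u) + d]
    in O(N^2 log N) sums.  Offsets d are circular here for brevity."""
    N = img.shape[0]
    # One strip per column: width-1 strips have a single slope (rise 0),
    # and their "partial line sums" are just the pixel values.
    state = [img[u, :].copy()[None, :] for u in range(N)]
    w = 1
    while w < N:
        merged = []
        for i in range(0, len(state), 2):
            left, right = state[i], state[i + 1]
            out = np.empty((2 * w, N))
            for s in range(2 * w):
                # A line of rise s across width 2w is approximated by two
                # half-lines of rise s//2, the right one lifted by ceil(s/2).
                half, lift = s // 2, (s + 1) // 2
                out[s] = left[half] + np.roll(right[half], -lift)
            merged.append(out)
        state = merged
        w *= 2
    return state[0]  # N slopes x N offsets
```

Each merging stage costs O(N^2) sums and there are log2 N stages, which is where the O(N^2 log N) total comes from; partial half-line sums are computed once and reused by every longer line that contains them.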
Finally, disregarding the processes for obtaining the information of the scene and considering the system of the present invention, Richard M. Clare and Richard G. Lane ("Wave-front sensing from subdivision of the focal plane with a lenslet array", J. Opt. Soc. Am. A, Vol. 22, No. 1, 117-125, 2005) have proposed a system in which a lenslet array is placed right at the focus of the converging lens, not at an arbitrary position of the image space, and by means of which the phase of the wavefront is obtained only at the pupil of the lens.
Therefore, a method is necessary which allows determining the phase of the wavefront tomographically, i.e., at any distance within the three-dimensional volume of the object space, not only at the pupil of the lens.
Description of the Invention
The present invention solves the previously described drawbacks by providing, in a first aspect, a method for calculating the focal stack associated with a scene according to claim 1, which allows significantly reducing the computational cost of the process and the calculation time, a considerable improvement with respect to the methods known in the state of the art.
An additional object of the present invention is to propose a method for measuring distances in scenes and a method for measuring the complex amplitude of the electromagnetic field which are computationally optimized and suitable for their parallel computation.
The present invention allows:
- Working with a single measurement and a single sensor within each interval of atmospheric stability.
- A recovery of the modulus and phase associated with each turbulent horizontal layer, i.e., tomography of the entire atmosphere by means of the method for calculating the focal stack, that is fast in consideration of the number and type of operations (sums) it uses, but which can be accelerated with an intelligent adaptation thereof to graphic processing units (GPU) or to reconfigurable hardware units, such as FPGAs (Field Programmable Gate Arrays).
- Avoiding the need to use artificial laser stars, because it is capable of performing the real-time recovery of the image of the object as it arrives at the terrestrial atmosphere, since this new technique does not require calibration with a point signal for subsequent deconvolution.
The method for calculating the focal stack of the invention, referred to as SCPH (Summation of Constrained Planes in a Hypercube) Transform, allows obtaining a series of planes of a three-dimensional scene, focused in different positions along the length of the optical axis, reducing the computational complexity of the process.
The method for calculating the focal stack of the invention is based on the principles of multiscale methods for the computation of the discrete fast Radon transform, and minimizes the number of operations to be performed by means of reusing partial results.
For the purpose of reducing the computational cost of the calculation of the focal stack, the method of the present invention uses a sum transform along the length of constrained planes in discrete hypercubes. It must be observed that the photographic formation integral is geometrically equivalent to evaluating the integral along the length of planes in a function the domain of which is a 4-dimensional hypercube.
With this being understood, the photographic formation integral is a particular case of

$$E(j, k) = \iint f(u, v, u \cdot r_1 + j, v \cdot r_2 + k)\,du\,dv,$$

adding the restriction that the slopes $r_1$ and $r_2$ defining the integration planes are equal for the case at hand, which allows reducing the number of operations to be performed.
The proposed method for calculating the focal stack consists of simultaneously computing the sum of the values positioned in the discrete 4D function, $f(u, v, x, y)$, on planes such that the coordinates of the points located therein simultaneously meet $x = u \cdot r + j$ and $y = v \cdot r + k$, under certain discretization conditions, reusing the partial sums of points contained in more than one discrete plane, where $u$ and $v$ are the horizontal and vertical dimensions on the plane of the lens, $x$ and $y$ are the horizontal and vertical dimensions on the sensor plane, and $j$, $k$ and $r$ are the horizontal, vertical and depth dimensions of the focal stack that is to be obtained. In other words, the algorithm computing the fast approximate discrete Radon transform, which already existed for the case of line integrals in the plane, is extended to integrals along planes in the 4-dimensional hypercube, with the added constraint that the horizontal and vertical slopes are the same.
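Before the fast version is developed, the plane summation itself can be stated as a direct reference computation. The sketch below evaluates the constrained-plane sums by brute force; the rounding of u·r and the handling of out-of-range samples (simply skipped) are choices of this sketch, and its cost is precisely what the SCPH transform is designed to avoid.

```python
import numpy as np

def focal_stack_direct(lf):
    """Reference (non-fast) evaluation of the constrained-plane sums
    S[r, j, k] = sum_{u,v} f(u, v, u*r + j, v*r + k) with equal
    horizontal and vertical slopes r, for the N non-negative slopes.
    Direct cost is O(N^5) sums; the SCPH transform reaches the same
    result in O(N^4) sums by reusing partial sums shared by planes."""
    N = lf.shape[0]
    stack = np.zeros((N, N, N))  # indexed by [slope, j, k]
    for ri in range(N):
        r = ri / (N - 1)  # slope s/(N-1), as in the discrete line
        for u in range(N):
            for v in range(N):
                x = int(round(u * r))  # discrete plane height at (u, v)
                y = int(round(v * r))
                # f(u, v, x + j, y + k) contributes to plane ri at (j, k)
                stack[ri, :N - x, :N - y] += lf[u, v, x:, y:]
    return stack
```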
The partial transformation of the data, up to step $m$, is defined as:

$$\hat f^{\,m}(\underbrace{r}_{m\text{ bits}}, \underbrace{s}_{n-m\text{ bits}}, \underbrace{t}_{n-m\text{ bits}}, j, k) = \sum_{u \in \{0,1\}^m}\;\sum_{v \in \{0,1\}^m} f\bigl(\eta(s \cdot u),\; \eta(t \cdot v),\; \ell_r(\eta(u)) + j,\; \ell_r(\eta(v)) + k\bigr),$$

where it is considered that the function has $N \times N \times N \times N$ dimensions, with $n = \log_2 N$; the function $\ell_s$ describes the discrete manner in which the succession of points $(u, \ell_s(u) + d)$, with $u \in [0, N)$, joins the points $(0, d)$ and $(N-1, s+d)$, forming a discrete line of slope $s/(N-1)$; the function $\eta(u_0, \ldots, u_{n-1}) = \sum_i 2^i u_i$ returns the value $u$ corresponding to the binary $n$-tuple $(u_0, \ldots, u_{n-1}) \in \{0,1\}^n$; and $\cdot$ denotes bit concatenation.
If the transformed data up to step 0, $\hat f^{\,0}(r, s, t, j, k)$, are made equal to the captured data, $f(s, t, j, k)$:

$$\hat f^{\,0}(r, s, t, j, k) = f(s, t, j, k),$$

then:

$$\hat f^{\,n}(r, j, k) = \sum_{u, v \in \{0,1\}^n} f\bigl(\eta(u),\; \eta(v),\; \ell_r(\eta(u)) + j,\; \ell_r(\eta(v)) + k\bigr),$$

which approximates the photographic formation integral $E_r(j, k) = \iint f(u, v, u \cdot r + j, v \cdot r + k)\,du\,dv$ for a volume of $N$ depth planes.
Other N depth planes can be similarly computed for negative slopes. Since both half-volumes share depth 0, a total focal stack made up of 2N-1 images refocused at different distances would be obtained.
The recurrence formula which maps two partial steps $m$ and $m+1$ completely describes the method, and must be applied $n$ times:

$$\begin{aligned} \hat f^{\,m+1}(b \cdot r, s, t, j, k) ={} & \hat f^{\,m}(r,\; 0 \cdot s,\; 0 \cdot t,\; j,\; k) \\ &+ \hat f^{\,m}(r,\; 1 \cdot s,\; 0 \cdot t,\; j - \delta,\; k) \\ &+ \hat f^{\,m}(r,\; 0 \cdot s,\; 1 \cdot t,\; j,\; k - \delta) \\ &+ \hat f^{\,m}(r,\; 1 \cdot s,\; 1 \cdot t,\; j - \delta,\; k - \delta), \end{aligned}$$

where $b \in \{0,1\}$ is the bit appended to the slope and $\delta$ is the displacement of the discrete line of the extended slope $b \cdot r$ at the boundary between the two merged half-blocks. It must be observed that the domain over which the partial transformation $\hat f^{\,m+1}$ is described is half that required by $\hat f^{\,m}$, the data being progressively transformed from a 4D domain into a 3D domain, in a process requiring $O(N^4)$ sums, which translates into a savings in computing time of greater than eighty percent with respect to current processes.
By following this process, and with the methods herein proposed, it is possible to recover depths and moduli and phases of the complex amplitude of the wavefront in each position of the surfaces of the scene, which allows complete three-dimensional and real-time scanning of the scene, therefore being very applicable in the aforementioned fields.
The method for calculating the focal stack of the present invention advantageously requires no multiplications or trigonometric operations, only summations, and its computational complexity is of the order of $O(N^4)$ for a final volume containing $2N-1$ photographic planes focused at different depths, from a light-field sampled at resolution $N$ in each of its dimensions.
The method of the present invention allows computing the focal stack for an entire volume and with a smaller number of operations than what the methods for obtaining the focal stack described in the state of the art (Ren Ng) use for computing a single plane.
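As an illustrative order-of-magnitude comparison only (the $O(\cdot)$ terms hide different constants, and the Fourier approach uses complex multiplications while SCPH uses only sums), consider a light-field sampled at $N = 256$ per dimension:

$$N = 256:\qquad \underbrace{N^5 \approx 1.1 \times 10^{12}}_{\text{direct integration, all planes}} \qquad \underbrace{N^4 \log_2 N \approx 3.4 \times 10^{10}}_{\text{4D Fourier approach}} \qquad \underbrace{N^4 \approx 4.3 \times 10^{9}}_{\text{SCPH transform}}$$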
Nevertheless, a problem which arises in capturing the 4D light-field is the need to use a 2-dimensional sensor. For obtaining the entire 4D volume of information on a 2D sensor, a sensor with a very high resolution is necessary. Thus, for a sensor of $O(N^4)$ pixels, it is only possible to obtain a focal stack made up of images of $O(N^2)$ pixels and therefore only $O(N^2)$ distances. This reduction of resolution of the order of $O(N^2)$ makes it necessary to use very expensive sensors.
To solve this problem, a second aspect of the present invention provides a method for improving the resolution of the images of the focal stack according to claim 2. In said method, it is assumed that the elements of the scene have Lambertian-type reflectance, i.e., the intensity emitted by a point in the object is independent of the angle. In this case, the light-field has redundant information of the order of $O(\alpha N^2)$, where $\alpha$ is an arbitrary constant with $0 < \alpha < 1$. This redundant information can be used to increase the resolution of the images of the focal stack.
The method according to claim 2 allows going from a focal stack with images with resolution $O(N^2)$ to another with resolution $O((1-\alpha) N^4)$.
Given an image of the focal stack corresponding to a plane at a determined distance, the method of increasing resolution of the focal stack comprises:
1. Back-projecting the $O(N^4)$ rays of the light-field from that plane, constructing a high-resolution image from the fractional positions of those rays.
2. Determining the redundancies due to the Lambertian assumption.
3. In this image with an increased resolution (with super-resolution) there are two types of pixels: those for which there are no redundancies, where the value of the back-projected ray is placed, and those for which there are redundancies, where a representative value of the redundant rays is placed, such as the mean.
Assuming $O(\alpha N^2)$ redundancies for each of the $O(N^2)$ elements of the image of the original focal stack, the final resolution of the image of the focal stack would be increased from $O(N^2)$ to $O((1-\alpha) N^4)$.
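A minimal sketch of steps 1-3 follows, assuming a numpy light-field lf[u, v, x, y], nearest-cell splatting onto the fine grid, and the mean as the representative value; the fine-grid size and the treatment of never-hit cells (left at zero) are illustrative choices of this sketch, not the invention's prescription.

```python
import numpy as np

def superresolve_plane(lf, alpha, factor=4):
    """Back-project all O(N^4) rays onto a finer grid at the plane
    alpha; rays landing in the same fine cell are redundant under the
    Lambertian assumption and are averaged (steps 1-3 above)."""
    N = lf.shape[0]
    M = factor * N          # fine grid; the limit case would be M ~ N^2
    acc = np.zeros((M, M))  # sum of back-projected ray values
    cnt = np.zeros((M, M))  # number of rays hitting each fine cell
    for u in range(N):
        for v in range(N):
            # Fractional position where ray (u, v, x, y) pierces the
            # refocusing plane (same shear as the refocusing integral).
            xp = (u + (np.arange(N) - u) / alpha) * factor
            yp = (v + (np.arange(N) - v) / alpha) * factor
            ix = np.clip(np.rint(xp).astype(int), 0, M - 1)
            iy = np.clip(np.rint(yp).astype(int), 0, M - 1)
            np.add.at(acc, np.ix_(ix, iy), lf[u, v])
            np.add.at(cnt, np.ix_(ix, iy), 1.0)
    # Redundant cells get the mean of their rays; empty cells stay 0.
    return np.where(cnt > 0, acc / np.maximum(cnt, 1.0), 0.0)
```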
In the context of the present specification, super-resolution is interpreted as any increase in the definition of both the image and the associated distance map with respect to the resolution in lenslets, $O(N^2)$. Its maximum limit is therefore the number of pixels, $O(N^4)$, a resolution of the order of $O((1-\alpha) N^4)$ being advantageously obtained with the method of the invention.
Typically, the value of $\alpha$ depends on the image of the focal stack being considered. For the calculation of distances, a minimum resolution is fixed and the images are implicitly or explicitly resampled to obtain said minimum resolution.
A third aspect of the invention presents a method for the real-time measurement of distances in three-dimensional scenes according to claim 3. Said method comprises obtaining an image of the object space with a phase camera, calculating the focal stack by means of the method of claim 1, applying a focus quality measurement operator (variance, Laplacian, gradient) to the focal stack, and calculating the optimal state of a Markov random field.
The steps of the aforementioned method can be adapted to be implemented on parallel computing hardware, such as a GPU
or an FPGA, which results in an even more advantageous optimization of the processing time.
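As a minimal illustration of this distance pipeline, the sketch below scores each plane of the focal stack with a local-variance focus measure and takes a per-pixel argmax; the window size is an arbitrary choice, and the argmax merely stands in for the Markov-random-field optimization the method actually prescribes.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def depth_from_focal_stack(stack, dists, win=7):
    """stack: refocused planes (P x H x W); dists: the P focusing
    distances.  Returns a per-pixel depth map."""
    scores = np.empty(stack.shape, dtype=float)
    for p, img in enumerate(stack):
        img = img.astype(float)
        m = uniform_filter(img, win)
        # Local variance as the focus-quality measure: E[x^2] - E[x]^2.
        scores[p] = uniform_filter(img * img, win) - m * m
    best = np.argmax(scores, axis=0)  # sharpest plane per pixel
    # Winner-take-all stand-in; the method optimizes an MRF instead.
    return np.asarray(dists)[best]
```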
In a fourth aspect, the invention provides a method for the real-time tomographic measurement of the complex amplitude of the electromagnetic field associated with a wavefront of a scene according to claim 5. Said method comprises obtaining an image of the object space with a phase camera, calculating the focal stack by means of the method of claim 1, which provides the tomography of the squared modulus of the complex amplitude of the wavefront, applying an operator for the generation of the gradients of the wavefront phase at any point of the volume of the object space, and recovering the phase of the wavefront of the associated electromagnetic field.
The steps of the aforementioned method can be adapted to be implemented on parallel computing hardware, such as a GPU
or an FPGA, which results in an even more advantageous optimization of the processing time.
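The final phase-recovery step can be illustrated with a standard least-squares integration of the two measured gradient maps in the Fourier domain (a Frankot-Chellappa-style integrator). This sketch assumes periodic boundaries and says nothing about how the gradients themselves are generated at each depth of the volume, which is the role of the operator named above.

```python
import numpy as np

def phase_from_gradients(gx, gy):
    """Least-squares phase map from its x and y gradients (radians per
    sample), integrated in the Fourier domain.  The piston term (mean
    phase) is unobservable from gradients and is set to zero."""
    ny, nx = gx.shape
    fx = np.fft.fftfreq(nx)[None, :]  # spatial frequencies, cycles/sample
    fy = np.fft.fftfreq(ny)[:, None]
    GX, GY = np.fft.fft2(gx), np.fft.fft2(gy)
    denom = 2.0 * np.pi * (fx ** 2 + fy ** 2)
    denom[0, 0] = 1.0                 # avoid 0/0 at the zero frequency
    PHI = -1j * (fx * GX + fy * GY) / denom
    PHI[0, 0] = 0.0                   # piston set to zero
    return np.real(np.fft.ifft2(PHI))
```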
Both in the method for the real-time measurement of distances in three-dimensional scenes and in the method for the real-time tomographic measurement of the complex amplitude of the electromagnetic field associated with a wavefront, the measurements of the object space are performed only once, i.e., a single image contains enough information for recovering the three-dimensional environment. Such image can be understood as being made up of 4 dimensions: two coordinates on the detector associated with the inside of each lenslet and two other coordinates associated with the lenslet array.
The assembly would consist of a single lenslet array forming an image on a detecting surface of sufficient resolution (for example, a CCD device), located in a position of the image space of a converging lens, which allows taking tomographic measurements of the three-dimensional object space.
Finally, a fifth aspect of the present invention provides a phase camera for the real-time acquisition of the visual information of three-dimensional scenes according to claim 7, which comprises a converging lens, a lenslet array placed in a position of the image space of the converging lens and forming an image on a detecting surface of sufficient resolution, and parallel computation processing means adapted for calculating the focal stack associated with the object space measured by the camera by means of the method of claim 1, for obtaining the complex amplitude of the electromagnetic field (modulus and phase) and for obtaining the distance at any position of the sensed object space.
Compared to the methods and systems of the state of the art, the method for calculating the focal stack according to the present invention allows determining the phase of the wavefront tomographically, i.e., at any distance within the three-dimensional volume of the object space. Furthermore, the method for calculating the focal stack of the present invention allows characterizing the electromagnetic field not only by the phase of the wavefront in a plane, but rather by the complex amplitude of the electromagnetic field associated with the wavefront in the entire volume.
Description of the Drawings
To complement the description that will be provided below and for the purpose of aiding to better understand the features of the invention, a set of drawings is attached as an integral part of said description, in which the following has been depicted with an illustrative and non-limiting character:
Figure 1 shows a schematic depiction of the main elements of the camera for the real-time acquisition of the visual information of three-dimensional scenes according to the invention.
Figure 2 shows a conceptual diagram of the invention applied to a telescope with a large main mirror (1) for performing the atmospheric tomography in the astrophysical observation of a star (8) with adaptive optics.
Figure 3 shows a conceptual diagram of a classic astrophysical observation of a star (8) using multiconjugate adaptive optics in two layers of turbulence in the atmosphere (9) and (10).
Detailed Description of a Preferred Embodiment of the Invention
In a first practical embodiment of the invention, the measurement of the distance at which the objects of a scene are located is considered.
The particular case of the observation from this invention of a scene consisting of the inside of a furnished room is considered, where several objects located at depths ranging from 0.5 up to 4 meters from the position of the camera can be distinguished as components.
Figure 1 schematically depicts the arrangement of an aperture lens (1), a lenslet array (2) and a detecting surface (3) included in the phase camera according to the invention.
Furthermore, the figure shows the distance (5) from the converging lens at which a determined object of the object space is focused, the focal point (6) of each lenslet of the lenslet array, the local angle of inclination of the wavefront (7), and the displacement in the optical path (4) experienced by a turbulent wavefront with respect to one that is not aberrated. The camera of the invention also comprises processing means not depicted in the drawing.
To form the camera of the invention for the real-time acquisition of the visual information of three-dimensional scenes, a CCD sensor with a maximum resolution of 4000x2672, Imperx IPX-11M5 model, is used. Following the assembly of Figure 1, before the CCD there is an objective lens with the same focal ratio as the subsequent lenslet array (16x16, in an F-Nikon mount) focused on the CCD and covering 1024x1024 pixels. The camera in this arrangement has a horizontal angular aperture of 30°, and is focused on the central region of the scene, at approximately 2 meters.
The detected image is treated with processing means, in this example a GPU nVidia 8800 GTX graphics card adapted for:
- calculating the focal stack by means of the method for calculating the focal stack of the invention, adapted to be optimally implemented on parallel computing hardware;
- applying a focus quality measurement operator to the focal stack, for example the "variance" operator for estimating the quality of focus, optimally adapted for the same parallel computing hardware; and
- recovering the distances by calculating the optimal state of a Markov random field, for example belief propagation based on Kolmogorov's tree re-weighting, implemented in an optimized manner on parallel computing hardware.
The depth map of the scene is obtained with the camera and the method of the invention.
In a second practical embodiment of the invention, the measurement of the complex amplitude of the electromagnetic field is considered.
The particular case of an astrophysical observation with a telescope having a diameter greater than the coherence diameter r0 of the atmosphere (approximately 20 cm in the visible range) is considered. The turbulence of the atmosphere causes a loss of resolution in the image obtained with the telescope, i.e., a loss of the information at high spatial frequencies. To correct this, the manner in which the atmospheric turbulence degrades the wavefront of the light coming from the star under study must be known. To that end, natural or artificial point stars, which allow characterizing the deformation introduced by the atmosphere in the wavefront, can be used as a reference.
Figure 3 schematically depicts an astrophysical observation of a star (8) using classic multiconjugate adaptive optics in two layers of turbulence in the atmosphere.
With multiconjugate adaptive optics, a wavefront phase sensor must be used for each deformable mirror conjugated to an individual layer of turbulence, i.e., two different wavefront phase sensors (WFS) which must be aligned and operated in parallel at different positions along the optical axis. The figure depicts a telescope (1) and two wavefront sensors (11) and (12) conjugated to the turbulent layers (9) and (10). Only a very small number of individual layers of turbulence (three at most) can be recovered with classic multiconjugate adaptive optics. The complexity of the calculations and the need for speed, since the atmosphere changes every 10 milliseconds in the visible range, currently make it impossible to correct more than three layers of atmospheric turbulence.
With the present invention, according to the design shown in Figure 1, whose operation in this case is shown in Figure 2, only one sensor is used, placed at a single position on the optical axis.
Figure 2 schematically shows a system according to the invention for performing atmospheric tomography in the astrophysical observation of a star (8) with adaptive optics.
The individual layers of turbulence in the atmosphere correspond to (9) and (10). In this case, the telescope (1) itself, described above, functions as the objective lens. A lenslet array (2) (32x32, C mount) focusing on an IXON ANDOR
model 512x512 pixel camera (3) is placed in its image space.
The phase camera of the invention allows scanning the complete cylinder of atmospheric turbulence (13) affecting the final image of the telescope. The data are collected and treated by means of an FPGA (Virtex ML501 model), previously adapted to perform the following process:
- calculating the focal stack by means of the method of claim 1, where the square root of the focal stack directly supplies the modulus of the complex amplitude of the electromagnetic field at any point of the volume of the object space;
- applying an operator for generating the gradients of the wavefront phase at any point of the volume of the object space (the Clarke and Lane operator, for example); and
- recovering the phase of the wavefront of the associated electromagnetic field, for example by means of an expansion in complex exponentials, an expansion in Zernike polynomials, or the Hudgin algorithm (a simplified sketch of this stage follows).
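A minimal sketch of this wavefront-recovery stage. The square root of the focal stack gives the modulus directly; for the phase, a Hudgin-style least-squares integration of the gradient maps is shown, assuming gx and gy have already been generated (the Clarke and Lane operator itself is not reproduced here). Grid size and boundary handling are illustrative.

```python
import numpy as np

def hudgin_phase(gx, gy):
    """Recover the wavefront phase (up to a constant) from
    forward-difference gradients on an n x n grid by least squares,
    in the spirit of the Hudgin geometry."""
    n = gx.shape[0]
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for i in range(n):
        for j in range(n):
            p = i * n + j
            if j + 1 < n:  # gx[i, j] = phi[i, j+1] - phi[i, j]
                rows += [eq, eq]; cols += [p + 1, p]; vals += [1.0, -1.0]
                rhs.append(gx[i, j]); eq += 1
            if i + 1 < n:  # gy[i, j] = phi[i+1, j] - phi[i, j]
                rows += [eq, eq]; cols += [p + n, p]; vals += [1.0, -1.0]
                rhs.append(gy[i, j]); eq += 1
    A = np.zeros((eq, n * n))
    A[rows, cols] = vals
    phi, *_ = np.linalg.lstsq(A, np.asarray(rhs), rcond=None)
    return phi.reshape(n, n) - phi.mean()  # remove the free piston term

# The modulus at every plane comes directly from the focal stack:
# modulus = np.sqrt(focal_stack)
```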
Unlike previous techniques, in the method of the invention the operator for generating the gradients of the wavefront phase is applied locally, rather than to the plenoptic image as a whole.
A single measurement, subsequently processed as described, allows the real-time acquisition of the three-dimensional map of turbulence (the complex amplitude of the wavefront) associated with the entire atmospheric column affecting the observation with the telescope used, together with the heights at which the layers of turbulence are located, as well as the distance and three-dimensional profile of an artificial laser star if one is used.
Claims (10)
1. A method for calculating a focal stack associated with an object space from the discrete plenoptic function thereof, f(s,t,j,k), which comprises evaluating the photographic formation integral as a sum along planes in a 4D hypercube, said evaluation of the photographic formation integral in turn comprising the following steps:
at the start of the computation, making the captured data, f(s,t,j,k), equal to the data transformed up to step 0, f̃0(r,s,t,j,k), i.e., f̃0(r,s,t,j,k) = f(s,t,j,k); and then applying n = log2(N) times the following partial transformation:
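The partial transformation of the claim is given as a formula not reproduced in this text. For orientation only, the sketch below evaluates the photographic formation integral by brute force: each plane of the focal stack is a sum of the 4D light field along one family of planes of the hypercube (shift-and-add). It is this O(N^5)-style sum that the multiscale transform of the claim accelerates; the integer-shift parametrization and the wrap-around borders are simplifying assumptions.

```python
import numpy as np

def naive_focal_stack(lf, slopes):
    """lf: 4D light field f[s, t, j, k]; slopes: integer per-lenslet
    shifts, one per refocusing plane. Returns (len(slopes), J, K)."""
    S, T, J, K = lf.shape
    planes = []
    for q in slopes:
        acc = np.zeros((J, K))
        for s in range(S):
            for t in range(T):
                # shift the (j, k) image of subaperture (s, t) in
                # proportion to its pupil position, then accumulate
                # (np.roll wraps at the borders -- a simplification)
                acc += np.roll(lf[s, t], (q * s, q * t), axis=(0, 1))
        planes.append(acc / (S * T))
    return np.stack(planes)
```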
2. A method for improving the resolution of the images of the focal stack obtained by means of the method according to claim 1, which comprises the following steps:
given an image of the focal stack at a determined distance, back-projecting the O(N^4) rays of the light field, constructing a high-resolution image from the fractional positions of these rays, determining the redundancies that result from assuming that the elements of the scene have a Lambertian-type reflectance, and, in the positions of the super-resolved focal stack where there are no redundancies, placing the value of the back-projected ray and, where there are redundancies, placing a representative value of the redundant rays, such as their mean.
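A minimal sketch of this super-resolution scheme: every light-field ray is back-projected to a fractional position on an up-sampled grid for the chosen plane, and positions receiving several (Lambertian-redundant) rays are filled with their mean. The ray-to-plane mapping and the up-sampling factor used here are illustrative assumptions, not the patent's exact geometry.

```python
import numpy as np

def superres_plane(lf, q, up=4):
    """Back-project all O(N^4) rays of the light field lf[s, t, j, k]
    onto an up-sampled image of the plane with refocus parameter q,
    averaging wherever several rays land on the same position."""
    S, T, J, K = lf.shape
    H, W = up * J, up * K
    acc = np.zeros((H, W))
    hits = np.zeros((H, W))
    for s in range(S):
        for t in range(T):
            for j in range(J):
                for k in range(K):
                    # fractional landing position of ray (s, t, j, k)
                    # under an assumed linear refocusing model
                    x = int(round(up * (j + q * (s - S / 2) / S))) % H
                    y = int(round(up * (k + q * (t - T / 2) / T))) % W
                    acc[x, y] += lf[s, t, j, k]
                    hits[x, y] += 1
    return np.where(hits > 0, acc / np.maximum(hits, 1), 0.0)
```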
3. A method for the real-time measurement of distances in three-dimensional scenes which comprises the following steps:
obtaining an image of the object space with a phase camera, calculating the focal stack by means of the method of claim 1, applying a focus quality measurement operator to the focal stack, and calculating the optimal state of a Markov random field.
4. The method for the real-time measurement of distances in three-dimensional scenes according to claim 3 which in addition comprises improving the resolution of the images of the focal stack by means of the method of claim 2.
5. A method for the real-time tomographic measurement of the complex amplitude of the electromagnetic field associated with a wavefront which comprises the following steps:
obtaining an image of the object space with a phase camera, calculating the focal stack by means of the method of claim 1, where the square root of the focal stack directly supplies the modulus of the complex amplitude of the electromagnetic field at any point of the volume of the object space, applying an operator for the generation of the gradients of the wavefront phase at any point of the volume of the object space, and recovering the phase of the wavefront of the associated electromagnetic field.
6. The method for the real-time tomographic measurement of the complex amplitude of the electromagnetic field associated with a wavefront according to claim 5 which in addition comprises improving the resolution of the images of the focal stack by means of the method of claim 2.
7. A phase camera for the real-time acquisition of the visual information of three-dimensional scenes, which comprises a converging lens, a lenslet array placed in a position of the image space of the converging lens and forming an image on a detecting surface of sufficient resolution, and parallel-computation processing means adapted for calculating the focal stack associated with the object space measured by the camera by means of the method of claim 1, obtaining the complex amplitude of the electromagnetic field (modulus and phase), and obtaining the distance at any position of the sensed object space.
8. The phase camera for the real-time acquisition of the visual information of three-dimensional scenes according to claim 7, wherein the processing means are adapted for improving the resolution of the images of the focal stack by means of the method of claim 2.
9. The phase camera for the real-time acquisition of the visual information of three-dimensional scenes according to claim 7, wherein the processing means are adapted for obtaining the distance in any position of the object space by means of the method of claim 3.
10. The phase camera for the real-time acquisition of the visual information of three-dimensional scenes according to claim 7, wherein the processing means are adapted for obtaining the complex amplitude of the electromagnetic field by means of the method of claim 5.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
ESP200800126 | 2008-01-15 | ||
ES200800126A ES2372515B2 (en) | 2008-01-15 | 2008-01-15 | CAMERA FOR THE REAL-TIME ACQUISITION OF THE VISUAL INFORMATION OF THREE-DIMENSIONAL SCENES. |
PCT/ES2009/000031 WO2009090291A1 (en) | 2008-01-15 | 2009-01-15 | Method and camera for the real-time acquisition of visual information from three-dimensional scenes |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2711727A1 (en) | 2009-07-23 |
CA2711727C CA2711727C (en) | 2016-12-06 |
Family
ID=40885085
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA2711727A (CA2711727C) (en) | Method and camera for the real-time acquisition of visual information from three-dimensional scenes | 2008-01-15 | 2009-01-15 |
Country Status (21)
Country | Link |
---|---|
US (1) | US8471897B2 (en) |
EP (1) | EP2239706B1 (en) |
JP (1) | JP5269099B2 (en) |
KR (1) | KR101590778B1 (en) |
CN (1) | CN101952855B (en) |
AU (1) | AU2009204788B2 (en) |
BR (1) | BRPI0907392A2 (en) |
CA (1) | CA2711727C (en) |
CO (1) | CO6300819A2 (en) |
DK (1) | DK2239706T3 (en) |
EG (1) | EG26176A (en) |
ES (1) | ES2372515B2 (en) |
HK (1) | HK1153297A1 (en) |
IL (1) | IL206807A (en) |
MA (1) | MA32069B1 (en) |
MX (1) | MX2010007697A (en) |
NZ (1) | NZ586721A (en) |
RU (1) | RU2502104C2 (en) |
UA (1) | UA103759C2 (en) |
WO (1) | WO2009090291A1 (en) |
ZA (1) | ZA201004763B (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| EEER | Examination request | Effective date: 20140110 |