WO1993014384A1 - Multifocal optical apparatus - Google Patents

Multifocal optical apparatus

Info

Publication number
WO1993014384A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
imager
scene
sensor
Prior art date
Application number
PCT/US1993/000330
Other languages
French (fr)
Other versions
WO1993014384B1 (en)
Inventor
Shelemyahu Zacks
Ziv Soferman
Original Assignee
Shelemyahu Zacks
Ziv Soferman
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shelemyahu Zacks and Ziv Soferman
Priority to JP5512658A (JPH07505725A)
Priority to EP19930904495 (EP0623209A4)
Publication of WO1993014384A1
Priority to GB9413367A (GB9413367D0)
Publication of WO1993014384B1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/36Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals

Definitions

  • the present invention relates to focusing optical systems.
  • a significant limitation in the operation of many optical devices is their limited depth of field, i.e. the depth over which an object in a field of view remains in acceptable focus.
  • the present invention seeks to provide optical apparatus capable of in-focus imaging of a significantly greater depth than that imaged by conventional optical apparatus.
  • optical apparatus including multifocal imaging apparatus defining a depth domain, sensor apparatus receiving light from a scene via the multifocal imaging apparatus and image processing apparatus for receiving an output of the sensor apparatus and providing an in-focus image of the portion of the scene falling within the depth domain.
  • the depth domain includes all planes falling between and parallel to first and second planes which are both orthogonal to the optical axis.
  • the depth domain includes a first subdomain including all planes falling between and parallel to first and second planes which are both orthogonal to the optical axis and a second subdomain, disjoint to the first subdomain, which includes all planes falling between and parallel to third and fourth planes which are both orthogonal to the optical axis and which are not included in the first subdomain.
  • the domain includes more than two disjoint subdomains such as those described above.
  • optical apparatus including imaging apparatus defining a plurality of surfaces each arranged to receive light from a scene, each of the plurality of surfaces having a different focal length, sensor apparatus receiving light from the scene from each of the plurality of surfaces, such that a different part of the light from the scene received from each of the plurality of surfaces is in focus, and image processing apparatus for receiving an output of the sensor apparatus and providing an image including in-focus parts received from each of the plurality of surfaces.
  • the image processing apparatus is operative to provide a restoration procedure which produces a sharp image from a blurred image provided by the optical apparatus.
  • the imaging apparatus includes lens apparatus.
  • the imaging apparatus includes mirror apparatus.
  • an image processing method including the steps of providing a digital representation of a viewed scene including at least two scene locations whose distances from the point of view are nonequal, dividing the digital representation of the viewed scene into a multiplicity of digital representations of a corresponding plurality of portions of the viewed scene, separately sharpening each of the multiplicity of digital representations and assembling the multiplicity of sharpened digital representations into a single sharpened digital representation.
  • the step of dividing and sharpening includes the steps of operating a plurality of restoration filters on each of the multiplicity of digital representations, and selecting, for each individual one of the multiplicity of digital representations, an individual one of the plurality of restoration filters to be employed in providing the sharpened digital representation of that digital representation.
  • optical apparatus including a multifocal imager defining a depth domain, a sensor receiving light from a scene via the multifocal imager, and an image processor for receiving an output of the sensor and for providing an in-focus image of the portion of the scene falling within the depth domain.
  • the image processor operates in real time.
  • the image processor operates off line with respect to the sensor.
  • the sensor includes photographic film.
  • the image processor includes an image digitizer operative to provide a digital representation of the scene to the image processor.
  • the multifocal imager defines a plurality of surfaces each arranged to receive light from a scene, each of the plurality of surfaces having a different focal length.
  • the sensor receives light from the scene via each of the plurality of surfaces, such that a different part of the light from the scene received via each of the plurality of surfaces is in focus.
  • the image processor is operative to provide a composite image built up using in-focus parts received from each of the plurality of surfaces.
  • the imager includes a lens and/or a mirror system.
  • the imager has an invertible transfer function and the image processor includes a computational unit for inverting the transfer function, thereby to restore sharp details of the scene.
  • the transfer function includes no zeros.
  • absolute value of the transfer function has a predetermined lower bound which is sufficiently large to permit image restoration.
  • the absolute value of the transfer function has a predetermined lower bound which is sufficiently large to permit image restoration for a predetermined domain of distances from the imager.
  • video camera apparatus including an imager defining a depth domain and a sensor receiving light from a scene via the imager for generating video images, wherein the imager includes a multifocal imager, and wherein the video camera apparatus also includes an image processor for receiving an output of the sensor and for providing an in-focus image of the portion of the scene falling within the depth domain.
  • electronic still camera apparatus including an imager defining a depth domain and a sensor receiving light from a scene via the imager for generating electronic still images, wherein the imager includes a multifocal imager, and wherein the electronic still camera apparatus also includes an image processor for receiving an output of the sensor and for providing an in-focus image of the portion of the scene falling within the depth domain.
  • camcorder apparatus including an imager defining a depth domain and a sensor receiving light from a scene via the imager for generating video images, wherein the imager includes a multifocal imager, and wherein the camcorder apparatus also includes an image processor for receiving an output of the sensor and for providing an in-focus image of the portion of the scene falling within the depth domain.
  • film camera development apparatus including a multifocal-combining image processor operative to receive a multifocal representation of a scene and to provide an in-focus representation of a portion of the scene falling within a predetermined depth domain.
  • microscope apparatus including an objective lens including a multifocal imager defining a depth domain, a sensor receiving light from a scene, having more than two dimensions, via the multifocal imager, and an image processor for receiving an output of the sensor and for providing an in-focus image of a plurality of planes within the scene which fall within the depth domain.
  • apparatus for internal imaging of the human body including a multifocal imager defining a depth domain, which is configured for insertion into the human body, a sensor receiving light from an internal portion of the human body via the imager, and an image processor for receiving an output of the sensor and for providing an in-focus image of the internal portion of the human body falling within the depth domain.
  • multifocal imager is operative to provide an image, including a superposition of in-focus contributions of a plurality of planes, to a single sensor.
  • the multifocal imager is operative to provide an image, including a superposition of in-focus contributions of an infinite number of planes.
  • the sensor includes a single sensor.
  • an imaging method including the steps of providing a multifocal imager defining a depth domain, sensing light from a scene via the multifocal imager, and receiving an output of the sensor and image processing the output, thereby to provide an in-focus image of the portion of the scene falling within the depth domain.
  • the terms "depth domain" and "working domain" are used substantially interchangeably.
  • Fig. 1 is a generalized block diagram illustration of the optical apparatus of the invention
  • Fig. 2A is a conceptual illustration of imaging lens apparatus useful in the apparatus of Fig. 1;
  • Fig. 2B is a conceptual illustration of imaging mirror apparatus useful in the apparatus of Fig. 1;
  • Fig. 3 is a simplified illustration of the images produced at a sensor by the apparatus of Figs. 2A and 2B;
  • Fig. 4 is an optical diagram of a typical design of an imaging lens assembly useful in the apparatus of Fig. 1;
  • Fig. 5 is an optical diagram of a typical design of an imaging mirror useful in the apparatus of Fig. 1;
  • Fig. 6 is a simplified block diagram of image processing apparatus particularly suited for processing gray level images and useful in the apparatus in Fig. 1;
  • Fig. 7 is a simplified block diagram of another embodiment of image processing apparatus particularly suited for processing gray level images and useful in the apparatus of Fig. 1;
  • Fig. 8 is a conceptual illustration of a method for computing a restoration transfer function provided in accordance with one preferred embodiment of the present invention;
  • Fig. 9 is a conceptual illustration of an alternative method to the method of Fig. 8;
  • Fig. 10 is a conceptual illustration of a method for fusing a plurality of restored images into a single restored image;
  • Fig. 11 is a simplified block diagram of an alternative embodiment of image processing apparatus particularly suited for processing color images and useful in the apparatus of Fig. 1;
  • Fig. 12 is a simplified block diagram of video or electronic still camera apparatus or camcorder apparatus in which conventional functions of these devices are combined with the functions of the invention shown and described herein;
  • Fig. 13 is a simplified block diagram of film camera apparatus in which conventional functions of film cameras are combined with the functions of the invention shown and described herein;
  • Fig. 14 is a simplified block diagram of microscopy apparatus in which conventional microscope functions are combined with the functions of the invention shown and described herein;
  • Fig. 15 is a simplified block diagram of a laparoscope or endoscope in which conventional laparoscopy/endoscopy functions are combined with the functions of the invention shown and described herein;
  • Fig. 16 is a simplified block diagram of laparoscopy or endoscopy apparatus which is a modification of the apparatus of Fig. 15;
  • Fig. 17 is a simplified block diagram of multifocal imaging apparatus constructed and operative in accordance with another embodiment of the present invention.
  • Fig. 1 illustrates optical apparatus constructed and operative in accordance with a preferred embodiment of the present invention, including imaging apparatus 10, such as multifocal lens apparatus or multifocal mirror apparatus, arranged to receive light from a scene 11 which includes objects located at various distances from the imaging apparatus.
  • the imaging apparatus is operative to direct light from the scene to a sensor 12 such that focused light from various objects in the scene at different distances from the imaging apparatus simultaneously reaches the sensor.
  • Sensor 12 is typically a charge coupled device (CCD).
  • the apparatus 10 has a transfer function which varies just slightly with the distance between the imaging apparatus and an object in the working domain (depth domain), as measured parallel to the optical axis of the optical apparatus.
  • the working domain is a continuous or discontinuous range along the optical axis.
  • the transfer function preferably has no zeros in the working domain.
  • the absolute value of the transfer function in the working domain has a lower bound V, which is large enough to allow image restoration, which can be provided computationally, even in the presence of a reasonable amount of noise.
  • the transfer function is invertible such that the sharp details of the scene within the depth domain can be restored by computational means.
  • the composite image seen by the sensor 12 may normally appear out of focus to a human eye.
  • electronic image processing apparatus 14 is provided for converting the information received by sensor 12 to an in-focus image of the scene, which is displayed on an output device 16, such as a display, image analyzer, video recorder or other storage means, or printer.
  • record and playback functions can be provided downstream of sensor 12 and upstream of image processor 14.
  • Record and playback functions can be analog with subsequent digitization or, alternatively, digitization can be performed upstream of the record and playback functions, in which case the record and playback functions are digital.
  • Fig. 2A conceptually illustrates multifocal lens means 20 useful as the imaging apparatus of the present invention.
  • Fig. 2A shows that the multifocal lens means 20 images two objects, indicated as A and B, such as point light sources, which are situated at two different distances from the lens assembly 20, onto an image plane 22.
  • each of the images of objects A and B in the image plane 22 is built up from a superposition of in-focus contributions made by some portion of lens means 20 and of out-of-focus contributions made by other portions of the lens means 20.
  • for each object distance, the in-focus contribution comes from a different portion of the lens means 20.
  • for object A, the in-focus contribution comes from region 2 of the lens means and for object B, the in-focus contribution comes from region 1 of the lens means 20.
  • resolution refers to the number of resolvable lines per unit length, e.g. resolvable lines/mm.
  • FIG. 2B conceptually illustrates a multifocal mirror assembly 30 useful as the imaging apparatus of the present invention.
  • Fig. 2B shows that the multifocal mirror assembly 30 focuses two objects A and B, such as point light sources, which are situated at two different distances from the mirror assembly 30, onto an image plane 22.
  • each of the images of objects A and B in the image plane 22 is built up from a superposition of in-focus contributions made by some portion of mirror assembly 30 and of out-of-focus contributions made by other portions of the mirror assembly 30.
  • for each object distance, the in-focus contribution comes from a different portion of the mirror assembly 30.
  • for object A, the in-focus contribution comes from region 2 of the mirror assembly 30 and for object B, the in-focus contribution comes from region 1 of the mirror assembly 30.
  • Fig. 3 provides a qualitative indication of the appearance of the images of the two objects A and B as seen in the image plane 22, for either or both of the embodiments of Figs. 2A and 2B. If it is assumed that in both Figs. 2A and 2B, the relative placement of objects A and B and the optical apparatus is the same, both the lens means 20 of Fig. 2A and the mirror assembly 30 of Fig. 2B can produce a substantially identical image.
  • the imaging means of the present invention may comprise a plurality of discrete regions of different focal lengths, as in the embodiment of Fig. 2A, for example.
  • the imaging means may comprise a continuum of locations each corresponding to a different focal length.
  • the "concentric ring" implementation of the multifocal imaging apparatus shown and described herein may be replaced by a "continuous" implementation in which the focal length of the lens varies continuously over the lens as a function of the distance from the center of the lens.
  • the final image may be based on a combination of high resolution (high frequency) components arriving from each plane in space which falls within the depth domain.
  • the high resolution components of the image are based on information arriving from only a thin slice, orthogonal to the optical axis, which contains in-focus information; high resolution (high frequency) components arriving from all other planes are strongly suppressed or strongly attenuated.
  • Fig. 4 illustrates an optical design of a preferred embodiment of lens means of the Double Gauss type, having a continuously varying focal length.
  • the lens means of Fig. 4 includes a plurality of lens portions, each having a different focal length. In the optical design, the following notation is employed:
  • R the radius of curvature
  • T the distance from an individual surface to the next surface
  • N index of refraction
  • V Abbe number, (Nd-1) / (Nf-Nc);
  • Nd index of refraction for the Sodium d line
  • Nf index of refraction for the Hydrogen f line
  • Nc index of refraction for the Hydrogen c line (0.6563 microns).
  • the design includes a Schmidt plate whose surfaces are referenced V and VI, which is located in the plane of the aperture stop and which generates the multifocality of the lens assembly.
  • surfaces V and VI are generally parallel except for an aspheric correction term, (+0.5 × 10⁻³ × ρ⁴), for surface VI.
  • the surface sag Z of surface VI is accordingly defined as Z = 0.5 × 10⁻³ × ρ⁴, where ρ² = X² + Y² and X, Y are system axes which are orthogonal to one another and to the axis of the optical system.
  • the exit pupil is, for computation purposes, similar to a union of an infinite number of infinitesimal concentric rings around the optical axis, all with the same area.
  • the focal length of the rings varies continuously as a function of ring radius, whereas the magnification of the rings is substantially equal.
  • the light energy which falls on the sensor may be computed as a sum or integral of the contributions from all of the infinitesimal concentric rings, because the image on the sensor is, for computation purposes, a superposition of the images produced by the individual rings. The superpositioned image received by the sensor includes, inter alia, an in-focus image of each plane within the depth domain of the system.
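The computation just described, summing contributions of equal-area rings, can be sketched as an equal-weight superposition of per-ring PSFs; this is also the spirit of routine AVPSF in the Appendix C fragments further below. The Gaussian ring PSFs and the blur widths in this sketch are invented stand-ins, not the patent's data.

      ! Sketch: the sensor-plane PSF as an equal-weight superposition of
      ! the PSFs of NP concentric rings (cf. routine AVPSF, Appendix C).
      ! Gaussian ring PSFs and all constants are invented stand-ins.
      program superposed_psf
        implicit none
        integer, parameter :: np = 5
        real, parameter :: pi = 3.14159265
        real :: sig(np), x, y, g, total
        integer :: i
        ! assumed per-ring blur widths: one ring nearly in focus
        sig = (/ 0.004, 0.002, 0.0005, 0.002, 0.004 /)
        x = 0.0; y = 0.0
        total = 0.0
        do i = 1, np
           g = 1./(2.*pi*sig(i)**2) * exp(-(x**2 + y**2)/(2.*sig(i)**2))
           total = total + g/real(np)   ! equal ring areas => equal weights
        end do
        print *, 'superposed PSF at (0,0):', total
      end program superposed_psf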
  • Fig. 5 illustrates a preferred optical design of a multifocal mirror assembly 30 which is similar to that conceptually illustrated in Fig. 2B.
  • the mirror assembly of Fig. 5 includes a plurality of mirror surface portions.
  • a suitable number of rings is provided which together form a mirror, such that the respective focal lengths of the rings together cover a predetermined depth domain. It is appreciated that any suitable number of rings may be provided, since, theoretically, an infinite number of infinitesimal rings of different focal lengths may be employed so as to provide an entirely continuous dependence of focal length on ring radius.
  • multifocality is generated by providing an aspheric surface which applies a third order spherical aberration which is selected to be strong enough to generate multifocality in the desired depth domain.
  • aspheric aberrations may alternatively be achieved using only spherical surfaces or by using a
  • Fig. 6 is a simplified block diagram of image processing unit 14 of Fig. 1, constructed and operative in accordance with an embodiment of the present invention which is particularly suited to processing gray level images.
  • the apparatus of Fig. 6 includes a Fourier transform unit 100 which is operative to compute a Fourier transform such as an FFT (Fast Fourier Transform) of a digitized blurred image I received from sensor 12 of Fig. 1.
  • the output of unit 100 is termed Î.
  • a digital storage unit 102 is operative to store a restoration transfer function T̂ for Fourier space restoration.
  • a computational unit 104 receives Î from Fourier transform unit 100 and T̂ from memory 102 and generates a restored image in the Fourier plane. For each pair of Fourier plane coordinates (u, v), the restored image is Î(u,v)/T̂(u,v).
  • An inversion unit 106 inverts the restored image generated by unit 104 from the Fourier plane to the real plane, thereby to provide a sharpened output image.
  • inversion unit 106 performs an inverse FFT operation on the image supplied to it by unit 104.
  • the output Î of inversion unit 106 is supplied to unit 16 of Fig. 1.
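The flow of Fig. 6 (units 100-106) can be summarized compactly. The Fortran sketch below shows only the Fourier-plane division of unit 104 on invented data: the forward FFT of unit 100 and the inverse FFT of unit 106 are assumed to run elsewhere, and the zero-free T̂ used here is a made-up Gaussian, not the patent's transfer function.

      ! Minimal sketch of computational unit 104: division of the blurred
      ! spectrum IHAT by the stored restoration transfer function THAT.
      ! All values are invented; units 100 and 106 (FFT and inverse FFT)
      ! are assumed to run elsewhere.
      program fourier_plane_division
        implicit none
        integer, parameter :: n = 8
        complex :: ihat(n), that(n), rhat(n)
        integer :: k
        do k = 1, n
           ! assumed transfer function: positive and zero-free (cf. text)
           that(k) = cmplx(exp(-0.05*real(k*k)), 0.0)
           ! "blurred" spectrum of a flat (all-ones) test spectrum
           ihat(k) = that(k)*(1.0, 0.0)
           ! unit 104: restored spectrum, well defined since THAT has no zeros
           rhat(k) = ihat(k)/that(k)
        end do
        print '(a,8f6.2)', 'restored spectrum: ', real(rhat)
      end program fourier_plane_division

Because T̂ has no zeros, the division is defined at every frequency, which is precisely why the text insists on a zero-free transfer function.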
  • Any suitable method may be employed to compute a restoration transfer function for storage in memory unit 102.
  • the following method may be used.
  • h(x,y,d) the PSF (point spread function) of the lens or mirror of imaging apparatus 10 of Fig. 1, for a planar object which lies perpendicular to the lens axis and parallel to sensor 12, which may be a screen onto which the lens or mirror projects an image of the planar object.
  • h(x,y,d) describes the blurring imposed by the lens on the planar object.
  • x, y cartesian coordinates of axes orthogonal to the optical axis
  • d the distance between a planar object and the lens
  • d1, d2 boundaries of the working domain of the lens; d1 and d2 are the smallest and largest values of d in the working domain.
  • the restoration transfer function T̂(u,v) may be defined, for each (u,v), as T̂(u,v) = H(u,v,d0), where H(u,v,d) is the Fourier transform of h(x,y,d) and d0 is selected such that |H(u,v,d0)| ≥ |H(u,v,d)| for every d in the working domain. The above relation is referenced herein as Equation 1.
  • a particular feature of the present invention is that, in the working domain, the absolute value of the transfer function is bounded away from zero, and the lower bound can be made large enough to allow restoration even in the presence of a substantial level of noise.
  • d a member of a work domain [d1, d2] of the lens, or of a fragmented work domain which includes a plurality of spaced non-overlapping intervals such as [d1, d2], [d3, d4] and [d5, d6].
  • the ratio |H(u,v,d)|/|T̂(u,v)|, which is always between 0 and 1, is preferably relatively large, and may for example have a value of 1/4 or 1/2; see the sketch below.
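Under the Equation 1 reconstruction above, such a T̂ can be tabulated by scanning a sampled working domain and keeping, for each frequency, the H of largest magnitude. The sketch below uses an invented, depth-dependent H and illustrative sampling; it demonstrates the selection rule only and is not the patent's design procedure.

      ! Sketch of the Equation 1 choice: for each frequency index K,
      ! choose D0 maximising |H(K,D)| over the sampled working domain
      ! and set THAT(K) = H(K,D0).  H here is an invented Gaussian form.
      program pick_t_hat
        implicit none
        integer, parameter :: nk = 6, nd = 5
        complex :: h(nk,nd), that(nk)
        real :: d(nd), best
        integer :: k, j, jbest
        do j = 1, nd
           d(j) = 1.0 + 0.5*real(j-1)      ! sampled distances in [d1,d2] (assumed)
        end do
        do k = 1, nk
           do j = 1, nd
              h(k,j) = cmplx(exp(-0.02*real(k*k)*abs(d(j)-2.0)), 0.0)
           end do
           jbest = 1; best = abs(h(k,1))
           do j = 2, nd
              if (abs(h(k,j)) > best) then
                 best = abs(h(k,j)); jbest = j
              end if
           end do
           that(k) = h(k,jbest)             ! Equation 1 choice of T-hat
        end do
        print '(a,6f7.3)', 'min |H|/|THAT| over d: ', &
             (minval(abs(h(k,:)))/abs(that(k)), k = 1, nk)
      end program pick_t_hat

The printed ratios are the quantities discussed above: they lie between 0 and 1 by construction, and a design goal would be to keep their minimum near 1/4 or 1/2.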
  • the blurring process for a planar object at a distance d from a lens or mirror may be described in the Fourier plane as Ôb(u,v,d) = H(u,v,d)·Ô(u,v) + ñ(u,v), where Ô(u,v) is the planar Fourier transform of the planar object, Ôb(u,v,d) is the planar Fourier transform of its blurred image and ñ(u,v) is the Fourier transform of the noise.
  • it is accordingly possible to use a single restoration transfer function T̂(u,v), which is independent of d, to restore an image of an object whose d falls within the working domain.
  • Fig. 7 is a simplified block diagram of another embodiment of image processing unit 14 of Fig. 1, also particularly suited to processing gray level images.
  • the apparatus of Fig. 7 includes a convolution unit 110 which convolves the output I of unit 12 of Fig. 1 with a convolution kernel t.
  • the convolution kernel t may be supplied by a kernel storage or kernel computation unit 112.
  • the kernel t(x,y) preferably is approximately equal to the inverse Fourier transform of 1/T̂(u,v), so that convolution with t performs the restoration in real space.
  • the adaptive method preferably comprises the following steps:
  • a. the working domain, which may or may not be fragmented, is partitioned into a plurality of n-1 subdomains.
  • n may be an integer within a suitable range such as 3 - 10.
  • b. for each subdomain, the restoration transfer function is computed in accordance with Equation 1, set forth above, thereby to define a plurality of n-1 restoration transfer functions T̂1, ..., T̂n-1, as illustrated conceptually in Figs. 8 and 9.
  • for each subdomain, the kernel for real space restoration is computed, as explained hereinabove with reference to Fig. 7.
  • c. the image is restored either in the Fourier plane or in real space, as convenient, using the restoration transfer functions in the first instance and the convolution kernels in the second instance.
  • the result of this step is a plurality of n-1 restored images Î1, ..., În-1.
  • d. the n-1 restored images computed in step c are fused or merged into a single restored image, as illustrated conceptually in Fig. 10.
  • the criteria for the selection process may be conventional statistical or morphological criteria for image quality.
  • alternatively, the selected block may be that which is most similar to a preassembled directory of known image features such as edges, isolated local extreme points, roof edges and smooth functions, where a correlational similarity criterion is employed.
  • iii. Assemble the single restored image Î by fusing the m1 x m2 sharpest square blocks selected in step ii; a simple sharpness criterion is sketched below.
  • the final t(x,y) for restoration of block (j,k) is preferably interpolated from the kernels selected for block (j,k) and for its neighbors, blocks (j,k+1), (j+1,k) and (j+1,k+1).
  • Any suitable interpolation method may be employed, such as bilinear interpolation.
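As a toy illustration of the block-selection step referenced above, the following sketch scores candidate restorations of a single block by their gray-level variance, a simple instance of the "conventional statistical or morphological criteria" mentioned earlier. The candidate blocks are random placeholders and the variance criterion is an assumption, not the patent's prescribed measure.

      ! Sketch of the block-selection step using a variance sharpness
      ! criterion; random arrays stand in for the n-1 restored candidates
      ! of one m x m block.  All names and sizes are invented.
      program select_sharpest
        implicit none
        integer, parameter :: nb = 4      ! candidate restorations of one block
        integer, parameter :: m = 8       ! block is m x m pixels
        real :: blk(m,m,nb), score(nb), mean
        integer :: c, best
        call random_number(blk)           ! stand-in for restored blocks I1..In-1
        do c = 1, nb
           mean = sum(blk(:,:,c)) / real(m*m)
           score(c) = sum((blk(:,:,c) - mean)**2)   ! variance as sharpness proxy
        end do
        best = maxloc(score, dim=1)       ! sharpest candidate wins
        print *, 'selected candidate block:', best
      end program select_sharpest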
  • An interpolation method which is believed to provide high quality results is described in the following publication, the disclosure of which is incorporated herein by reference:
  • Fig. 11 is a simplified block diagram of an alternative embodiment of image processing unit 14 of Fig. 1 which is particularly suited for processing color image data such as RGB data.
  • the apparatus of Fig. 11 includes a color component channeling unit 202 which channels each of a plurality of color components of the input color image I, such as R, G and B components, to a corresponding one of a plurality of image processing subunits 204, 206 and 208, respectively. Each image processing subunit may be similar to the image processing apparatus of Fig. 6 or Fig. 7, described above.
  • the Î outputs of image processing subunits 204, 206 and 208 are referenced ÎR, ÎG and ÎB in Fig. 11. All three outputs are provided to a color combining unit 210 which combines them and provides an output color image Î. It is appreciated that many output devices, such as display monitors, video recorders, video printers and computers with video acquisition equipment, include a color combining function because they are capable of receiving a plurality of channels, such as 3 channels of R, G and B data respectively, and providing an output color image.
  • a particular feature of the invention shown and described herein is that, in contrast to conventional imaging systems, here, the mapping from the three-dimensional object to the two-dimensional image is almost independent of the depth of the object within the field of view. This is because the point spread function h(x,y,z) is, to first order, independent of the depth coordinate z within a predetermined depth domain, i.e. within a predetermined range of z's.
  • imaging apparatus shown and described herein are merely exemplary of the wide variety of ways in which it is possible to implement the multifocal imaging apparatus of Fig. 1.
  • Appendix C is a Fortran language listing of a software implementation of image processing unit 14 of Fig. 1 according to one alternative embodiment of the present invention. Appendix C also includes results of operation of the software implementation on data derived from a software simulation of imaging apparatus 10 and sensor 12. It is appreciated that inclusion of the listing of Appendix C is intended merely to provide an extremely detailed disclosure of the present invention and is not intended to limit the scope of the present invention to the particular implementation of Appendix C.
  • distance of an object or image from an optical system refers to that component of the distance which is parallel to the optical axis of the system.
  • distance of an object point from an optical system refers to the distance between a first plane and a second plane, both of which are perpendicular to the optical axis of the system, wherein the first plane includes the object point and the second plane includes the operative portion of the optical system.
  • depth of an object relates to distance of the object, along the optical axis, from the imaging plane.
  • FIG. 12 - 16 illustrate various applications of the imaging apparatus shown and described above, it being understood that the particular applications of Figs. 12 - 16 are merely exemplary of possible imaging applications and are not intended to be limiting.
  • the apparatus of Fig. 12 includes video or electronic still camera apparatus or camcorder apparatus 250 which is imaging a scene 252 in which various objects are located at different distances from the camera/camcorder 250.
  • the lens 254 of the camera/camcorder 250 is not a conventional lens but rather comprises an imaging device constructed and operative in accordance with the present invention such as any one of the imaging apparatus embodiments shown and described above with reference to Figs. 2A - 5.
  • a sensor 256 receives the raw image formed by the multifocal imager 254.
  • the sensed raw image is recorded and, optionally, played back by recording unit 258 and playback unit 260, respectively, for off-line reception by digitizer 264.
  • the sensed raw image is digitized by a digitizer 264 and the resulting digital representation is provided to an image restoration computation unit 266 which has image processing capabilities as shown and described above with reference to Figs. 6 - 11.
  • the digital output of computation unit 266 may be converted into analog representation by a D/A unit 270, and may then be displayed or printed by an output device 272 such as a TV monitor or printer.
  • the digital output of computation unit 266 may be stored on a suitable digital storage medium.
  • the digital output of computation unit 266 may also be outputted directly by a digital output device 280 which is capable of handling digital input, such as but not limited to a printer, recorder or monitor with digital interface.
  • image restoration computation unit 266 may receive distance information from a rangefinder 282 such as an ultrasound rangefinder or laser rangefinder.
  • Unit 266 may employ this information to improve the image of the features falling within the range found by the rangefinder.
  • the rangefinder allows a user to select an object in a scene which is of relative importance, and determine the distance to that object using the rangefinder.
  • the image restoration computation unit 266 is preferably operative to receive the distance information and to select a transfer function on the basis of that distance such that optimal restoration results are achieved for the depth at which the selected object lies.
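The distance-based selection just described amounts to a lookup: find the subdomain containing the rangefinder reading and use the corresponding restoration transfer function. In the sketch below the subdomain boundaries and the measured distance are invented values, not figures from the patent.

      ! Sketch: choosing a restoration transfer function from the distance
      ! reported by rangefinder 282.  Subdomain boundaries are invented.
      program choose_by_range
        implicit none
        real, parameter :: bounds(4) = (/ 0.5, 1.0, 2.0, 4.0 /)  ! metres, assumed
        real :: dist
        integer :: j, pick
        dist = 1.6                    ! distance returned by the rangefinder
        pick = 0
        do j = 1, size(bounds) - 1
           if (dist >= bounds(j) .and. dist < bounds(j+1)) pick = j
        end do
        print *, 'use restoration transfer function T_', pick
      end program choose_by_range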
  • an improved video or electronic still camera or camcorder may be constructed to include all components of Fig. 12.
  • the components of the apparatus of Fig. 12 which are other than conventional may be retrofitted to an existing conventional electronic still camera, video camera or camcorder.
  • the apparatus of Fig. 12 is useful in high-definition television (HDTV) applications where, due to very high resolution, the depth of field of conventional apparatus is very small and even small deviations from focus result in perceptible defocussing.
  • Fig. 13 illustrates filming apparatus for filming a scene 310 including a film camera 312 fitted with a multifocal lens 314 which is constructed and operative in accordance with the present invention and which may comprise any of the imaging apparatus embodiments shown and described hereinabove with reference to Figs. 2A - 5.
  • the film 320 generated by film camera 312 is developed conventionally and the developed film is scanned by a scanner 324 which provides a digital output.
  • the digital output of scanner 324 is provided to an image
  • restoration computation unit 326 which has image processing functions in accordance with the present invention such as those shown and described above with reference to Figs. 6 - 11.
  • the sharpened image generated by the restoration unit 326 is converted into a hard copy 330 by a suitable output device 332 such as a plotter.
  • an improved film camera system may be constructed to include all components of Fig. 13.
  • the components of the apparatus of Fig. 13 which are other than conventional, such as the multifocal lens and the image restorer, may be provided stand-alone, for use in conjunction with an existing conventional film camera system.
  • Fig. 14 is a simplified block diagram of microscopy apparatus in which conventional microscope functions are combined with the functions of the invention shown and described herein.
  • the apparatus of Fig. 14 includes a microscope 350 fitted with a multifocal objective lens 352 which is constructed and operative in accordance with the present invention and which may comprise any of the imaging devices shown and described above with reference to Figs. 2A - 5.
  • a camera 354 such as a video, electronic still or film camera captures the magnified image on its sensor 355, which may be a CCD, tube or film.
  • the digital output of the camera is provided to an image restoration unit 356. If the output of the camera is other than digital, the output is first digitized by an A/D unit 358.
  • Image restoration unit 356 is a computational unit with image processing
  • the output of unit 356 is provided to an analog output device 360 via a D/A unit 362 or to a digital output device 364.
  • an improved microscopy system may be constructed to include all components of Fig. 14.
  • the components of the apparatus of Fig. 14 which are other than conventional, such as the multifocal lens and the image restorer, may be provided stand-alone, for use in conjunction with an existing conventional microscope.
  • the apparatus of Fig. 14 is useful in inspecting a wide variety of objects and scenes such as but not limited to a drop of liquid, a three-dimensional transparent or semitransparent scene, a 2.5-dimensional surface such as the surface of a microchip and an object or material whose surface is not flat.
  • Fig. 15 illustrates a laparoscope or endoscope for imaging and photography of interior portions of a body such as that of a human patient.
  • the apparatus of Fig. 15 includes an objective lens 400 which is inserted into the body using conventional medical techniques at a location depending on the location which it is desired to image.
  • the objective lens 400 comprises a multifocal imager such as any of those shown and described above.
  • a first bundle 402 of optical fibers is operatively associated with a relay lens 408 and a sensor 410, such that the image formed at end 404 of the bundle 402 is projected onto sensor 410 via relay lens 408.
  • the image captured by sensor 410 is provided to an image restoration computation unit 414 which may be similar to any of the image processing units shown and described above.
  • the image generated by the image processor has a much higher resolution over a large depth domain. This image is provided to a suitable output device 416.
  • Illumination of the body interior portion to be imaged may be provided by a second optical fiber bundle 420 which is operatively associated with a source of light located externally of the body of the patient.
  • Fig. 16 illustrates laparoscopy or endoscopy apparatus which is a modification of the apparatus of Fig. 15 in that the entire camera apparatus is inserted into the body.
  • the apparatus of Fig. 16 is generally similar to the apparatus of Fig. 15 except that relay lens 408 is eliminated and, instead, the sensor 410 is attached to the first optical fiber bundle 402 at end 404 thereof. All elements of the apparatus other than computation unit 414 and output device 416 may be inserted into the body.
  • the elements of the apparatus which operate interiorly of the human body may communicate with exteriorly operating elements 414 and 416 via a suitable electric cable 424.
  • alternatively, image restoration computation unit 414 may also be inserted into the body, such that all elements of the apparatus other than output device 416 operate interiorly of the body; the interiorly and exteriorly operating elements may, in this case, communicate via a suitable electric cable.
  • Fig. 17 is a simplified block diagram of multifocal imaging apparatus constructed and operative in accordance with another embodiment of the present invention.
  • the apparatus of Fig. 17 includes lens 450 which is operatively associated with a plurality of semitransparent mirrors or beam splitters, such as three splitting elements 452, 454 and 456. The focal planes of splitting elements 452 and 454 for a point source 458 are indicated by dotted lines 460 and 462 respectively.
  • the focal planes of splitting element 456 for point source 458 are indicated by dotted line 464 and dotted line 466 respectively.
  • four sensors are provided for respectively imaging the four images generated in the illustrated embodiment.
  • the locations of the four sensors are indicated by solid lines 470, 472, 474 and 476. All sensors are, as illustrated, parallel respectively to the corresponding focal planes.
  • the four sensors are located at different respective distances from their corresponding focal planes, thereby to provide a plurality of differently focussed images. These images are then rescaled to a common scale.
  • the rescaled differently focussed images may be added, i.e. combined "one on top of the other" by addition, averaging or other suitable methods, into a single image.
  • This image is processed by an image restoration computation unit 484, similar to those described above, so as to generate a final image including contributions from all four differently focussed images.
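The "one on top of the other" combination preceding unit 484 can be expressed as a per-pixel average of the four rescaled images. In this toy sketch random arrays stand in for the four sensor outputs, and equal weighting is an assumption; the text also permits plain addition or other suitable methods.

      ! Sketch of the combination step: four equally weighted, already
      ! rescaled images averaged into one frame.  Random arrays stand in
      ! for the sensor outputs; sizes and weights are invented.
      program combine_four
        implicit none
        integer, parameter :: m = 4
        real :: img(m,m,4), fused(m,m)
        call random_number(img)          ! stand-ins for the four sensor images
        fused = sum(img, dim=3) / 4.0    ! addition/averaging per the text
        print *, 'fused(1,1) =', fused(1,1)
      end program combine_four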
  • the present invention is not limited in its applicability to planar sensors such as planar CCD's or such as films lying in a plane. If a nonplanar sensor is employed, the present invention may be used, for example, by treating the digitized sensor output as though it had arrived from a planar sensor.
  • transfer function has been used generally to refer to the appropriate one of the following terms: contrast transfer function, optical transfer function (OTF), modulation transfer function (MTF) or phase transfer function.
  • the present invention is useful in applications in which the edge information is the most crucial portion of the total image information, such as in industrial robot vision applications.
  • a conventional edge detector function is provided upstream of or instead of the image processing unit 14, which is operative to enhance edges located in any of the planes falling within the depth domain of the apparatus.
  • in the embodiment of Fig. 12, the digitized output may be provided directly to processor 266 instead of or in addition to playback and record functions 258 and 260.
  • optionally, record and playback functions are provided which are operative upstream of the image processing function. This allows image processing, such as image restoration, to be performed off-line.
  • the above optional feature has the advantage of allowing image restoration to be performed frame by frame, off-line, recording the output frame by frame for later viewing.
  • the present apparatus can be employed to generate an in- focus video movie.
  • record and playback functions are optionally provided intermediate camera 354 and A/D unit 358 and/or intermediate camera 354 and image restoration computational unit 356.
  • Suitable record and playback functions may also be provided in the embodiment of Fig. 15.
  • FOB1 = 1./F - 1./DOB1
  • FOB2 = 1./F - 1./DOB2
  • CCC (AS A SIMPLIFIED SIMULATION) BY A
    CCC SUPERPOSITION OF A NUMBER OF PSF'S
    CCC SEE DESCRIPTION IN THE ROUTINE AVPSF.
  • FOB3 = 1./F - 1./DOB3
  • A(I) = A(I)/PSFAC(I)
  • SUBROUTINE AVPSF (A, NX, NY, EZ, DLENS, F, FOB, DOB, FOB1, FOB2, NP)
  • FOB = 1./(1./F - 1./DOB)   ! FOCAL DISTANCE
  • DOB2 = 1./F - 1./FOB2
  • RAD = ABS(FF-FOB)*DLENS/2./FOB   ! I.E. ABS(FF-FOB)*DLENS/(2*FOB)
  • FOB = 1./(1./F - 1./DOB)   ! FOCAL DISTANCE
  • DOB2 = 1./F - 1./FOB2
  • CCC RAD = ABS(FF-FOB)*DLENS/2./FOB
  • CCC F = 1/(2*PI*SIG**2) * EXP(-(X**2+Y**2)/(2*SIG**2))
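For readability, these fragments can be assembled into one small runnable program: the focal (image) distance FOB from the thin lens relation, the defocus radius RAD, and a sample of the Gaussian PSF. Note that the closing (2*SIG**2) of the Gaussian comment above was truncated in the source and has been completed here as the standard normalized Gaussian; variable names follow the listing, and all numeric inputs are invented.

      ! Runnable assembly of the Appendix C fragments above.  Values of
      ! F, DOB, DLENS and FF are invented; SIG is a Gaussian stand-in
      ! for the defocus PSF width.
      program psf_sketch
        implicit none
        real, parameter :: pi = 3.14159265
        real :: f, dob, fob, ff, dlens, rad, sig, x, y, g
        f     = 0.05                  ! focal length of the lens [m] (assumed)
        dob   = 2.0                   ! object distance [m] (assumed)
        dlens = 0.02                  ! lens aperture diameter [m] (assumed)
        ff    = 0.052                 ! actual lens-to-sensor distance [m] (assumed)
        fob   = 1./(1./f - 1./dob)    ! focal distance, as in the listing
        rad   = abs(ff - fob)*dlens/2./fob   ! defocus blur radius on the sensor
        sig   = max(rad, 1.0e-6)      ! Gaussian width standing in for the PSF
        x = 0.0; y = 0.0
        g = 1./(2.*pi*sig**2) * exp(-(x**2 + y**2)/(2.*sig**2))
        print *, 'fob =', fob, ' rad =', rad, ' psf(0,0) =', g
      end program psf_sketch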

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

Optical apparatus including multifocal imaging apparatus defining a depth domain, sensor apparatus receiving light from a scene via the multifocal imaging apparatus (10) and image processing apparatus (14) for receiving an output of the sensor apparatus (12) and providing an in-focus image of the portion of the scene (11) falling within the depth domain.

Description

MULTIFOCAL OPTICAL APPARATUS
The present invention relates to focusing optical systems.
A significant limitation in the operation of many optical devices, such as but not limited to cameras, video recorders, microscopes and binoculars is their limited depth of field, i.e. the depth over which an object in a field of view remains in acceptable focus.
Various techniques are employed to overcome operational limitations imposed by a limited depth of field. These include the use of auto-focus devices which quickly and
automatically focus the device on a surface of an object of interest, which is located at a given distance from the device.
It is also known to increase the depth of field by closing the optical aperture of a device. This technique has the disadvantage that the amount of light received from the object is correspondingly decreased, requiring a correspondingly longer exposure time.
Post processing of an image to enhance its focus has been proposed and is employed, with very limited success.
The present invention seeks to provide optical apparatus capable of in-focus imaging of a significantly greater depth than that imaged by conventional optical apparatus.
There is thus provided in accordance with a preferred embodiment of the present invention optical apparatus including multifocal imaging apparatus defining a depth domain, sensor apparatus receiving light from a scene via the multifocal imaging apparatus and image processing apparatus for receiving an output of the sensor apparatus and providing an in-focus image of the portion of the scene falling within the depth domain.
According to one alternative embodiment of the present invention the depth domain includes all planes falling between and parallel to first and second planes which are both orthogonal to the optical axis.
According to another alternative embodiment of the present invention, the depth domain includes a first subdomain including all planes falling between and parallel to first and second planes which are both orthogonal to the optical axis and a second subdomain, disjoint to the first subdomain, which includes all planes falling between and parallel to third and fourth planes which are both orthogonal to the optical axis and which are not included in the first subdomain.
According to another embodiment of the present invention, the domain includes more than two disjoint subdomains such as those described above.
There is also provided in accordance with a preferred embodiment of the invention optical apparatus including imaging apparatus defining a plurality of surfaces each arranged to receive light from a scene, each of the plurality of surfaces having a different focal length, sensor apparatus receiving light from the scene from each of the plurality of
surfaces, such that a different part of the light from the scene received from each of the plurality of surfaces is in focus and image processing apparatus for receiving an output of the sensor apparatus and providing an image including in-focus parts received from each of the plurality of surfaces.
It is appreciated that the image processing apparatus is operative to provide a restoration procedure which produces a sharp image from a blurred image provided by the optical apparatus.
In accordance with a preferred embodiment of the invention, the imaging apparatus includes lens apparatus.
In accordance with another preferred embodiment of the invention, the imaging apparatus includes mirror apparatus.
There is also provided in accordance with a preferred embodiment of the present invention an image processing method including the steps of providing a digital representation of a viewed scene including at least two scene locations whose distances from the point of view are nonequal, dividing the digital
representation of the viewed scene into a multiplicity of digital representations of a corresponding plurality of portions of the viewed scene and separately sharpening each of the multiplicity of digital representations and assembling the multiplicity of sharpened digital representations into a single sharpened digital representation.
Further in accordance with a preferred embodiment of the present invention, the step of dividing and sharpening includes the steps of operating a plurality of restoration filters on each of the multiplicity of digital
representations, and selecting, for each
individual one of the multiplicity of digital representations, an individual one of the plurality of restoration filters to be employed in providing the sharpened digital
representation of the individual one of the multiplicity of digital representations.
There is also provided, in accordance with a preferred embodiment of the present invention, optical apparatus including a
multifocal imager defining a depth domain, a sensor receiving light from a scene via the multifocal imager, and an image processor for receiving an output of the sensor and for providing an in-focus image of the portion of the scene falling within the depth domain.
Further in accordance with a preferred embodiment of the present invention, the image processor operates in real time. Alternatively, the image processor operates off line with respect to the sensor.
Further in accordance with a preferred embodiment of the present invention, the sensor includes photographic film.
Still further in accordance with a preferred embodiment of the present invention, the image processor includes an image digitizer operative to provide a digital representation of the scene to the image processor.
Additionally in accordance with a preferred embodiment of the present invention, the multifocal imager defines a plurality of surfaces each arranged to receive light from a scene, each of the plurality of surfaces having a different focal length.
Further in accordance with a preferred embodiment of the present invention, the sensor receives light from the scene via each of the plurality of surfaces, such that a different part of the light from the scene received via each of the plurality of surfaces is in focus.
Additionally in accordance with a preferred embodiment of the present invention, the image processor is operative to provide a composite image built up using in-focus parts received from each of the plurality of surfaces.
Further in accordance with a preferred embodiment of the present invention, the imager includes a lens and/or a mirror system.
Still further in accordance with a preferred embodiment of the present invention, the imager has an invertible transfer function and the image processor includes a computational unit for inverting the transfer function, thereby to restore sharp details of the scene.
Further in accordance with a preferred embodiment of the present invention, the
transfer function includes no zeros for
predetermined domains of distance from the imager.
Additionally in accordance with a preferred embodiment of the present invention, the transfer function includes no zeros.
Further in accordance with a preferred embodiment of the present invention, the
absolute value of the transfer function has a predetermined lower bound which is sufficiently large to permit image restoration.
Still further in accordance with a preferred embodiment of the present invention, the absolute value of the transfer function has a predetermined lower bound which is
sufficiently large to permit image restoration for a predetermined domain of distances from the imager.
There is also provided, in accordance with a preferred embodiment of the present invention, an image processing method including the steps of providing a digital representation of a viewed scene including at least two scene locations whose distances from the point of view are nonequal, dividing the digital
representation of the viewed scene into a multiplicity of digital representations of a corresponding plurality of portions of the viewed scene and separately sharpening each of the multiplicity of digital representations, and assembling the multiplicity of sharpened digital representations into a single sharpened digital representation.
Further in accordance with a preferred embodiment of the present invention, the step of dividing and sharpening includes the steps of operating a plurality of restoration filters on each of the multiplicity of digital
representations, and selecting, for each
individual one of the multiplicity of digital representations, an individual one of the plurality of restoration filters to be employed in providing the sharpened digital
representation of the individual one of the multiplicity of digital representations.
There is also provided, in accordance with a preferred embodiment of the present invention, video camera apparatus including an imager defining a depth domain and a sensor receiving light from a scene via the imager for generating video images, wherein the imager includes a multifocal imager, and wherein the video camera apparatus also includes an image processor for receiving an output of the sensor and for providing an in-focus image of the portion of the scene falling within the depth domain.
There is additionally provided, in accordance with a preferred embodiment of the present invention, electronic still camera apparatus including an imager defining a depth domain and a sensor receiving light from a scene via the imager for generating electronic still images, wherein the imager includes a multifocal imager, and wherein the electronic still camera apparatus also includes an image processor for receiving an output of the sensor and for providing an in-focus image of the portion of the scene falling within the depth domain.
There is further provided, in accordance with a preferred embodiment of the present invention, camcorder apparatus including an imager defining a depth domain and a sensor receiving light from a scene via the imager for generating video images, wherein the imager includes a multifocal imager, and wherein the camcorder apparatus also includes an image processor for receiving an output of the sensor and for providing an in-focus image of the portion of the scene falling within the depth domain.
There is also provided, in accordance with a preferred embodiment of the present invention, film camera development apparatus including a multifocal-combining image processor operative to receive a multifocal representation of a scene and to provide an in-focus
representation of a portion of the scene falling within a predetermined depth domain.
There is further provided, in
accordance with a preferred embodiment of the present invention, microscope apparatus
including an objective lens including a
multifocal imager defining a depth domain, a sensor receiving light from a scene, having more than two dimensions, via the multifocal imager, and an image processor for receiving an output of the sensor and for providing an in-focus image of a plurality of planes within the scene which fall within the depth domain.
There is further provided, in accordance with a preferred embodiment of the present invention, apparatus for internal imaging of the human body including a multifocal imager defining a depth domain, which is configured for insertion into the human body, a sensor
receiving light from an internal portion of the human body via the imager, and an image
processor for receiving an output of the sensor and for providing an in-focus image of the internal portion of the human body falling within the depth domain.
Further in accordance with a preferred embodiment of the present invention, the
multifocal imager is operative to provide an image, including a superposition of in-focus contributions of a plurality of planes, to a single sensor.
Still further in accordance with a preferred embodiment of the present invention, the multifocal imager is operative to provide an image, including a superposition of in-focus contributions of an infinite number of planes.
Additionally in accordance with a preferred embodiment of the present invention, the sensor includes a single sensor.
There is also provided, in accordance with a preferred embodiment of the present invention, an imaging method including the steps of providing a multifocal imager defining a depth domain, sensing light from a scene via the multifocal imager, and receiving an output of the sensor and image processing the output, thereby to provide an in-focus image of the portion of the scene falling within the depth domain.
In the present specification and claims, the terms "depth domain" and "working domain" are used substantially interchangeably.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will be understood and appreciated more fully from the following detailed description, taken in
conjunction with the drawings in which:
Fig. 1 is a generalized block diagram illustration of the optical apparatus of the invention;
Fig. 2A is a conceptual illustration of imaging lens apparatus useful in the
apparatus of Fig. 1;
Fig. 2B is a conceptual illustration of imaging mirror apparatus useful in the apparatus of Fig. 1;
Fig. 3 is a simplified illustration of the images produced at a sensor by the apparatus of Figs. 2A and 2B;
Fig. 4 is an optical diagram of a typical design of an imaging lens assembly useful in the apparatus of Fig. 1;
Fig. 5 is an optical diagram of a typical design of an imaging mirror useful in the apparatus of Fig. 1;
Fig. 6 is a simplified block diagram of image processing apparatus particularly suited for processing gray level images and useful in the apparatus in Fig. 1;
Fig. 7 is a simplified block diagram of another embodiment of image processing apparatus particularly suited for processing gray level images and useful in the apparatus of Fig. 1;
Fig. 8 is a conceptual illustration of a method for computing a restoration transfer function provided in accordance with one
preferred embodiment of the present invention;
Fig. 9 is a conceptual illustration of an alternative method to the method of Fig. 8;
Fig. 10 is a conceptual illustration of a method for fusing a plurality of restored images into a single restored image;
Fig. 11 is a simplified block diagram of an alternative embodiment of image processing apparatus particularly suited for processing color images and useful in the apparatus of Fig. 1;
Fig. 12 is a simplified block diagram of video or electronic still camera apparatus or camcorder apparatus in which conventional functions of these devices are combined with the functions of the invention shown and described herein;
Fig. 13 is a simplified block diagram of film camera apparatus in which conventional functions of film cameras are combined with the functions of the invention shown and described herein;
Fig. 14 is a simplified block diagram of microscopy apparatus in which conventional microscope functions are combined with the functions of the invention shown and described herein;
Fig. 15 is a simplified block diagram of a laparoscope or endoscope in which
conventional laparoscopy/endoscopy functions are combined with the functions of the invention shown and described herein;
Fig. 16 is a simplified block diagram of a laparoscopy or endoscopy apparatus which is a modification of the apparatus of Fig. 15; and
Fig. 17 is a simplified block diagram of multifocal imaging apparatus constructed and operative in accordance with another embodiment of the present invention.
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT
Reference is now made to Fig. 1, which illustrates optical apparatus constructed and operative in accordance with a preferred
embodiment of the present invention and
including imaging apparatus 10, such as
multifocal lens apparatus or multifocal mirror apparatus, arranged to receive light from a scene 11 which includes objects located at various distances from the imaging apparatus.
In accordance with a preferred
embodiment of the present invention, the imaging apparatus is operative to direct light from the scene to a sensor 12 such that focused light from various objects in the scene at different distances from the imaging apparatus
simultaneously reaches the sensor. Sensor 12 is typically a charge coupled device (CCD).
In accordance with a preferred embodiment of the invention, the imaging
apparatus 10 has a transfer function which varies just slightly with the distance between the imaging apparatus and an object in the working domain (depth domain), as measured parallel to the optical axis of the optical apparatus. The working domain is a continuous or discontinuous range along the optical axis.
The transfer function preferably has no zeros in the working domain. Preferably the absolute value of the transfer function in the working domain has a lower bound V, which is large enough to allow image restoration, which can be provided computationally, even in the presence of a reasonable amount of noise.
Accordingly, the transfer function is invertible such that the sharp details of the scene within the depth domain can be restored by computational means.
It is a particular feature of the present invention that the composite image seen by the sensor 12 may normally appear out of focus to a human eye.
In accordance with a preferred
embodiment of the present invention, electronic image processing apparatus 14 is provided for converting the information received by sensor 12 to an in-focus image of the scene, which is displayed on an output device 16, such as a display, image analyzer, video recorder or other storage means, or printer.
In Fig. 1, optionally, record and playback functions can be provided downstream of sensor 12 and upstream of image processor 14. Record and playback functions can be analog with subsequent digitization or, alternatively, digitization can be performed upstream of the record and playback functions, in which case the record and playback functions are digital.

Reference is now made to Fig. 2A which conceptually illustrates multifocal lens means 20 useful as the imaging apparatus of the present invention. Fig. 2A shows that the multifocal lens means 20 images two objects, indicated as A and B, such as point light sources, which are situated at two different distances from the lens means 20, onto an image plane 22.
Each of the images of objects A and B in the image plane 22 is built up from a
superposition of in-focus contributions made by some portion of lens means 20 and of out-of- focus contributions made by other portions of the lens means 20.
In the illustrated example, it is seen that for each object distance, the in-focus contribution comes from a different portion of the lens means 20. Thus for object A, the in- focus contribution comes from region 2 of the lens means and for object B, the in-focus contribution comes from region 1 of the lens means 20.
It may be appreciated, with reference to Fig. 2A, that there is a tradeoff between the proportion of light which contains in-focus information and the depth of field. For example, if all three rings in Fig. 2A have the same area, the light intensity for each in-focus contribution is one third of what the light intensity would be with a conventional lens having a single focal length. On the other hand, the depth domain has been increased threefold, assuming that the depths of field of the three rings do not overlap. Low resolution (out of focus) contributions vary much more slowly as a function of the variation in object depth. Therefore, their depth domains may overlap rather than being disjoint and, consequently, they are attenuated by less than the attenuation factor of the high resolution in-focus contributions, i.e., in the illustrated example, they are attenuated by a factor of less than 3.
In the present specification, the term "resolution" refers to the number of resolvable lines per unit length, e.g. resolvable lines/mm.
Reference is now made to Fig. 2B which conceptually illustrates a multifocal mirror assembly 30 useful as the imaging apparatus of the present invention. Fig. 2B shows that the multifocal mirror assembly 30 focuses two objects A and B, such as point light sources, which are situated at two different distances from the mirror assembly 30, onto an image plane 22.
As in the embodiment of Fig. 2A, each of the images of objects A and B in the image plane 22 is built up from a superposition of in- focus contributions made by some portion of mirror assembly 30 and of out-of-focus
contributions made by other portions of the mirror assembly 30.
In the illustrated example, it is seen that for each object distance, the in-focus contribution comes from a different portion of the mirror assembly 30. Thus for object A, the in-focus contribution comes from region 2 of the mirror assembly 30 and for object B, the in- focus contribution comes from region 1 of the mirror assembly 30.
Fig. 3 provides a qualitative indication of the appearance of the images of the two objects A and B as seen in the image plane 22, for either or both of the embodiments of Figs. 2A and 2B. If it is assumed that in both Figs. 2A and 2B, the relative placement of objects A and B and the optical apparatus is the same, both the lens means 20 of Fig. 2A and the mirror assembly 30 of Fig. 2B can produce a substantially identical image.
It is appreciated that the imaging means of the present invention may comprise a plurality of discrete regions of different focal lengths, as in the embodiment of Fig. 2A, for example. Alternatively, as shown and described in more detail below with reference to Fig. 4, the imaging means may comprise a continuum of locations each corresponding to a different focal length. For example, the "concentric ring" implementation of the multifocal imaging apparatus shown and described herein may be replaced by a "continuous" implementation in which the focal length of the lens varies continuously over the lens as a function of the distance from the center of the lens.
According to this alternative, the final image may be based on a combination of high resolution (high frequency) components arriving from each plane in space which intersects the scene and is perpendicular to the optical axis. In contrast, in conventional systems, the high resolution components of the image are based on information arriving from only a thin slice, orthogonal to the optical axis, which contains in-focus information. High resolution (high frequency) components arriving from all other planes are strongly suppressed or strongly attenuated.
Reference is now made to Fig. 4, which illustrates an optical design of a preferred embodiment of lens means of the Double Gauss type, having a continuously varying focal length. The lens means of Fig. 4 includes a plurality of lens portions, each having
corresponding lens surfaces, labeled I - XIV. For each lens surface, preferred values for R, T, N and V are stated in Appendix A, where:
R = the radius of curvature;
T = the distance from an individual surface to the next surface;
N = index of refraction;
V = Abbe number, (Nd-1) / (Nf-Nc);
Nd = index of refraction for the Sodium d line; Nf = index of refraction for the Hydrogen f line; and
Nc = index of refraction for the Hydrogen c line (0.6563 microns).
The continuous variation in focus is provided by a Schmidt plate, whose surfaces are referenced V and VI, which is located in the plane of the aperture stop and which is configured and arranged to generate multifocality by providing spherical aberration. Specifically, surfaces V and VI are generally parallel except for an aspheric correction term, +0.5 x 10^-3 x rho^4, for surface VI. The surface sag, Z, is defined as:

Z = 0.5 x 10^-3 x rho^4

where rho^2 = X^2 + Y^2 and
X, Y = system axes which are orthogonal to one another and to the axis of the optical system.
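By way of illustration only, the sag equation may be evaluated numerically as in the following minimal Python/numpy sketch; the sample grid and the 10 mm semi-aperture are assumptions of the sketch and are not part of the optical prescription:

    import numpy as np

    # Sag of aspheric surface VI: Z = 0.5 x 10^-3 x rho^4,
    # with rho^2 = X^2 + Y^2 (lengths in mm, per the prescription).
    def surface_sag(x, y):
        rho2 = x ** 2 + y ** 2
        return 0.5e-3 * rho2 ** 2

    # Example: sag sampled across an assumed 10 mm semi-aperture.
    x = np.linspace(-10.0, 10.0, 5)
    print(surface_sag(x, 0.0))  # sag is largest at the aperture edge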
Background information on Schmidt surfaces (pp. 393 - 394 and 400), spherical aberrations (pp. 49 - 52) and Double Gauss systems (pp. 371 - 373) appears in Modern optical engineering: the design of optical systems by W. J. Smith, McGraw-Hill, New York, 1966, the disclosure of which is hereby
incorporated by reference.
It is appreciated that in the
continuously varying focal length embodiment of Fig. 4, a high degree of sharpness is achieved throughout the depth domain. In contrast, in conventional imaging devices, the degree of sharpness, even within the depth of field of the device, is not uniform and decreases as a function of the distance from a single focal plane.
In the embodiment of Fig. 4, the exit pupil is, for computation purposes, similar to a union of an infinite number of infinitesimal concentric rings about the optical axis, all with the same area. The focal length of the rings varies continuously as a function of ring radius, whereas the magnification of the
respective image contribution remains constant.
The light energy which falls on the sensor may be computed as a sum or integral of the contributions from all of the infinitesimal concentric rings, because the image on the sensor is, for computation purposes, a
superposition of the images contributed by the rings. A particular feature of the
superposition is that, preferably, for each plane of the object space perpendicular to the optical axis which falls within the depth domain, there exists a "contribution" from a particular ring which brings that plane into sharp focus. In other words, the
superpositioned image received by the sensor includes, inter alia, an in-focus image of each plane within the depth domain of the system.
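This superposition of ring contributions can be illustrated by a minimal numerical sketch along the lines of the simulation of Appendix C; the uniform circle-of-confusion model, the grid size and the blur radii below are simplifying assumptions:

    import numpy as np

    def disc_psf(n, radius_px):
        # Uniform circle-of-confusion disc, normalized to unit energy.
        y, x = np.mgrid[:n, :n] - n // 2
        mask = (x ** 2 + y ** 2) <= max(radius_px, 0.5) ** 2
        return mask / mask.sum()

    def multifocal_psf(n, blur_radii_px):
        # Superpose the discs contributed by rings focused at different
        # depths; one ring (radius ~0) supplies the in-focus component.
        return sum(disc_psf(n, r) for r in blur_radii_px) / len(blur_radii_px)

    psf = multifocal_psf(64, [0.0, 3.0, 6.0])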
Reference is now made to Fig. 5 which illustrates a preferred optical design of a multifocal mirror assembly 30 which is similar to that conceptually illustrated in Fig. 2B. As seen in Fig. 5, the mirror assembly includes a plurality of mirror surface portions.
Whereas in Fig. 2B, three surfaces each with different focal lengths are provided, here, a suitable number of rings is provided which together form a mirror, such that the respective focal lengths of the rings together cover a predetermined depth domain. It is appreciated that any suitable number of rings may be provided, since, theoretically, an infinite number of infinitesimal rings of different focal lengths may be employed so as to provide an entirely continuous dependence of focal length on ring radius.
In the embodiment of Fig. 5, as in Fig. 4, multifocality is generated by providing an aspheric surface which applies a third order spherical aberration which is selected to be strong enough to generate multifocality in the desired depth domain.
A prescription for the optical design of Fig. 5 is provided in Appendix B.
It is appreciated that the desired spherical aberration may alternatively be achieved using only spherical surfaces or by using a combination of spherical and aspherical surfaces.
Reference is now made to Fig. 6 which is a simplified block diagram of image
processing unit 14 of Fig. 1 constructed and operative in accordance with an embodiment of the present invention which is particularly suited to processing gray level images.
The apparatus of Fig. 6 includes a Fourier transform unit 100 which is operative to compute a Fourier transform such as an FFT (Fast Fourier Transform) of a digitized blurred image I received from sensor 12 of Fig. 1. The output of unit 100 is termed I~.
Fourier transforms, including FFT's, are well known in the art and are described in Rosenfeld, A. and Kak, A. C., Digital Picture Processing, Academic Press, New York, 1982, Vol. I, pp. 13-27, the disclosure of which is incorporated herein by reference.
A digital storage unit 102 is operative to store a restoration transfer function T~ for Fourier space restoration. A computational unit 104 receives I~ from Fourier transform unit 100 and T~ from memory 102 and generates a restored image in the Fourier plane. For each pair of Fourier plane coordinates (u, v), the restored image is I~ (u,v) /T~ (u,v).
An inversion unit 106 inverts the restored image generated by unit 104 from the Fourier plane to the real plane, thereby to provide a sharpened output image. Preferably, inversion unit 106 performs an inverse FFT operation on the image supplied to it by unit 104. The output I^ of inversion unit 106 is supplied to unit 16 of Fig. 1.
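A minimal computational sketch of the Fig. 6 pipeline, assuming Python with numpy's FFT conventions and an array T holding samples of T~(u,v) on the same frequency grid as the image (the function name is illustrative):

    import numpy as np

    def restore_fourier(blurred, T):
        I = np.fft.fft2(blurred)                 # unit 100: forward FFT, yielding I~
        restored = I / T                         # unit 104: I~(u,v)/T~(u,v)
        return np.real(np.fft.ifft2(restored))   # unit 106: inverse FFT to real space

Because |T~| has a positive lower bound in the working domain, as discussed below, the division is well conditioned.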
Any suitable method may be employed to compute a restoration transfer function for storage in memory unit 102. For example, the following method may be used. In the following discussion:
h(x,y,d) = the PSF (point spread function) of the lens or mirror of imaging apparatus 10 of Fig. 1, for a planar object which lies perpendicular to the lens axis and parallel to sensor 12, which may be a screen onto which the lens or mirror projects an image of the planar object. h(x,y,d) describes the blurring imposed by the lens on the planar object.
x, y = cartesian coordinates of axes orthogonal to the optical axis;
d = the distance between a planar object and the lens;
(u,v) = Fourier plane coordinates; H(u,v,d) = the two-dimensional Fourier transform of h(x,y,d). H is real, i.e. non-complex, due to its axial symmetry.
d1, d2 = boundaries of the working domain of the lens. In other words, d1 and d2 are the smallest and largest values,
respectively, of d for which there is an in- focus contribution from any part of the lens.
Using the above notation, the restoration transfer function T~(u,v) may be defined, for each (u,v), as H(u,v,d0) such that |H(u,v,d0)| >= |H(u,v,d)| for all d1 <= d <= d2.
The above equation is referenced herein as Equation 1.
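Assuming H(u,v,d) has been sampled at a set of depths spanning [d1, d2], as is done in the simulation of Appendix C, Equation 1 reduces to a per-frequency maximization. A sketch, in which the array shapes are assumptions:

    import numpy as np

    def restoration_transfer_function(H_stack):
        # H_stack: shape (n_depths, NU, NV), holding H(u,v,d) sampled
        # over [d1, d2].  Equation 1: at each (u,v), keep the H value
        # of largest absolute value over all sampled depths.
        idx = np.argmax(np.abs(H_stack), axis=0)
        u, v = np.indices(idx.shape)
        return H_stack[idx, u, v]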
According to alternative embodiments of the present invention, other restoration processes employing T~ may be employed, such as Wiener filters or constrained least squares, either of which may be desirable in certain situations such as noisy conditions. Wiener filters and other aspects of restoration are described in A. Rosenfeld and A. C. Kak, Digital Picture Processing, 2nd Edition, Academic Press, New York, 1982, Vol. I, pp. 268 - 352. The disclosure of the Rosenfeld and Kak document is incorporated herein by reference.
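A Wiener-type variant may be sketched as follows; the constant noise-to-signal parameter K is an assumption of the sketch, whereas a real design would estimate it from the noise statistics, per Rosenfeld and Kak:

    import numpy as np

    def wiener_restore(blurred, T, K=1e-2):
        # Damps frequencies where |T~| is small relative to the noise
        # level, instead of dividing outright as in Fig. 6.
        I = np.fft.fft2(blurred)
        W = np.conj(T) / (np.abs(T) ** 2 + K)
        return np.real(np.fft.ifft2(W * I))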
A particular feature of the present invention is that in the working domain, the |T~(u,v)| function has a positive lower bound and therefore has no zeros. Consequently, unit 104 does not divide by zero or by very small numbers. By appropriately designing the imaging apparatus of Fig. 1, the lower bound can be made large enough to allow restoration even in the presence of a substantial level of noise.
A particular feature of the H(u,v,d) function is as follows:
d = a member of a work domain [d1,d2] of the lens, or of a fragmented work domain which includes a plurality of spaced non-overlapping intervals such as [d1, d2], [d3, d4] and [d5, d6].
For each (u,v):
S~(u,v) = H(u,v,d') such that the absolute value of H(u,v,d') is less than or equal to the absolute value of H(u,v,d) for all d in the work domain, i.e. |S~(u,v)| = min over d of |H(u,v,d)|; and
T~(u,v) = H(u,v,d0) such that the absolute value of H(u,v,d0) is greater than or equal to the absolute value of H(u,v,d) for all d in the work domain, i.e. |T~(u,v)| = max over d of |H(u,v,d)|.
Preferably, the quotient |S~(u,v)|/
|T~(u,v)|, which is always between 0 and 1, is relatively large, and may for example have a value of 1/4 or 1/2. As a result, the variation within the work domain of H(u,v,d) as a function of d is relatively small and therefore it is possible to employ a single restoration transfer function such as T~(u,v) for all d's within the work domain without causing substantial image distortion.
The above system characteristics ensure that almost all the information is restorable. In contrast, in an ordinary camera lens, the absolute value of H(u,v,d) decreases for large (u,v) as |d - df| increases, where df is the distance to the object in focus.
Specifically, the blurring process for a planar object at a distance d from a lens or mirror may be described as:
I~(u,v) = O~b(u,v,d) x H(u,v,d) + n~(u,v), where:
O~b(u,v,d) = the planar Fourier transform of the planar object;
H(u,v,d) = transfer function, as defined above; and
n~(u,v) = Fourier transform of the noise.
Therefore, if for a particular (u,v), H(u,v,d) is so small that O~b(u,v,d) x H(u,v,d) approximately equals n~(u,v), then the (u,v) information is non-restorable in the sense that I~(u,v) is dominated by noise. This is the case for ordinary camera lenses if |d - df| is large.
In contrast, in accordance with the present invention, most or all information may be restored by employing T~(u,v), which is independent of d, to restore an image of an object whose d falls within the working domain. Typically, there is a tradeoff between the size of the working domain and the restoration quality.
Reference is now made to Fig. 7 which is a simplified block diagram of another embodiment of image processing unit 14 of Fig. 1 which is particularly useful in applications in which the embodiment of Fig. 6 is difficult to implement because only a processor of limited computational power is available.
The apparatus of Fig. 7 includes a convolution unit 110 which convolves the output I of unit 12 of Fig. 1 with a convolution kernel t. The convolution kernel t may be supplied by a kernel storage or kernel computation unit 112. The kernel t(x,y) preferably is approximately equal to the inverse Fourier transform of
1/T~(u,v). A preferred method for computing the kernel is disclosed in Rabiner, L. R. and Gold, B. Theory and Application of Digital Signal Processing, Prentice-Hall, 1975, pp. 455 - 483. The disclosure of this document is incorporated herein by reference.
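A sketch of the Fig. 7 data path, assuming scipy is available; the crude truncation used here to obtain a finite kernel is an assumption of the sketch, whereas a real design would use an FIR approximation method such as that of Rabiner and Gold:

    import numpy as np
    from scipy.signal import fftconvolve

    def kernel_from_T(T, size=15):
        # Truncate IFFT(1/T~) to a small size x size kernel t(x,y).
        t = np.fft.fftshift(np.real(np.fft.ifft2(1.0 / T)))
        cy, cx = t.shape[0] // 2, t.shape[1] // 2
        h = size // 2
        return t[cy - h:cy + h + 1, cx - h:cx + h + 1]

    def restore_real_space(blurred, t_kernel):
        # Unit 110: convolve the sensed image with the kernel t.
        return fftconvolve(blurred, t_kernel, mode="same")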
The above image restoration methods may be employed in applications in which a fragmented or noncontinuous work domain is desired. However, it is believed that the alternative to these image restoration methods, described below from the next paragraph onward, is preferable for fragmented work domains. The results of image restoration are typically less satisfactory for fragmented work domains than for continuous work domains.
Alternatively, the image restoration methods shown and described above may be
replaced by an adaptive image restoration method in which the optical restoration transfer function is adapted to d, the depth of the object. Blurring of the object by the lens varies slightly as a function of d and a
particular feature of the adaptive method is that this variation is taken into account. The adaptive method preferably comprises the
following steps:
a. The working domain, which may or may not be fragmented, is partitioned into a plurality of n-1 subdomains. For example, for a non-fragmented working domain [d1, d2], the partition includes a plurality of n-1 subdomains [ei, ei+1] for i = 1, ..., n-1, where e1 = d1, en = d2 and:
ei+1 > ei for all i = 1, ..., n-1.
n may be an integer within a suitable range such as 3 - 10.
b. For each subdomain, the restoration transfer function is computed in accordance with Equation 1, set forth above, thereby to define a plurality of n-1 restoration transfer functions T~ 1, ..., T~ n-1, as illustrated conceptually in
Fig. 8.
Alternatively, as illustrated conceptually in Fig. 9, for each subdomain, the kernel for real space restoration is computed, as explained hereinabove with reference to Fig.
7, thereby to define a plurality of n-1 kernels t1(x,y), ..., tn-1(x,y).
c. The image is restored either in the Fourier plane or in real space, as convenient, using the restoration transfer functions in the first instance and the convolution kernels in the second instance. The result of this step is a plurality of n-1 restored images I^1, ..., I^n-1.
d. The n-1 restored images computed in step c are fused or merged into a single
restored image I^ using a suitable method such as the following, illustrated conceptually in
Fig. 10:
i. For each of the n-1 images I^1, ..., I^n-1, define a plurality of m1 x m2 square blocks which cover the image. The square blocks covering image I^i are indexed herein (i,j,k) where j = 1, ..., m1 and k = 1, ..., m2. For each ordered pair (j,k), there are n-1 square blocks, one for each of the n-1 restored images I^i.
ii. For each ordered pair (j,k), select the sharpest square block from among the blocks (i,j,k). The criteria for the selection process may be conventional statistical or morphological criteria for image quality. For example, the selected block may be that which is most similar to a preassembled directory of known image features, such as edges, isolated local extreme points, roof edges and smooth functions, where a correlational similarity criterion is employed.

iii. Assemble the single restored image I^ by fusing the m1 x m2 sharpest square blocks selected in step ii.
The final t(x,y) for restoration of block (j,k) is preferably interpolated from the kernels selected for block (j,k) and for its neighbors, blocks (j,k+1), (j+1,k) and (j+1,k+1). A suitable interpolation procedure prevents formation of artificial edges at the boundaries between blocks.
Any suitable interpolation method may be employed, such as bilinear interpolation methods. An interpolation method which is believed to provide high quality results is described in the following publication, the disclosure of which is incorporated herein by reference:
Burt, P. J. and Adelson, E. H. "A multiresolution spline with application to image mosaics", ACM Transactions on Graphics, Vol. 2, No. 4, Oct. 1983, pp. 217 - 236.
In Fig. 10, * denotes convolution.
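Steps d(i) through d(iii) may be sketched as follows; gradient energy is used here as a stand-in sharpness criterion (the selection criteria contemplated above are statistical, morphological or correlational), and the image dimensions are assumed divisible by the block size:

    import numpy as np

    def fuse_restored(images, block=16):
        # Tile each restored image I^i into blocks; at each block
        # position keep the block from whichever image scores sharpest.
        h, w = images[0].shape
        out = np.empty_like(images[0])
        for y in range(0, h, block):
            for x in range(0, w, block):
                tiles = [im[y:y + block, x:x + block] for im in images]
                scores = [np.sum(np.abs(np.gradient(t)) ** 2) for t in tiles]
                out[y:y + block, x:x + block] = tiles[int(np.argmax(scores))]
        return out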
Reference is made to Fig. 11 which is a simplified block diagram of an alternative embodiment of image processing unit 14 of Fig. 1 which is particularly suited for processing color image data such as RGB data. The apparatus of Fig. 11 includes a color component channeling unit 202 which channels each of a plurality of color components of the input color image I, such as R, G and B components, to a corresponding one of a plurality of image processing subunits 204, 206 and 208,
corresponding in number to the plurality of color components.
Each image processing subunit may be similar to the image processing apparatus of
Fig. 6 or the image processing apparatus of Fig. 7. The I^ outputs of image processing subunits 204, 206 and 208 are referenced I^R, I^G and I^B in Fig. 11. All three outputs are provided to a color combining unit 210 which combines them and provides an output color image I^. It is appreciated that many output devices, such as display monitors, video recorders, video printers and computers with video acquisition equipment, include a color combining function because they are capable of receiving a plurality of channels, such as 3 channels of R, G and B data respectively, and providing an output color image.
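A sketch of the Fig. 11 channeling, assuming for simplicity that the same restoration transfer function T~ serves all three channels (per-channel transfer functions are equally possible):

    import numpy as np

    def restore_color(rgb, T):
        # Units 202-210: route each color component through its own
        # gray-level restoration, then recombine into one color image.
        def restore(ch):
            return np.real(np.fft.ifft2(np.fft.fft2(ch) / T))
        return np.dstack([restore(rgb[..., c]) for c in range(rgb.shape[-1])])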
According to one embodiment of the present invention, blurred images may be recorded by a video recorder without restoration, and restoration may be carried out later by playing back the blurred image from a video player.
A particular feature of the invention shown and described herein is that, in contrast to conventional imaging systems, here, the mapping from the three-dimensional object to the two-dimensional image is almost independent of the depth of the object within the field of view. This is because the point spread function h(x,y,z) is, up to a first order, independent of the depth coordinate z within a predetermined depth domain, i.e. within a predetermined range of z's.
It is appreciated that the particular embodiments of imaging apparatus shown and described herein are merely exemplary of the wide variety of ways in which it is possible to implement the multifocal imaging apparatus of Fig. 1.
Appendix C is a Fortran language listing of a software implementation of image processing unit 14 of Fig. 1 according to one alternative embodiment of the present invention. Appendix C also includes results of operation of the software implementation on data derived from a software simulation of imaging apparatus 10 and sensor 12. It is appreciated that inclusion of the listing of Appendix C is intended merely to provide an extremely detailed disclosure of the present invention and is not intended to limit the scope of the present invention to the particular implementation of Appendix C.
In the present specification and claims, the term "distance" of an object or image from an optical system refers to that component of the distance which is parallel to the optical axis of the system.
In the present specification and claims, the term "distance" of an object point from an optical system refers to the distance between a first plane and a second plane, both of which are perpendicular to the optical axis of the system, wherein the first plane includes the object point and the second plane includes the operative portion of the optical system.
The term "depth" of an object relates to the distance of the object, along the optical axis, from the imaging plane.
It is appreciated that the apparatus shown and described above has applicability in a broad variety of devices and systems used for imaging. To illustrate this, Figs. 12 - 16 illustrate various applications of the imaging apparatus shown and described above, it being understood that the particular applications of Figs. 12 - 16 are merely exemplary of possible imaging applications and are not intended to be limiting.
The apparatus of Fig. 12 includes video or electronic still camera apparatus or camcorder apparatus 250 which is imaging a scene 252 in which various objects are located at different distances from the camera/camcorder 250. The lens 254 of the camera/camcorder 250 is not a conventional lens but rather comprises an imaging device constructed and operative in accordance with the present invention such as any one of the imaging apparatus embodiments shown and described above with reference to Figs. 2A - 5. A sensor 256 receives the raw image formed by the multifocal imager 254.
Optionally, the sensed raw image is recorded and, optionally, played back by recording unit 258 and playback unit 260, respectively, for off-line reception by digitizer 264.
Although the record/playback option is not specifically illustrated in Figs. 13 and 14, it is appreciated that the apparatus of Figs. 13 and 14 may also include this option, if desired.
The sensed raw image is digitized by a digitizer 264 and the resulting digital
representation of the image is provided to an image restoration computation unit 266 which has image processing capabilities as shown and described above with reference to Figs. 6 - 11. The restored digital image generated by
computation unit 266 may be converted into analog representation by a D/A unit 270, and may then be displayed or printed by an output device 272 such as a TV monitor or printer.
Alternatively, the digital output of computation unit 266 may be stored on a
conventional digital memory or digital tape 274. The digital output of computation unit 266 may also be outputted directly by a digital output device 280 which is capable of handling digital input, such as but not limited to a printer, recorder or monitor with digital interface.
Optionally, image restoration computation unit 266 may receive distance information from a rangefinder 282 such as an ultrasound rangefinder or laser rangefinder.
Unit 266 may employ this information to improve the image of the features falling within the range found by the rangefinder.
In other words, provision of a
rangefinder allows a user to select an object in a scene which is of relative importance, and determine the distance to that object using the rangefinder. The image restoration computation unit 266 is preferably operative to receive the distance information and to select a transfer function on the basis of that distance such that optimal restoration results are achieved for the depth at which the selected object lies.
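A minimal sketch of this distance-driven selection; the subdomain boundaries and the list of precomputed restoration transfer functions are assumed inputs, and the names are illustrative:

    def select_transfer_function(distance_mm, subdomains, transfer_functions):
        # Pick the restoration transfer function whose depth subdomain
        # contains the rangefinder reading.
        for (d_lo, d_hi), T in zip(subdomains, transfer_functions):
            if d_lo <= distance_mm <= d_hi:
                return T
        raise ValueError("distance outside the working domain")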
It is appreciated that an improved electronic still camera, video camera or
camcorder may be constructed to include all components of Fig. 12. Alternatively, the components of the apparatus of Fig. 12 which are other than conventional may be retrofitted to an existing conventional electronic still camera, video camera or camcorder.
The apparatus of Fig. 12 is useful in high-definition television (HDTV) applications where, due to very high resolution, the depth domain is, in conventional apparatus, very small and even small deviations from focus result in perceptible defocussing.
Fig. 13 illustrates filming apparatus for filming a scene 310 including a film camera 312 fitted with a multifocal lens 314 which is constructed and operative in accordance with the present invention and which may comprise any of the imaging apparatus embodiments shown and described hereinabove with reference to Figs. 2A - 5. The film 320 generated by film camera 312 is developed conventionally and the
developed film is scanned by a scanner 324 which provides a digital output. The digital output of scanner 324 is provided to an image
restoration computation unit 326 which has image processing functions in accordance with the present invention such as those shown and described above with reference to Figs. 6 - 11. The sharpened image generated by the restoration unit 326 is converted into a hard copy 330 by a suitable output device 332 such as a plotter.
It is appreciated that an improved film camera system may be constructed to include all components of Fig. 13. Alternatively, the components of the apparatus of Fig. 13 which are other than conventional, such as the multifocal lens and the image restorer, may be provided stand-alone, for use in conjunction with
existing conventional equipment including film camera, development laboratory, digital scanner and plotter.

Fig. 14 is a simplified block diagram of microscopy apparatus in which conventional microscope functions are combined with the functions of the invention shown and described herein.
The apparatus of Fig. 14 includes a microscope 350 fitted with a multifocal objective lens 352 which is constructed and operative in accordance with the present invention and which may comprise any of the imaging devices shown and described above with reference to Figs. 2A - 5. A camera 354, such as a video, electronic still or film camera, captures the magnified image on its sensor 355, which may be a CCD, tube or film. The digital output of the camera is provided to an image restoration unit 356. If the output of the camera is other than digital, the output is first digitized by an A/D unit 358.
Image restoration unit 356 is a computational unit with image processing
capabilities such as those shown and described above with reference to Figs. 6 - 11. The output of unit 356 is provided to an analog output device 360 via a D/A unit 362 or to a digital output device 364.
It is appreciated that an improved microscopy system may be constructed to include all components of Fig. 14. Alternatively, the components of the apparatus of Fig. 14 which are other than conventional, such as the multifocal lens and the image restorer, may be provided stand-alone, for use in conjunction with
existing conventional equipment including microscope, camera, A/D and D/A converters, and output device.
The apparatus of Fig. 14 is useful in inspecting a wide variety of objects and scenes such as but not limited to a drop of liquid, a three-dimensional transparent or semitransparent scene, a 2.5-dimensional surface such as the surface of a microchip and an object or material whose surface is not flat.
Fig. 15 illustrates a laparoscope or endoscope for imaging and photography of interior portions of a body such as that of a human patient. In the apparatus of Fig. 15, conventional laparoscopy/endoscopy functions are combined with the functions of the invention shown and described herein. The apparatus of Fig. 15 includes an objective lens 400 which is inserted into the body using conventional medical techniques at a location depending on the location which it is desired to image. The objective lens 400 comprises a multifocal imager such as any of those shown and described above. A first bundle 402 of optical fibers is
provided, one end 403 of which is disposed in the image plane of the multifocal imager. The other end 404 of optical fiber bundle 402 is operatively associated with a relay lens 408 and a sensor 410 such that the image formed at end 404 of the bundle 402 is projected onto sensor 410 via relay lens 408.
The image captured by sensor 410 is provided to an image restoration computation unit 414 which may be similar to any of the image processing units shown and described above. The image generated by the image processor has much higher resolution over a large depth domain than a conventionally formed image. This image is provided to a suitable output device 416.
Illumination of the body interior portion to be imaged may be provided by a second optical fiber bundle 420 which is operatively associated with a source of light located externally of the body of the patient.
Fig. 16 illustrates laparoscopy or endoscopy apparatus which is a modification of the apparatus of Fig. 15 in that the entire camera apparatus is inserted into the body. The apparatus of Fig. 16 is generally similar to the apparatus of Fig. 15 except that relay lens 408 is eliminated and, instead, the sensor 410 is attached to the first optical fiber bundle 402 at end 404 thereof. All elements of the
apparatus apart from image restoration
computation unit 414 and output device 416 may be inserted into the body. The elements of the apparatus which operate interiorly of the human body may communicate with exteriorly operating elements 414 and 416 via a suitable electric cable 424.
Alternatively, image restoration computation unit 414 may also be inserted into the body, such that all elements of the
apparatus apart from output device 416 operate interiorly of the body. The interiorly
operating elements may, in this case,
communicate with the exteriorly operating output device via a suitable electric cable 428.
Fig. 17 is a simplified block diagram of multifocal imaging apparatus constructed and operative in accordance with another embodiment of the present invention. The apparatus of Fig. 17 includes lens 450 which is operatively associated with a plurality of semitransparent mirrors or beam splitters such as three
splitting elements 452, 454 and 456. The focal planes of splitting element 454 for a point source 458 are indicated by dotted lines 460 and 462 respectively. The focal planes of splitting element 456 for point source 458 are indicated by dotted line 464 and dotted line 466
respectively. Four sensors are provided for respectively imaging the four images generated in the illustrated embodiment. The locations of the four sensors are indicated by solid lines 470, 472, 474 and 476. All sensors are, as illustrated, parallel respectively to the corresponding focal planes.
As shown, the four sensors are located at different respective distances from their corresponding focal planes, respectively, thereby to provide a plurality of differently focussed images. These images are then
preferably rescaled to a uniform magnification by one or more rescaling units 480. In adding unit 482, the rescaled differently focussed images may be added, i.e. combined "one on top of the other" by addition, averaging or other suitable methods, into a single image. This image is processed by an image restoration computation unit 484, similar to those described above, so as to generate a final image including contributions from all four differently focussed images.
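The rescale-and-add path of Fig. 17 may be sketched as follows; scipy's zoom is assumed for the rescaling units 480, and cropping to a common size is a simplification of the sketch:

    import numpy as np
    from scipy.ndimage import zoom

    def compose_multisensor(images, magnifications):
        # Units 480: rescale each sensor image to a uniform magnification;
        # unit 482: average the rescaled images into one composite, which
        # is then passed to the restoration unit 484.
        base = max(magnifications)
        rescaled = [zoom(im, base / m) for im, m in zip(images, magnifications)]
        ny = min(r.shape[0] for r in rescaled)
        nx = min(r.shape[1] for r in rescaled)
        return np.mean([r[:ny, :nx] for r in rescaled], axis=0)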
The present invention is not limited in its applicability to planar sensors such as planar CCD's or such as films lying in a plane. If a nonplanar sensor is employed, the present invention may be used, for example, by treating the digitized sensor output as though it had arrived from a planar sensor.
The term "transfer function" has been used generally to refer to the appropriate one of the following terms: contrast transfer function, optical transfer function (OTF), modulation transfer function (MTF) or phase transfer function.
The present invention is useful in applications in which the edge information is the most crucial portion of the total image information, such as in industrial robot vision applications. Preferably, for these
applications, a conventional edge detector function is provided upstream of or instead of the image processing unit 14 which is operative to enhance edges located in any of the planes falling within the depth domain of the
apparatus.
Referring back to Fig. 12, optionally, record and playback functions are provided between digitization unit 264 and image
processor 266 instead of or in addition to playback and record functions 258 and 260.
Preferably, in the embodiments of Figs. 1, 12 and 14 - 16, record and playback functions are provided which are operative upstream of the image processing function. This allows image processing, such as image processing of a video movie, to be performed off-line as well as on-line. The following procedure may be employed:
a. Record the sensor output either before or after digitization;
b. Use the playback function to play back frame by frame;
c. Perform image processing, such as image restoration or edge detection, on each frame; and
d. Record the output, frame by frame, in the output device.
The above optional feature has the advantage of allowing image restoration to be performed frame by frame, off-line, recording the output frame by frame for later viewing. In other words, even if the image processor is not fast enough to work at video rate, the present apparatus can be employed to generate an in- focus video movie.
In Fig. 14, record and playback functions are optionally provided intermediate camera 354 and A/D unit 358 and/or intermediate camera 354 and image restoration computational unit 356.
In Fig. 16, optionally, either or both of the following sequences of functions may be added intermediate sensor 410 and image
restoration computational unit 414:
a. A/D, recorder of digital image, playback; and/or
b. analog image recorder, playback, A/D.
Suitable record and playback functions, similar to those described above, may also be provided in the embodiment of Fig. 15.
It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather the scope of the present invention is defined only by the claims which follow:
APPENDIX A (prescription values R, T, N and V for the lens surfaces of Fig. 4; presented as a table image in the original document)
APPENDIX B (prescription for the optical design of Fig. 5; presented as a table image in the original document)
APPENDIX C
CCC R E S E A R C H   S I M U L A T I O N
CCC P R O G R A M
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
CCC THE PROGRAM IS WRITTEN IN FORTRAN OF
CCC VMS SYSTEM ON VAX COMPUTER.
CCC CCC CCC
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
CCC THIS PROGRAM IS A RESEARCH PROGRAM FOR
CCC SIMULATION TO THE IMAGING PROCESS MADE
CCC BY THE MULTIFOCAL APPARATUS AND THE
CCC RESTORATION PROCESS.
CCC 1. THE SIMULATED FLAT OBJECT (IMAGE)
CCC IS CREATED BY THE ROUTINE
CCC FILLIMG(...),
CCC ALTERNATIVELY THE IMAGE CAN BE READ
CCC FROM DISK RATHER THAN CREATED BY THE
CCC ROUTINE.
CCC 2. THE IMAGE IS TRANSFORMED INTO
CCC FOURIER PLANE.
CCC 3. RECEIVE DATA FOR THE SIMULATION,
CCC AS WHAT IS THE FOCAL LENGTH OF THE
CCC LENSES IN THE APPROXIMATION, THEIR
CCC DIAMETER, AND THE DESIRED BOUNDARIES
CCC OF THE WORK DOMAIN (DOB1, DOB2).
CCC 4. CALCULATE THE RESTORATION OTF
CCC (ROUTINE MAXIM), BY CALCULATING THE
CCC PSF OF THE SYSTEM FOR OBJECTS LOCATED
CCC AT A SEQUENCE OF DISTANCES BETWEEN
CCC DOB1 AND DOB2 WITH EQUAL SEPARATION.
CCC THE RESTORATION OTF IS CALCULATED BY
CCC TAKING THE MAXIMAL ABSOLUTE VALUE FOR
CCC EACH FREQUENCY OVER ALL THE SEQUENCE
CCC OF PSF'S.
CCC THE TEST:
CCC 5. A LOOP ON NPNTS NO. OF DISTANCES
CCC PARTITIONING THE DEPTH DOMAIN. FOR
CCC EACH ONE, THE GIVEN FLAT OBJECT IS
CCC SIMULATED TO BE POSITIONED AT THIS
CCC DISTANCE, THE OBJECT IS BLURRED (AS
CCC DESCRIBED IN ROUTINE AVPSF) BY THE
CCC SIMULATED OPTICAL APPARATUS AND THEN
CCC DEBLURRED (RESTORED) BY THE
CCC RESTORATION OTF TO RECEIVE THE
CCC RESTORED IMAGE.
CCC THE GOAL OF THIS SIMULATION IS TO CCC OBTAIN A QUALITATIVE EVALUATION OF AN
CCC IMAGE BLURRED BY THE SIMULATED
CCC MULTIFOCAL APPARATUS AND THEN RESTORED
CCC BY THE RESTORATION OTF. THE QUALITY OF
CCC THE RESTORED IMAGE AS A FUNCTION OF
CCC THE DISTANCE OF THE FLAT OBJECT
CCC FROM THE LENS CAN BE STUDIED AS WELL.
CCC COMMENTS :
CCC TODFFT ROUTINE IS FOR 2D FFT FROM REAL
CCC TO COMPLEX.
CCC CALL TODFFT(A-REAL ARRAY,NX - X
CCC DIMENSION,NY-Y DIMENSION, 0-FORWARD
CCC FFT)
CCC CALL TODFFT (A-REAL ARRAY,NX - X
CCC DIMENSION,NY-Y DIMENSION, 1-INVERSE
CCC FFT)
CCC THE IMAGE IS REPRESENTED AS AN ARRAY
CCC OF REAL*4 NUMBERS EACH OF WHICH
CCC IS BETWEEN 0. TO 255.
CCC IAH- A CONTROL PARAMETER, S.T. IF
CCC IAH=0, THE IMAGE BLURRED BY THE
CCC MULTIFOCAL APPARATUS AND THE IMAGE
CCC BLURRED BY AN ORDINARY LENS, WILL NOT
CCC BE CALCULATED, BUT ONLY THE RESTORED
CCC IMAGE. IF IAH=1, ALL 3 KINDS WILL BE
CCC CALCULATED AND OUTPUTTED.
CCC AS FOR DISPLAY - THE IMAGES ARE ARRAYS
CCC OF NX x NY PIXELS, REPRESENTED BY
CCC REAL*4 NUMBERS. THERE IS A FORMAT FOR
CCC THEIR I/O DICTATED BY THE I/O ROUTINES
CCC RBLK1A AND WBLK3A RESPECTIVELY. THE
CCC DISPLAY PROCEDURE CAN TAKE THE FILES
CCC OUTPUTTED BY THE PROGRAM AND DISPLAY
CCC THEM ON ANY ARBITRARY SYSTEM, OR
CCC INTEGRATED INTO THIS PROGRAM.
CCC THE IMAGE SIZE IS CHANGEABLE IN THE
CCC PARAMETER STATEMENT.
CCC POWER OF TWO SIZE AND NX=NY IS
CCC PREFERED TO AVOID NECESSITY FOR OTHER
CCC CHANGES.
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
PARAMETER NY=256,NX=NY,NX2=NX+2
real psf(nx,ny),RPSFAC(NX2,NY),
* PSFAC(NX2,NY),psfi(nx2,ny)
REAL A(NX2,NY) ,A1(NX2,NY),
EZ (NX2,NY) ,EZ2 (NX2,NY)
real img(nx2,ny), IMGl (NX2,NY)
INTEGER ASK(12)
CCC PSF - CREATING A PSF.
CCC PSFAC - ACCUMULATED/AVERAGED PSF
CCC PSFI - PSFAC BUT IN FOURIER PLANE
IAH=0 !IF IAH=1, OUTPUT OF IMAGE BLURRED BY MULTIFOCAL IMAGER,
C ! AND ANOTHER BLURRED BY SIMPLE LENS FOR COMPARISON WILL BE GIVEN.
C ! IAH=1 IS OPTIONAL.
AAA=SQRT (0. +NX*NY)
BBB=1./AAA
CALL SET1(A,NX2*NY,0.)
CALL SET1(PSFAC,NX2*NY,0.)
C SUBROUTINE FILLPNT (A,NX,NY) ! FILL WITH
A POINT IN THE MIDDLE
CP CALL FILLPNT (A,NX,NY) ! FILL WITH A
POINT IN THE MIDDLE
C SUBROUTINE FILLIMG(IMG,NX,NY,VAL) !FILL IMAGE WITH CONTENT
VAL=245.
CALL FILLIMG(IMG,NX,NY,VAL) ! FILL IMAGE WITH CONTENT
C CALL RBLK1A(IMG,NN) ! FILL IMG BY
READING A FILE
ccc changing array format to fit fft routine and perform forward fft:
CALL NIPCON(IMG,NX,NX2,NY)
CALL TODFFT(IMG,NX,NY,0) !FORWARD
ccc NOW IMG CONTAINS FOURIER TRANSFORMED IMAGE
CCC FOCAL LENGTH OF LENSES BUILDING THE MULTIFOCAL PSF IN ROUTINE AVPSF
CALL ASKIS(' GIVE FOCAL LENGTH (mm) - ',IF,1)
F=IF
CALL ASKIS(' GIVE LENS DIAMETER (mm) - ',IDLENS,1)
DLENS=IDLENS
CCC DOB1, DOB2 - DISTANCE (IN mm) FROM LENS
TO CLOSEST AND FURTHEST PLANES
CCC BOUNDING THE WORKING DOMAIN.
CALL ASKIS(' GIVE DOB1, DOB2 (mm) -
',ASK,2)
DOB1=ASK(1)
DOB2=ASK(2)
ICHAN=10
WRITE (ICHAN, 1005) F,DLENS,DOB1,DOB2
1005 FORMAT (' FOCAL LENGTH=' ,G12.5, 5X, 'LENS
DIAMETER=', G12.5,
* /5X,'MIN/MAX OBJ. DIST=' ,2G12.5///)
C F=24 ! FOCAL LENGTH, UNIT=MILLIMETERS
C DLENS=20 ! LENS DIAMETER
C DOB1=3000 ! CLOSEST POINT
C DOB2=9000 ! FURTHEST POINT
NP=13 ! NUMBER OF SAMPLING POINTS IN AVPSF
CCC 1/F=1/FOB+1/DOB
FOB1=1./F-1./DOB1
FOB1=1./FOB1
FOB2=1./F-1./DOB2
FOB2=1./FOB2
TYPE *,' FOB1/2,F,DOB1/2=',FOB1,FOB2, F,DOB1,DOB2
C FOB1=? ! FIRST FOCUS
C FOB2=? ! LAST FOCUS
NPNTS=13
DO 10 I=1,NPNTS ! MAXIMIZING COEFF'S OVER ALL OTF'S TO OBTAIN
CCC ! RESTORATION OTF.
TYPE *,' SEQ. NO. OF OTF TO PARTICIPATE IN MAXIMIZING = ',I
DEL=(DOB2-DOB1)/(NPNTS-1)
DOB=DOB1+(I-1)*DEL
CCC AVPSF IS THE ROUTINE WHICH PRODUCES
CCC THE POINT SPREAD FUNCTION (PSF)
CCC (AS A SIMPLIFIED SIMULATION) BY A
CCC SUPERPOSITION OF A NUMBER OF PSF'S.
CCC SEE DESCRIPTION IN THE ROUTINE AVPSF.
C SUBROUTINE AVPSF(A,NX,NY,EZ,DLENS,F,FOB,DOB,FOB1,FOB2,NP)
CCC CREATING THE PSF FOR A LENS WHICH
CCC WOULD FOCUS AN OBJECT AT DISTANCE DOB
CCC FROM THE LENS.
CALL AVPSF(A,NX,NY,EZ,DLENS,F,FOB,DOB,FOB1,FOB2,NP)
CALL GAGOPER1(EZ2,A,NX,NY) !DISPLACE THE OPERATOR TO ORIGIN
CALL COPY1(A,EZ2,NX*NY)
CALL NIPCON(A,NX,NX2,NY) ! UP
CALL TODFFT(A,NX,NY,0) ! FORWARD
C SUBROUTINE MAXIM(A,B,N) ! COMPLEX A,B;
B-MAXABS((A,B))
10 CALL MAXIM(A, PSFAC,NX2*NY/2)
C SUBROUTINE REPORT(A,NX,NY,DOB,ICHAN) ! WRITES OUT RESULTS
CCC RPSFAC - TO CREATE THE RESTORATION PSF IN REAL SPACE.
CALL COPY1(RPSFAC,PSFAC,NX2*NY/2)
CALL TODFFT(RPSFAC,NX,NY,1) ! INVERSE FFT (IN-PLACE)
CALL NIPCON(RPSFAC,NX2,NX,NY) ! SOME FORMAT CHANGES.
C CALL REPORT(RPSFAC,NX,NY,-1.,6) ! WRITE OUT RESULTS
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
WRITE (10,1010) (RPSFAC(I,1),I=1,10) ! WRITING THE TEN PIXEL PROFILE
1010 F0RMAT(1X,6G12.5,/1X,4G12.5)
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
CCC BLURRING BY LENS AND DEBLURRING BY THE
PSFAC:
CALL WBLK3A(IMG,NX*NY,NY) ! OUTPUT TO DISK OF ORIGINAL SIMULATED IMAGE
DO 20 I=1,NPNTS ! BLURRING BY ALL OTF'S.
DEL=(DOB2-DOB1)/(NPNTS-1)
DOB=DOB1+(I-1)*DEL
C SUBROUTINE AVPSF(A,NX,NY,EZ,DLENS,F,FOB,DOB,FOB1,FOB2,NP)
CALL AVPSF(A,NX,NY,EZ,DLENS,F,FOB,DOB,FOB1,FOB2,NP) !PSF OF MULTI.FOC. APPARATUS
CCC TO MULTIPLY BY SQRT(NX*NY) ? OR WHAT?
CALL GAGOPER1(EZ2,A,NX,NY) !DISPLACE THE OPERATOR TO ORIGIN
CALL COPY1(A,EZ2,NX*NY) !EZ2->A
CALL DIV(A,NX*NY,BBB) ! TO DIVIDE THE ARRAY BY "BBB" (MULT. BY AAA).
CALL NIPCON(A,NX,NX2,NY) ! UP
CALL TODFFT(A,NX,NY,0) ! FORWARD; NOW A CONTAINS FOURIER SPACE OTF OF
CCC MULTIFOCAL APPARATUS.
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
IF(IAH.EQ.1)THEN
CCC CREATING THE IMAGE BLURRED BY THE
CCC SIMULATED MULTIFOCAL APPARATUS
CALL CMULT(IMG1,A,IMG,NX2*NY/2) !IMG1=IMG*A (FOURIER OTF OF MULT.FOC.AP)
CALL TODFFT(IMG1,NX,NY,1) ! IFFT
CALL NIPCON(IMG1,NX2,NX,NY)
END IF
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
CALL RESTORE(A,PSFAC,NX2*NY/2) !A=A/PSFAC IN FOURIER
CALL CMULT(A,A,IMG,NX2*NY/2)
CALL TODFFT(A,NX,NY,1) ! IFFT
CALL NIPCON(A,NX2,NX,NY)
C ICHAN=6
C CALL REPORT(A,NX,NY,DOB,ICHAN)
C ICHAN=10
C CALL REPORT(A,NX,NY,DOB,ICHAN)
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
IF(IAH.EQ.1)THEN
CCC FOR COMPARISON, WRITE OUT THE IMAGE
CCC OBTAINED BY ORDINARY BLUR.
CCC LET DOB3=(DOB1+DOB2)/2 AND LET
CCC FOB3 BE THE LENS-PLANE DISTANCE SUCH
CCC THAT AN OBJECT AT DISTANCE DOB3 WILL
CCC BE IMAGED SHARPLY ON THIS PLANE.
CCC THE FOLLOWING BLUR IS OBTAINED BY SIMULATING THE IMAGE CREATED
CCC BY A TEST LENS WITH THE SAME FOCAL LENGTH, SENSOR AT THE PLANE AT DISTANCE
CCC FOB3 FROM THE LENS AND OBJECT AT DISTANCE DOB.
DOB3=0.5*(DOB1+DOB2)
FOB3=1./F-1./DOB3
FOB3=1./FOB3
CALL AVPSFONE(A1,NX,NY,EZ,DLENS,F,FOB,DOB,FOB3,FOB3,1) !A1 = THE PSF
CALL GAGOPER1(EZ2,A1,NX,NY) ! DISPLACE OPERATOR TO ORIGIN
CALL COPY1(A1,EZ2,NX*NY)
CALL DIV(A1,NX*NY,BBB) ! TO DIVIDE THE ARRAY BY "BBB" (MULT. BY AAA).
CALL NIPCON(A1,NX,NX2,NY)
CALL TODFFT(A1,NX,NY, 0)
CALL CMULT(A1,A1,IMG,NX2*NY/2)
CALL TODFFT (A1,NX,NY, 1)
CALL NIPCON(A1,NX2,NX,NY)
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC CCC OUTPUT: ANY OTHER DESIRED OUTPUT
ROUTINE CAN BE CHOSEN HERE.
TYPE *,' NOW OUTPUT OF IMAGE BLURRED
BY MULTIFOCAL IMAGER'
CALL CALIB(IMG1,NX*NY,0.,245.)
CALL WBLK3A(IMG1,NX*NY,NY) ! OUTPUT TO DISK OF IMAGE BLURRED BY SIMULATED
CCC ! MULTIFOCAL APPARATUS
TYPE *,' NOW OUTPUT OF IMAGE BLURRED BY ORDINARY LENS'
CALL CALIB(A1,NX*NY,0.,245.)
CALL WBLK3A(A1,NX*NY,NY) ! OUTPUT TO DISK OF IMAGE BLURRED BY ONE
CCC SIMPLE LENS
END IF ! THOSE ARE COMPUTED AND OUTPUTTED ONLY IF IAH=1
TYPE *,' NOW OUTPUT OF IMAGE BLURRED BY THE MULTIFOCAL IMAGER AND '
TYPE *,' RESTORED BY RESTORATION OTF.'
CALL CALIB(A,NX*NY,0.,245.)
CALL WBLK3A(A,NX*NY,NY) ! OUTPUT TO DISK OF IMAGE BLURRED BY MULTIFOCAL
CCC !APPARATUS AND RESTORED BY RESTORATION OTF
20 CONTINUE
STOP
END
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE REPORT (A,NX,NY,DOB, ICHAN) ! WRITE OUT RESULTS
REAL A(NX,NY)
WRITE (ICHAN, 1000)DOB, (INT(10000*A
(NX/2+I,NY/2+1)),I=1,NX/2)
1000 FORMAT(//' DOB=' ,G12.5, / 8(1X,16I5/))
RETURN
END
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE RESTORE (A, PSFAC,N) !A=A/
PSFAC IN FOURIER
COMPLEX A(N), PSFAC (N)
DO 1 I=1,N
IF (CABS (PSFAC (I)).LT.1. E-14)THEN
A(I) = (0.,0.)
ELSE
A(I)=A(I)/PSFAC(I)
END IF
1 CONTINUE
RETURN
END
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE MAXIM(A,B,N) ! COMPLEX A,B;
B=MAXABS ( (A,B))
CCC TO CHECK IF FOR OTF'S IT IS POSSIBLE
TO SAVE TIME
COMPLEX A(N),B(N)
DO 1 I=1,N
IF (CABS (A(I)).GT.CABS(B(I)))THEN
B(I)=A(I)
END IF
1 CONTINUE
RETURN
END
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE AVPSF(A,NX,NY,EZ,DLENS,F,FOB,DOB,FOB1,FOB2,NP)
CCC ACCUMULATE THE PSF'S CREATED BY ONE
CCC OBJECT POINT AT DIFFERENT FOCAL
CCC PLANES.
CCC A(NX,NY) - ACCUMULATED IMAGE OF ALL
CCC FOCII.
CCC EZ(NX,NY) - IMAGE FOR A PARTICULAR
CCC FOCUS
CCC DLENS - DIAMETER OF LENS
CCC F - FOCAL PLANE OF LENS
CCC FOB - FOCAL PLANE OF OBJECT
CCC DOB - DISTANCE OF OBJECT
CCC DOB1, DOB2 - RANGE OF CHANGE FOR
CCC OBJECT DISTANCE RELATIVE TO LENSE
CCC NP - NUMBER OF PLANES INCLUDING FOB1,
CCC FOB2 TO INTEGRATE ON.
REAL A(NX,NY),EZ(NX,NY)
SAF=1.E-25
CCC 1/F=1/FOB+1/DOB
FOB=1./ (1./F-1./DOB) ! FOCAL DISTANCE
OF OBJECT.
CCC RAD (RADIUS OF CIRCLE OF CONFUSION, OR
PROPORTIONAL TO SIGMA) :
CCC RAD/DISTANCE FROM FOCAL PLANE (DFP) =
DLENS/2 / FOB
CALL SET1(A,NX*NY,0.)
DEL=(FOB2-FOB1)/(NP-1) ! DISTANCE
BETWEEN SAMPLING PLANES.
CCC FOR EQUI-DISTANT PLANES IN OBJECT PLANE
DOB1=1./F-1./FOB1
DOB1=1./DOB1
DOB2=1./F-1./FOB2
DOB2=1./DOB2
DELB=(DOB2-DOB1)/(NP-1) ! DEL OF DIST
BETWEEN SAMPLING PLANE.
DO 10 I=1,NP ! NUMBER OF SAMPLING POINTS
FF=FOB1+(I-1)*DEL ! THE ACTUAL POSITION OF THE SAMPLING POINT
CCCCCCCCCCCCCCCCCC EQUI-DISTANCE IN OBJECT PLANE
CCC RAD=RADIUS OF RESULTING CIRCLE OF CONFUSION.
DOBFF=DOB1+(I-1)*DELB
FF=1./F-1./DOBFF
FF=1./FF
CCCCCCCCCCCCCCCCCC
RAD=ABS(FF-FOB)*DLENS/2./FOB ! RAD/|FF-FOB|=DLENS/(2*FOB)
CCC FOR THE GAUSSIAN APPROXIMATION
CCC SIGMA=SIGMA(RAD) COMPUTE SIGMA AS A FUNCTION OF RAD
C SUBROUTINE CRGAUSS(A,NX,NY,SIG,SAF)
FUNCTION OF RAD C SUBROUTINE CRGAUSS (A,NX,NY,SIG,SAF)
!NORMALIZED 2D GAUSSIAN, TRIG LEV.
SIG=RAD/ (2. *ALOG(2.))
C CALL CRGAUSS (EZ,NX,NY,SIG,SAF)
!NORMALIZED 2D GAUSSIAN, TRIG LEV.
C SUBROUTINE CRCIRC(A,NX,NY,R) ! CIRCULAR
DISC WITH CONSTANT WEIGHT
radp=rad/0.016
C type *,' rad,radpix=' ,rad,radp
ccc THE PSF OF THE MULTIFOCAL APPARATUS IS ccc SIMULATED BY TAKING A NUMBER
CCC OF IMAGES (NP IMAGES) OF AN OBJECT
CCC (POINT OF LIGHT) AT DISTANCE DOB CCC FROM THE LENS. THE SENSOR'S POSITION
CCC IS CHANGED IN EACH IMAGE-TAKING,
CCC SUCH THAT THE DEPTH DOMAIN IN THE
CCC OBJECT SPACE WHICH IS USER DEFINED,
CCC I.E., [DOB1, DOB2] IS DIVIDED INTO NP-
CCC 1 SUBDOMAINS BOUNDED BY NP RESPECTIVE
CCC PARALLEL PLANES, THE DISTANCE BETWEEN
CCC EACH CONSECUTIVE PAIR OF THEM BEING THE SAME.
CCC THE DISTANCES ARE DOB1+ (I-1) * (DOB2-
CCC DOBl)/(NP-1) FOR I=1, ...,NP. THE
CCC RESPECTIVE FOCAL PLANES FOR THOSE
CCC OBJECT DISTANCES ARE FOB1, FOB11,
CCC FOB12 , FOB13 , ... FOB2. THE SIMULATED
CCC PSF IS CREATED IN THE FOLLOWING WAY: A
CCC LOOP ON ALL NP POSITIONS OF THE
CCC SENSOR:
CCC THE POINT SOURCE AT DISTANCE DOB IS
CCC IMAGED ON EACH SUCH SENSOR THE IMAGES
CCC ON THE SENSOR CREATED IN ALL ITS
CCC POSITIONS ARE ADDED UP.
CCC THE RESULT IS THE POINT SPREAD
CCC FUNCTION (PSF) OF THE APPARATUS FOR
CCC OBJECTS LOCATED AT DISTANCE DOB FROM
CCC LENS.
CCC IT IS TO BE NOTED THAT THE
CCC MAGNIFICATION IS ASSUMED TO REMAIN
CCC THE SAME FOR ALL SENSOR POSITIONS.
CCC THE PSF OF A DEFOCUSED LENS IS
CCC APPROXIMATED AS A CIRCULAR DISC
CCC (CIRCLE OF CONFUSION) WHOSE
CCC [RADIUS/ (THE DISTANCE FROM FOCUS) ] =
CCC [(RADIUS OF LENS) / (DISTANCE FROM
CCC FOCUS)]. (ALTERNATIVELY - BY
CCC A CIRCULAR GAUSSIAN S.T. THE HALF
CCC WIDTH OF ITS PROFILE GAUSSIAN
CCC IS PROPORTIONAL TO THE RADIUS OF THE
CCC CIRCLE OF CONFUSION.
CALL CRCIRC(EZ,NX,NY,RADP) ! CIRCULAR DISC WITH CONSTANT WEIGHT
CALL ADD1(A,A,EZ,NX*NY) ! ACCUMULATE INTO "A"
C SUBROUTINE ADD1 (C,A,B,N)
10 CONTINUE
CCC AVERAGING
E=1./NP
CALL DIV(A,NX*NY,E) ! TO DIVIDE THE
ARRAY BY "E".
RETURN
END
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE AVPSFONE(A,NX,NY,EZ,DLENS,F,FOB,DOB,FOB1,FOB2,NP)
CCC ACCUMULATE THE PSF'S CREATED BY ONE
CCC OBJECT POINT AT DIFFERENT FOCAL
CCC PLANES. BUT IN THIS VERSION OF AVPSF
CCC ROUTINE, IT IS DEGENERATED TO
CCC SUPERIMPOSE JUST ONE PSF.
CCC A(NX,NY) - ACCUMULATED IMAGE OF ALL
CCC FOCII.
CCC EZ(NX,NY) - IMAGE FOR A PARTICULAR
CCC FOCUS
CCC DLENS - DIAMETER OF LENS
CCC F - FOCAL PLANE OF LENS
CCC FOB - FOCAL PLANE OF OBJECT
CCC DOB - DISTANCE OF OBJECT
CCC DOB1, DOB2 - RANGE OF CHANGE FOR
CCC OBJECT DISTANCE RELATIVE TO LENSE
CCC NP - NUMBER OF PLANES INCLUDING FOB1,
CCC FOB2 TO INTEGRATE ON.
REAL A(NX,NY) ,EZ (NX,NY)
SAF=1.E-25
CCC 1/F=1/FOB+1/DOB
FOB=1./(1./F-1./DOB) ! FOCAL DISTANCE
OF OBJECT.
CCC RAD (RADIUS OF CIRCLE OF CONFUSION, OR
PROPORTIONAL TO SIGMA) :
CCC RAD/DISTANCE FROM FOCAL PLANE(DFP) =
DLENS/2 / FOB
CALL SET1(A,NX*NY,0.)
DEL= (FOB2-FOB1) / (NP-1) ! DISTANCE
BETWEEN SAMPLING PLANES.
CCC FOR EQUI-DISTANT PLANES IN OBJECT PLANE
DOB1=1./F-1./FOB1
DOB1=1. /DOB1
DOB2=1./F-1./FOB2
DOB2=1./DOB2
C DELB=(DOB2-DOB1)/(NP-1) ! DEL OF DIST
BETWEEN SAMPLING PLANE.
DELB=0. ! TO ENABLE JUST ONE POSSIBLE
DISTANCE FOR PLANE.
DEL=0.
DO 10 I=1,1 ! NUMBER OF SAMPLING POINTS
FF=FOB1+(I-1)*DEL ! THE ACTUAL POSITION OF THE SAMPLING POINT
CCCCCCCCCCCCCCCCCC EQUI-DISTANCE IN OBJECT PLANE
CCC RAD=RADIUS OF RESULTING CIRCLE OF CONFUSION.
C DOBFF=DOBl+ (I-1) *DELB
C FF=1./F-1./DOBFF
C FF=1. /FF
CCCCCCCCCCCCCCCCCC
RAD=ABS(FF-FOB)*DLENS/2./FOB ! RAD/|FF-FOB|=DLENS/(2*FOB)
CCC FOR THE GAUSSIAN APPROXIMATION CCC SIGMA=SIGMA(RAD) COMPUTE SIGMA AS
A FUNCTION OF RAD C SUBROUTINE CRGAUSS (A,NX,NY, SIG, SAF)
!NORMALIZED 2D GAUSSIAN, TRIG
LEV.
SIG=RAD/ (2. *ALOG(2. ))
C CALL CRGAUSS (EZ,NX,NY, SIG, SAF)
!NORMALIZED 2D GAUSSIAN, TRIG
LEV.
C SUBROUTINE CRCIRC (A,NX,NY,R) ! CIRCULAR
DISC WITH CONSTANT WEIGHT
radp=rad/0.016
CALL CRCIRC(EZ,NX,NY,RADP) ! CIRCULAR DISC WITH CONSTANT WEIGHT
CALL ADD1(A,A,EZ,NX*NY) ! ACCUMULATE INTO "A"
C SUBROUTINE ADD1 (C,A,B,N)
10 CONTINUE
CCC AVERAGING
E=1./NP
CALL DIV(A,NX*NY,E) ! TO DIVIDE THE
ARRAY BY "E".
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE FILLPNT (A,NX,NY) ! FILL WITH A POINT IN THE MIDDLE CCC
REAL A(NX,NY)
A(NX/2+1,NY/2+1) =100. ! SQRT (0. +NX*NY)
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE CRGAUSS (A,NX,NY,SIG, SAF)
!NORMALIZED 2D GAUSSIAN, TRIG LEV.
CCC A(129,129) IS THE CENTER. IF A(I,J) < 1/10**7 => A(I,J)=0, CIRCULAR,
CCC I.E., DOMAIN IS LIMITED TO RADIUS 127.
REAL A(NX,NY)
I0=NX/2+1
J0=NY/2+1
PI2=8.*ATAN(1.)
CCC F=1/(2*PI*SIG**2) * EXP (- (X**2+Y**2) /
2./SIG**2)
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
c type *,' crgauss: saf=',saf,sig
c type *,' alog(saf)=',alog(saf)
c type *,' arg under sqrt=',-2*SIG**2*(ALOG(SAF)+ALOG(PI2*SIG**2))
cccccccccccccccccccccccccccccccccccccccccccccccc
if (abs(sig).le.1.e-25) then
rr=0.1
else
RR=SQRT(-2*SIG**2*(ALOG(SAF)+ALOG(PI2*SIG**2)))
end if
IF(RR.GT.NY/2-1)RR=NY/2-1
IF(RR.GT.NX/2-1)RR=NX/2-1
RR2=RR*RR
IF (SIG.NE.0.)DD=-1. /2/SIG**2
S=0.
DO 10 J=1,NY
DO 10 I=1,NX
IF((I-I0)**2+(J-J0)**2.GT.RR2)THEN
A(I, J)=0.
ELSE IF (SIG.NE.0) THEN
A(I,J)=EXP(DD*((I-I0)**2+(J-J0)**2))
ELSE !!!!!!!!! TAKE CARE FOR SIG=0 !!!
C A(I,J) =
S=S+A(I,J)
END IF
10 CONTINUE
DO 20 J=1,NY
DO 20 I=1,NX
20 A(I,J)=A(I,J)/S
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE DIV(A,N,E) !A=A/E
REAL A(N)
DO 1 I=1,N
1 A(I)=A(I)/E
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE FILLIMG(IMG,NX,NY,VAL) !FILL IMAGE WITH CONTENT
CCC FILL WITH STRAIGHT LINES
REAL IMG(NX,NY)
CALL SET1(IMG,NX*NY,0.)
C IMG(NX/2+1,NY/2+1)=VAL
DO 10 J=1,NY,NY/16 ! EACH 16 LINES
DO 10 I=1,NX
10 IMG(I,J)=VAL
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE GAGOPER1 (AOPER,OPER,NX,NY)
REAL AOPER(NX,NY),OPER(NX,NY)
DO 18 J=1,ny
JA=J-NY/2
IF (JA.LE.0) JA=JA+NY
DO 18 I=1,NX
IA=I-NX/2
IF (IA.LE.0) IA=IA+NX
18 AOPER(IA,JA)=OPER(I, J)
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
subroutine subpix(i0,j0,i,j,rr,npar,s) ! subpixel area within a circle
ccc i0,j0 - coords of center pixel
ccc i,j coords of current pixel
ccc rr - r**2 of circle
CCC NPAR - 1D FACTOR OF REFINEMENT (MUST
BE EVEN)
ccc s - weight assigned to (i,j) pixel
(output).
nt=npar/2
AX=I-I0
AY=J-J0
S=0.
DEL=1./(2.*NT)
do 10 j1=-NT,NT
DO 10 I1=-NT,NT
IF ( (AX+I1*DEL) **2+ (AY+J1*DEL) **2.GT.RR)
GO TO 10
S=S+1.
10 CONTINUE
S=S/((2*NT+1)**2)
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE CRCIRC(A,NX,NY,R) ! CIRCULAR DISC WITH CONSTANT WEIGHT
CCC TO BE USED AS A CIRCLE OF CONFUSION, NORMALIZED S.T. TOTAL WEIGHTS=1.
INTEGER LIST(2,1024) ! INDICES LIST
REAL WGT(1024)
REAL A(NX,NY)
RR=R*R
I0=NX/2+1
J0=NY/2+1
MC=0
DO 10 J=1,NY
DO 10 I=1,NX
IF ((I-I0)**2+(J-J0)**2.LE.RR) THEN
A(I,J)=1
ELSE
A(I, J)=0.
END IF
10 CONTINUE
C subroutine subpix(i0,j0,i,j,rr,npar,s) ! subpixel area within a circle
do 30 j=2,ny-1
do 30 i=2,nx-1
if (a(i,j).ne.0. .and. (a(i-1,j).eq.0. .or. a(i+1,j).eq.0. .or.
* a(i,j-1).eq.0. .or. a(i,j+1).eq.0.)) then ! internal border
MC=MC+1
LIST(1,MC)=I
LIST(2,MC)=J
CALL subpix(i0,j0,i,j,rr,10,WGT(MC)) !subpixel area within a circle
ELSE if (a(i,j).eq.0. .and. (a(i-1,j).ne.0. .or. a(i+1,j).ne.0. .or.
* a(i,j-1).ne.0. .or. a(i,j+1).ne.0.)) then ! EXTERNAL border
MC=MC+1
LIST(1,MC)=I
LIST(2,MC)=J
CALL subpix(i0,j0,i,j,rr,10,WGT(MC)) !subpixel area within a circle
else
end if
30 continue
IF(MC.GT.0)THEN
DO 40 K=1,MC
40 A(LIST(1,K),LIST(2,K))=WGT(K)
END IF
S=0.
DO 20 J=1,NY
DO 20 I=1,NX
IF(A(I,J) .EQ.0.)GO TO 20
S=S+A(I,J)
20 CONTINUE
DO 50 J=1,NY
DO 50 I=1,NX
IF(A(I,J) .EQ.0.)GO TO 50
A(I,J)=A(I,J)/S
50 CONTINUE
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
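CCC ILLUSTRATIVE CALL (ADDED FOR CLARITY; NOT PART OF THE ORIGINAL
CCC LISTING). SIZE AND RADIUS ARE ASSUMED VALUES ONLY:
C REAL COC(256,256)
C CALL CRCIRC (COC, 256, 256, 7.5)
C BORDER PIXELS RECEIVE FRACTIONAL AREAS FROM SUBPIX, SO THE EDGE
C OF THE DISC IS ANTI-ALIASED BEFORE THE WEIGHTS ARE NORMALIZED.
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC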
SUBROUTINE ASKIS (MESSAGE,ASK,N)
CCC TYPES MESSAGE ON SCREEN, THEN GETS FROM THE SCREEN A LIST
CCC OF INTEGERS SEPARATED BY COMMAS.
INTEGER ASK(N), OUTCH, INCH
CHARACTER* (*) MESSAGE
INCH=5
OUTCH=6
WRITE (OUTCH, 1000)MESSAGE
1000 FORMAT ('$',A)
READ (INCH,*) (ASK(I),I=1,N)
C WRITE (6,1020) (ASK(I),I=1,N)
C1020 FORMAT (' ASK=',20I4)
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE MINMAX(A,N, SMIN, SMAX)
CCC TO FIND MINIMUM AND MAXIMUM OF A VECTOR
REAL A(N)
SMIN=1.E38
SMAX=-SMIN
DO 1 I=1,N
IF (A(I) .LT.SMIN) SMIN=A(I)
IF(A(I) .GT.SMAX)SMAX=A(I)
1 CONTINUE
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE CALIB(A,N,AMIN,AMAX)
CCC IN-PLACE CALIBRATION OF A; NEW MAX AND MIN VALUES WILL BE AMAX,AMIN.
CCC THE ROUTINE CHECKS AND CORRECTS FOR UNDERFLOW.
REAL A(N)
C SUBROUTINE MINMAX (A,N, SMIN, SMAX)
CALL MINMAX(A,N, SMIN, SMAX)
RNEW=AMAX-AMIN
ROLD=SMAX-SMIN
XX=RNEW/ROLD
DO 10 I=1,N
A(I) = (A(I)-SMIN)*XX
IF (ABS(A(I)).LT.1.E-38) A(I)=0.
10 A(I)=A(I)+AMIN
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC TEMPORARILY
ZERO= - SMIN*XX+AMIN
TYPE *,' FROM CALIB : ZERO WAS TRANSFORMED TO : ' , ZERO
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
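CCC ILLUSTRATIVE CALL (ADDED FOR CLARITY; NOT PART OF THE ORIGINAL
CCC LISTING): RESCALE A RESTORED NX*NY IMAGE IN PLACE TO THE 8-BIT
CCC DISPLAY RANGE:
C CALL CALIB (IMG, NX*NY, 0., 255.)
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC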
SUBROUTINE CMULT(C,A,B,N) ! C= A*B IN COMPLEX
COMPLEX C(1),A(1) ,B(1)
DO 10 I=1,N
10 C(I)=A(I)*B(I)
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE COPY1 (A,B,N)
REAL A (N) ,B (N)
DO 1 I=1,N
1 A(I)=B(I)
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE NIPCON(A,NX1,NX2,NY)
CCC A(NX1,NY)-->A(NX2,NY)
REAL A(1)
IF (NX1-NX2) 10, 20,30
10 CALL NIP(A,NX1,NY,A,NX2)
20 RETURN
30 CALL NIPC(A,NX1,NY,A,NX2)
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE NIP (A,NX1,NY,B,NX2)
CCC NIPUAH (EXPANSION) A->B, NX2>NX1
REAL A(NX1,NY),B(NX2,NY)
NX11=NX1+1
IF (NX1.LT.NX2) GO TO 5
WRITE (6,*) ' FROM NIPCON: NX1->NX2, BUT NX2.LE.NX1; NX1,NX2=',NX1,NX2
STOP
5 DO 10 J=1,NY
J1=NY+1-J
I1=NX1+1
DO 30 I=I1,NX2
30 B(I,J1)=0.
DO 20 I=1,NX1
20 B(NX11-I,J1)=A(NX11-I,J1)
10 CONTINUE
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE NIPC (A,NX1,NY,B,NX2)
REAL A(NX1,NY) ,B(NX2,NY)
NX11=NX1+1
IF(NX2.LT.NX1)GO TO 5
TYPE *, ' ERR FROM NIPC'
STOP
5 DO 10 J=1,NY
DO 20 I=1,NX2
20 B(I,J)=A(I,J)
10 CONTINUE
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE SET1(A,N,V)
CCC-- ONE DIMENSIONAL CCCCCC
REAL A(N)
DO 10 I=1,N
10 A(I)=V
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE ADD1 (C,A,B,N)
REAL A,B,C
DIMENSION A(N) ,B (N) , C (N)
DO 1 I=1,N
1 C(I)=A(I)+B(I)
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE RBLK1A(B,N) ! TO READ PICS INTO AN ARRAY
REAL B(1)
CHARACTER FNAME*40,TITLE*128
TYPE 1030
1030 FORMAT (/'$ FILE NAME FOR INPUT PIC (R*4) ?-')
ACCEPT 1001,FNAME
1001 FORMAT(A40)
C SUBROUTINE RBLK1 (A,N,NAI,FNAME,TITLE,
LREC)
CALL RBLK1(B,N,NAI,FNAME,TITLE,LREC)
TYPE *,TITLE
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE RBLK1 (A,N,NAI, FNAME, TITLE, LREC)
CCC READING:
CCC READS AN ARRAY OF N WORDS FROM THE DISK, IN THE CONVENT. FORMAT
CCC A(N)-THE ARRAY (N-ON OUTPUT)
CCC FNAME-FILE NAME
CCC TITLE-TITLE TO BE READ FROM THE FIFTH WORD OF THE HEADER.
CCC LREC-LENGTH OF RECORD.
CCC THE FILE IS AN EXISTING (OLD) FILE.
CCCCC THE ROUTINE READS A ONE-RECORD HEADER BEFORE READING THE ARRAY.
CCC THE HEADER: INTEGER - 1-ST WORD - THE NO. OF WORDS IN A
CCC                       2-ND WORD - THE NO. OF RECORDS WRITTEN
CCC THE 5-TH WORD AND ON - THE TITLE IN CHARACTERS
REAL A(1)
INTEGER IHDR(256)
CHARACTER FNAME* (*) ,TITLE* (*) ,TIT1*128
EQUIVALENCE (IHDR(5) ,TIT1)
INQUIRE (FILE=FNAME,RECL=LREC)
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
C TYPE *,'RBLK1: AFTER INQUIRE-LREC=',LREC
LREC=LREC/4
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
OPEN(UNIT=59,FORM='UNFORMATTED',FILE=FNAME,STATUS='OLD',
* RECORDTYPE='FIXED',RECL=LREC)
READ(59) (IHDR(IK) ,IK=1,MIN(256,LREC) )
TITLE=TIT1
N=IHDR(1)
NAI=IHDR(3)
NQ=N/LREC
NR=N-NQ*LREC
DO 10 I=1,NQ
IA=(I-1)*LREC
10 READ (59) (A(IA+IK) ,IK=1,LREC)
IF(NR.GT.0)READ(59) (A(NQ*LREC+IK),
IK=1,NR)
CLOSE(UNIT=59)
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
CCC ROUTINE FOR THE DEBLURRING SYSTEM.
CCC WRITING:
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE WBLK1 (A,N,NAI,FNAME,TITLE,LREC)
CCC WRITES AN ARRAY OF N WORDS*4 ON THE DISK
CCC A(N)-THE ARRAY
CCC FNAME-FILE NAME
CCC TITLE-TITLE TO BE WRITTEN FROM THE FIFTH WORD OF THE HEADER.
CCC LREC-LENGTH OF RECORD.
CCC NAI-DIM OF RELEVANT SYSTEM
CCC THE FILE IS A NEW FILE!
CCCCC THE ROUTINE WRITES A ONE-RECORD HEADER BEFORE WRITING THE ARRAY.
CCC THE HEADER: INTEGER - 1-ST WORD - THE NO. OF WORDS IN A
CCC                       2-ND WORD - THE NO. OF RECORDS WRITTEN
CCC THE 5-TH WORD AND ON - THE TITLE IN CHARACTERS
REAL A(N)
INTEGER IHDR(256)
CHARACTER FNAME*40,TITLE*128,TIT1*128
EQUIVALENCE (IHDR(5) ,TIT1)
TIT1=TITLE
NQ=N/LREC
NR=N-NQ*LREC
IHDR(1)=N
IHDR(2)=NQ+1
IF (NR.GT.0) IHDR(2) =NQ+2
IHDR(3)=NAI
OPEN(UNIT=59,FORM='UNFORMATTED',FILE=FNAME,STATUS='NEW',
* RECORDTYPE='FIXED',RECL=LREC)
WRITE(59) (IHDR(IK) ,IK=1,MIN(256,
LREC) )
DO 10 I=1,NQ
IA=(I-1)*LREC
10 WRITE(59) (A(IA+IK),IK=1,LREC)
IF(NR.GT.0)WRITE(59) (A(NQ*LREC+IK),
IK=1,NR)
CLOSE(UNIT=59)
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
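CCC ILLUSTRATIVE ROUND TRIP (ADDED FOR CLARITY; NOT PART OF THE
CCC ORIGINAL LISTING). FILE NAME, TITLE AND SIZES ARE ASSUMED VALUES:
C CHARACTER FNAME*40, TITLE*128
C REAL A(65536)
C FNAME = 'PIC.DAT'
C TITLE = 'TEST PIC'
C CALL WBLK1 (A, 65536, 256, FNAME, TITLE, 256)
C CALL RBLK1 (A, N, NAI, FNAME, TITLE, LREC)
C AFTER THE READ, N, NAI AND TITLE ARE RECOVERED FROM THE
C ONE-RECORD HEADER DESCRIBED ABOVE.
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC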
ccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE WBLK3A(B,N,IDIM) ! TO WRITE PICS FROM AN ARRAY
REAL B(1)
CHARACTER FNAME*40,TITLE*128
TYPE 1030
1030 FORMAT(/'$ FILE NAME FOR OUTPUT PIC
(R*4) ?-')
ACCEPT 1001,FNAME
1001 FORMAT(A40)
TYPE 1031
1031 FORMAT(/'$ GIVE TITLE - ')
ACCEPT 1002,TITLE
1002 FORMAT(A128)
C SUBROUTINE WBLK1 (A,N,NAI,FNAME,TITLE,
LREC)
LREC=256
CALL WBLK1(B,N, IDIM,FNAME,TITLE,LREC)
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE WBLK2A(B,N) ! TO WRITE PICS FROM AN ARRAY
REAL B(1)
CHARACTER FNAME*40,TITLE*128
TYPE 1030
1030 FORMAT(/'$ FILE NAME FOR OUTPUT PIC (R*4) ?-')
ACCEPT 1001, FNAME
1001 FORMAT (A40)
TYPE 1031
1031 FORMAT(/'$ GIVE TITLE - ')
ACCEPT 1002,TITLE
1002 FORMAT(A128)
TYPE 999
999 FORMAT ('$ GIVE DIMENSION OF SQUARE
PIC - ')
ACCEPT *,IDIM
C SUBROUTINE WBLK1 (A,N,NAI,FNAME,TITLE,
LREC)
LREC=256
CALL WBLK1(B,N,IDIM,FNAME,TITLE,LREC)
RETURN
END CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
SUBROUTINE WBLK1A(B,N) ! TO WRITE PICS FROM AN ARRAY
REAL B(1)
CHARACTER FNAME*40,TITLE*128
TYPE 1030
1030 FORMAT(/'$ FILE NAME FOR OUTPUT PIC
(R*4) ?-')
ACCEPT 1001,FNAME
1001 FORMAT(A40)
TYPE 1031
1031 FORMAT(/'$ GIVE TITLE - ')
ACCEPT 1002,TITLE
1002 FORMAT(A128)
C SUBROUTINE WBLK1 (A,N,NAI,FNAME,TITLE,
LREC)
LREC=256
CALL WBLK1 (B,N,N,FNAME,TITLE,LREC)
RETURN
END
C*TODFFT.FOR************************************
C
C TWO-DIMENSIONAL FOURIER TRANSFORM
C SUBROUTINE FOR IMAGE PROCESSING. DOES
C BOTH FORWARD & INVERSE TRANSFORMS
C USES LYNN TENEYCK'S MIXED-RADIX
C ROUTINES
C THUS THE ONLY RESTRICTION IS THAT THE
C IMAGE SIZE BE AN EVEN NUMBER AND HAVE
C NO PRIME FACTORS LARGER THAN 19 ! !
C
C IDIR = 0 FORWARD TRANSFORM :
C exp(+2PIirs)
C IDIR = 1 INVERSE TRANSFORM :
C exp(-2PIirs)
C IDIR = -1 INVERSE TRANSFORM BUT NO
C COMPLEX CONJUGATE C
C DATA SET UP AS NY ROWS OF NX NUMBERS
C NOTE NX,NY ALWAYS REFER TO THE REAL-
C SPACE IMAGE SIZE
C
C NX,NY IMAGE IS TRANSFORMED IN-PLACE
C INTO NY STRIPS OF NX/2 + 1 COMPLEX
C FOURIER COEFFICIENTS
C THE ORIGIN IS LOCATED AT THE FIRST
C POINT!!!
C
C ARRAY MUST BE DIMENSIONED TO ALLOW FOR
C EXTRA STRIP OF COMPLEX NUMBERS ON
C OUTPUT.
C THUS FOR A 300X400 TRANSFORM, ARRAY
C MUST BE DIMENSIONED:
C REAL ARRAY (302, 400)
C
C A NORMALIZATION FACTOR IS APPLIED
C DURING BOTH TRANSFORMATIONS
C
C VERSION 1.00 OCT 11 1981 DAA
C VERSION 1.02 APR 19 1982 DAA
C VERSION 1.03 NOV 23 1982 DAA
C
SUBROUTINE TODFFT (ARRAY,NX,NY,IDIR)
DIMENSION ARRAY(1),IDIM(5)
NXO2 = NX/2
IF (2*NXO2 .EQ. NX) GOTO 1
WRITE (6, 1000) NX
1000 FORMAT (' TODFFT: NX= ',I7,' MUST BE
EVEN!! !')
STOP
1 NXP2 = NX + 2
NXT = NXP2*NY
NXT1 = NXT - 1
ONEVOL = SQRT(1.0/(NX*NY))
IF (IDIR .NE. 0) GOTO 50
C
C******** FORWARD TRANSFORMS COME HERE ********
C
C SET UP FOR FIRST DIMENSION OF TRANSFORM C
IDIM(1) = NXP2*NY
IDIM(2) = 2
IDIM(3) = IDIM(1)
IDIM(4) = IDIM(1)
IDIM(5) = NXP2
C
CALL REALFT (ARRAY(1),ARRAY(2),NXO2,IDIM)
C
C SET UP FOR SECOND DIMENSION OF TRANSFORM
C
IDIM(2) = NXP2
IDIM(4) = IDIM(2)
IDIM(5) = 2
C
CALL CMPLFT (ARRAY(1),ARRAY(2),NY,IDIM)
C
C TAKE COMPLEX CONJUGATE (TO MAKE PROPER FORWARD TRANSFORM) & SCALE BY 1/VOLUME
C
DO 100 J = 1,NXT1,2
ARRAY (J) = ARRAY(J) *ONEVOL
ARRAY(J+1) = -ARRAY(J+1)*ONEVOL
100 CONTINUE
C
RETURN
C
C********** INVERSE TRANSFORM **************
C
C
C SET UP FOR FIRST DIMENSION OF TRANSFORM
C
50 NXM1 = NX - 1
IDIM(1) = NXP2*NY
IDIM(2) = NXP2
IDIM(3) = IDIM(1)
IDIM(4) = IDIM(2)
IDIM(5) = 2
C
C TAKE COMPLEX CONJUGATE TO DO INVERSE & SCALE BY 1/VOLUME
C
IF (IDIR .EQ. 1) GOTO 60
DO 300 J = 1,NXT1,2
ARRAY(J) = ARRAY(J)*ONEVOL
ARRAY(J+1) = -ARRAY(J+1)*ONEVOL
300 CONTINUE
GOTO 70
C
C IDIR = 1 JUST SCALE BY 1/VOLUME (FOR STANDARD INVERSE TRANSFORM)
C
60 DO 400 J = 1,NXT1,2
ARRAY(J) = ARRAY(J)*ONEVOL
ARRAY(J+1) = ARRAY(J+1)*ONEVOL
400 CONTINUE
C
70 CALL CMPLFT (ARRAY(1),ARRAY(2),NY, IDIM) C
C SET UP FOR SECOND DIMENSION OF TRANSFORM C
IDIM(2) = 2
IDIM(4) = IDIM(1)
IDIM(5) = NXP2
C CHANGE DATA STORAGE MODE; COMPLEX CONJUGATE DONE BY HERMFT
C
INDEX = 2
DO 500 IY = 1,NY
ARRAY(INDEX) = ARRAY(NXM1 + INDEX)
INDEX = INDEX + NXP2
500 CONTINUE
C
CALL HERMFT (ARRAY(1) ,ARRAY(2) ,NXO2, IDIM)
RETURN
END
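C ILLUSTRATIVE USE (ADDED FOR CLARITY; NOT PART OF THE ORIGINAL
C LISTING). THE 128X128 SIZE IS AN ASSUMED VALUE; NOTE THE TWO
C EXTRA REAL LOCATIONS PER ROW FOR THE PACKED COEFFICIENTS:
C REAL PIC(130,128)
C CALL TODFFT (PIC, 128, 128, 0) ! FORWARD: 128 STRIPS OF 65
C                                ! COMPLEX FOURIER COEFFICIENTS
C CALL TODFFT (PIC, 128, 128, 1) ! INVERSE: REAL IMAGE RESTORED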
cccccccccccccccccccccccccccccccccccccccccccccccc
SUBROUTINE REAL FT (EVEN, ODD, N, DIM)
INTEGER N
INTEGER DIM(5)
REAL EVEN(1), ODD(1)
C
C REAL FOURIER TRANSFORM
C
C GIVEN A REAL SEQUENCE OF LENGTH 2N
C THIS SUBROUTINE CALCULATES THE UNIQUE
C PART OF THE FOURIER TRANSFORM. THE
C FOURIER TRANSFORM HAS N + 1 UNIQUE
C REAL PARTS AND N - 1 UNIQUE IMAGINARY
C PARTS. SINCE THE REAL PART AT X (N) IS
C FREQUENTLY OF INTEREST, THIS
C SUBROUTINE STORES IT AT X (N) RATHER
C THAN IN Y(0). THEREFORE X AND Y MUST
C BE OF LENGTH N + 1 INSTEAD OF N. NOTE
C THAT THIS STORAGE ARRANGEMENT IS
C DIFFERENT FROM THAT EMPLOYED BY THE
C HERMITIAN FOURIER TRANSFORM
C SUBROUTINE.
C
C FOR CONVENIENCE THE DATA IS PRESENTED
C IN TWO PARTS, THE FIRST CONTAINING THE
C EVEN NUMBERED REAL TERMS AND THE
C SECOND CONTAINING THE ODD NUMBERED
C TERMS (NUMBERING STARTING AT 0). ON
C RETURN THE REAL PART OF THE TRANSFORM
C REPLACES THE EVEN TERMS AND THE
C IMAGINARY PART OF THE TRANSFORM
C REPLACES THE ODD TERMS.
C
REAL A, B, C, D, E, F, ANGLE, CO, SI, TWO PI, TWO N
INTEGER NT, D2, D3, D4, D5
INTEGER I, J, K, L, N OVER 2, I0, I1, I2
C
TWO PI = 6.283185
TWO N = FLOAT(2*N)
C
CALL CMPL FT (EVEN, ODD, N, DIM)
C
NT = DIM(1)
D2 = DIM(2)
D3 = DIM(3)
D4 = DIM(4)
D5 = DIM(5)
N OVER 2 = N/2 + 1
C
IF (N OVER 2 .LT. 2) GO TO 400
DO 300 I = 2, N OVER 2
ANGLE = TWO PI*FLOAT(I-1)/TWO N
CO = COS (ANGLE)
SI = SIN(ANGLE)
I0 = (I - 1)*D2 + 1
J = (N + 2 - 2*I)*D2
DO 200 I1 = I0, NT, D3
I2 = I1 + D4
DO 100 K = I1, I2, D5
L = K + J
A = (EVEN(L) + EVEN(K))/2.0
C = (EVEN(L) - EVEN(K))/2.0
B = (ODD(L) + ODD(K))/2.0
D = (ODD(L) - ODD(K))/2.0
E = C*SI + B*CO
F = C*CO - B*SI
EVEN(K) = A + E
EVEN(L) = A - E
ODD(K) = F - D
ODD(L) = F + D
100 CONTINUE
200 CONTINUE
300 CONTINUE
C
400 CONTINUE
IF (N .LT. 1) GO TO 600
J = N*D2
DO 500 I1 = 1, NT, D3
I2 = I1 + D4
DO 500 K = I1, I2, D5
L = K + J
EVEN(L) = EVEN(K) - ODD (K)
ODD(L) = 0.0
EVEN(K) = EVEN(K) + ODD (K)
ODD(K) = 0.0
500 CONTINUE
C
600 CONTINUE
RETURN
C
END
cccccccccccccccccccccccccccccccccccccccccccccccc
SUBROUTINE HERM FT(X, Y, N, DIM)
INTEGER N
INTEGER DIM(5)
REAL X(1), Y(1)
C
C HERMITIAN SYMMETRIC FOURIER TRANSFORM
C
C GIVEN THE UNIQUE TERMS OF A HERMITIAN
C SYMMETRIC SEQUENCE OF LENGTH 2N THIS
C SUBROUTINE CALCULATES THE 2N REAL
C NUMBERS WHICH ARE ITS FOURIER
C TRANSFORM. THE EVEN NUMBERED ELEMENTS
C OF THE TRANSFORM (0, 2, 4, . . .,
C 2N-2) ARE RETURNED IN X AND THE ODD
C NUMBERED ELEMENTS (1, 3, 5, . . .,
C 2N-1) IN Y.
C
C A FINITE HERMITIAN SEQUENCE OF LENGTH
C 2N CONTAINS N + 1 UNIQUE REAL NUMBERS
C AND N - 1 UNIQUE IMAGINARY NUMBERS.
C FOR CONVENIENCE THE REAL VALUE FOR
C X(N) IS STORED AT Y(0).
REAL A, B, C, D, E, F, ANGLE, CO, SI,
TWO N, TWO PI
INTEGER NT, D2 , D3, D4, D5
INTEGER I, J, K, N OVER 2, I0, I1, I2
INTEGER K1
TWO PI = 6.283185
TWO N = FLOAT(2*N)
NT = DIM(1)
D2 = DIM(2)
D3 = DIM(3)
D4 = DIM(4) - 1
D5 = DIM(5)
DO 100 I0 = 1, NT, D3
I1 = I0 + D4
DO 100 I = I0, I1, D5
A = X(I)
B = Y (I)
X(I) = A + B
Y(I) = A - B
100 CONTINUE
N OVER 2 = N/2 + 1
IF (N OVER 2 .LT. 2) GO TO 500
DO 400 I0 = 2, N OVER 2
ANGLE = TWO PI*FLOAT(I0-1)/TWO N
CO = COS (ANGLE)
SI = SIN(ANGLE)
K = (N + 2 - 2*I0) *D2
K1 = (I0 - 1)*D2 + 1
DO 300 I1 = K1, NT, D3
I2 = I1 + D4
DO 200 I = I1, I2, D5
J = I + K
A = X(I) + X(J)
B = X(I) - X(J)
C = Y(I) + Y(J)
D = Y(I) - Y(J)
E = B*CO + C*SI
F = B*SI - C*CO
X(I) = A + F
X(J) = A - F
Y(I) = E + D
Y(J) = E - D
200 CONTINUE
300 CONTINUE
400 CONTINUE
500 CONTINUE
CALL CMPL FT (X, Y, N, DIM)
C
RETURN
END
cccccccccccccccccccccccccccccccccccccccccccccccc
SUBROUTINE CMPL FT (X, Y, N, D)
INTEGER N
INTEGER D(5)
REAL X(1), Y(1)
C
C COMPLEX FINITE DISCRETE FOURIER
C TRANSFORM TRANSFORMS ONE DIMENSION OF
C MULTI-DIMENSIONAL DATA MODIFIED BY L.
C F. TEN EYCK FROM A ONE-DIMENSIONAL
C VERSION WRITTEN BY G. T. SANDE, 1969.
C
C THIS PROGRAM CALCULATES THE TRANSFORM
C (X(T) + I*Y(T))*(COS(2*PI*T/N) -
C I*SIN(2*PI*T/N))
C
C INDEXING -- THE ARRANGEMENT OF THE
C MULTI-DIMENSIONAL DATA IS SPECIFIED BY
C THE INTEGER ARRAY D, THE VALUES OF
C WHICH ARE USED AS CONTROL PARAMETERS
C IN DO LOOPS. WHEN IT IS DESIRED TO
C COVER ALL ELEMENTS OF THE DATA FOR
C WHICH THE SUBSCRIPT BEING TRANSFORMED
C HAS THE VALUE I0, THE FOLLOWING IS
C USED.
C
C I1 = (I0 - 1)*D(2) + 1
C DO 100 I2 = I1, D(1), D(3)
C I3 = I2 + D(4) - 1
C DO 100 I = I2, I3, D(5)
C .
C .
C 100 CONTINUE
C
C WITH THIS INDEXING IT IS POSSIBLE TO
C USE A NUMBER OF ARRANGEMENTS OF THE
C DATA, INCLUDING NORMAL FORTRAN COMPLEX
C NUMBERS (D(5) = 2) OR SEPARATE STORAGE
C OF REAL AND IMAGINARY PARTS.
C
LOGICAL ERROR
INTEGER P MAX, P SYM, TWO GRP
INTEGER FACTOR(15), SYM(15), UN SYM(15)
C
C P MAX IS THE LARGEST PRIME FACTOR THAT
C WILL BE TOLERATED BY THIS PROGRAM.
C TWO GRP IS THE LARGEST POWER OF TWO
C THAT IS TREATED AS A SPECIAL CASE.
C
P MAX = 19
TWO GRP = 8
C
IF (N .LE. 1) GO TO 100
CALL S R FP (N, P MAX, TWO GRP,
FACTOR, SYM, P SYM, UN SYM,
ERROR)
IF (ERROR) GO TO 200
C
CALL MD FT KD (N, FACTOR, D, X, Y)
CALL D IP RP (N, SYM, P SYM, UN SYM, D, X, Y)
C
100 CONTINUE
RETURN
C
200 CONTINUE
WRITE (6, 300) N
STOP
C
300 FORMAT (43H0INVALID NUMBER OF POINTS FOR CMPL FT. N =, I10//)
C
END
cccccccccccccccccccccccccccccccccccccccccccccccc
SUBROUTINE S R FP (PTS, PMAX, TWO GRP, FACTOR, SYM, P SYM, UN SYM, ERROR)
C SYMMETRIZED REORDERING FACTORING PROGRAMME
C
LOGICAL ERROR
INTEGER PTS, PMAX, TWO GRP, P SYM
INTEGER FACTOR(10), SYM(10), UN SYM(10)
C
INTEGER PP(14), QQ (7)
INTEGER F, J, JJ, N, NEST, P, P TWO, Q, R
C
NEST=14
C
N=PTS
P SYM=1
F=2
P=0
Q=0
100 CONTINUE
IF (N.LE.1) GO TO 500
DO 200 J=F,PMAX
IF (N.EQ.(N/J)*J) GO TO 300
200 CONTINUE
GO TO 1400
300 CONTINUE
IF (2*P+Q.GE.NEST) GO TO 1600
F=J
N=N/F
IF (N.EQ. (N/F)*F) GO TO 400
Q=Q+1
QQ(Q)=F
GO TO 100
400 CONTINUE
N=N/F
P=P+1
PP(P)=F
P SYM=P SYM*F
GO TO 100
C
500 CONTINUE
R=1
IF (Q.EQ.0) R=0
IF (P.LT.1) GO TO 700
DO 600 J=1,P
JJ=P+1-J
SYM(J)=PP(JJ)
FACTOR (J)=PP(JJ)
JJ=P+Q+J
FACTOR (JJ) =PP (J)
JJ=P+R+J
SYM(JJ)=PP(J)
600 CONTINUE
700 CONTINUE
IF (Q.LT.1) GO TO 900
DO 800 J=1,Q
JJ=P+J
UN SYM(J)=QQ(J)
FACTOR (JJ) =QQ (J)
800 CONTINUE
SYM(P+1)=PTS/P SYM**2
900 CONTINUE
JJ=2*P+Q
FACTOR(JJ+1)=0
P TWO=1
J=0
1000 CONTINUE
J=J+1
IF (FACTOR(J).EQ.0) GO TO 1200
IF (FACTOR(J) .NE.2) GO TO 1000
P TWO=P TWO*2
FACTOR(J)=1
IF (P TWO.GE.TWO GRP) GO TO 1100
IF (FACTOR (J+1) .EQ.2) GO TO 1000
1100 CONTINUE
FACTOR(J)=P TWO
P TWO=1
GO TO 1000
1200 CONTINUE
IF (P.EQ.0) R=0
JJ=2*P+R
SYM(JJ+1)=0
IF (Q.LE.1) Q=0
UN SYM(Q+1)=0
ERROR=.FALSE.
C
1300 CONTINUE
RETURN
C
1400 CONTINUE
WRITE (6,1500) PMAX,PTS
ERROR=.TRUE.
GO TO 1300
1500 FORMAT (24H0LARGEST FACTOR EXCEEDS ,I3,7H. N = ,I6,1H.)
1600 CONTINUE
WRITE (6,1700) NEST,PTS
ERROR=.TRUE.
GO TO 1300
1700 FORMAT (22H0FACTOR COUNT EXCEEDS ,I3,7H. N = ,I6,1H.)
END
SUBROUTINE D IP RP (PTS, SYM, P SYM, UN SYM, DIM, X, Y)
C DOUBLE IN PLACE REORDERING PROGRAMME
C
INTEGER PTS,P SYM
REAL X (10), Y (10)
INTEGER SYM(10), UN SYM(10) , DIM(5)
REAL T
LOGICAL ONE MOD
INTEGER MODULO (14)
INTEGER DK, JJ, KK, LK, MODS, MULT, NEST, P UN SYM, TEST
INTEGER NT, SEP, DELTA, P, P0, P1, P2, P3, P4, P5, SIZE
INTEGER S(14), U(14)
INTEGER A,B,C,D,E,F,G,H,I,J,K,L,M,N
INTEGER BS,CS,DS,ES,FS,GS,HS,IS,JS,KS,LS,MS,NS
INTEGER AL,BL,CL,DL,EL,FL,GL,HL,IL,JL,KL,LL,ML,NL
EQUIVALENCE (AL,U(1)),
(BS,S(2)), (BL,U(2))
EQUIVALENCE (CS, S (3)), (CL,U(3)),
(DS,S(4)), (DL,U(4))
EQUIVALENCE (ES,S (5)), (EL,U(5)),
(FS,S(6)), (FL,U(6))
EQUIVALENCE (GS, S (7)), (GL,U(7)),
(HS,S(8)), (HL,U(8))
EQUIVALENCE (IS,S (9)), (IL,U(9)),
(JS,S(10)),(JL,U(10))
EQUIVALENCE (KS,S (11)), (KL,U(11)),
(LS,S(12)), (LL,U(12))
EQUIVALENCE (MS, S (13)), (ML,U(13)),
(NS,S(14)), (NL,U(14))
C
NEST=14
C
NT = DIM(1)
SEP = DIM(2)
P2 = DIM(3)
SIZE = DIM(4) - 1
P4 = DIM(5)
IF (SYM(1) .EQ.0) GO TO 500
DO 100 J=1,NEST
U(J)=1
S (J) =1
100 CONTINUE
N=PTS
DO 200 J=1,NEST
IF (SYM(J) .EQ.0) GO TO 300
JJ=NEST+1-J
U(JJ)=N
S(JJ)=N/SYM(J)
N=N/SYM(J)
200 CONTINUE
300 CONTINUE
C
JJ=0
DO 400 A=1,AL
DO 400 B=A,BL,BS
DO 400 C=B,CL,CS
DO 400 D=C,DL,DS
DO 400 E=D,EL,ES
DO 400 F=E,FL,FS
DO 400 G=F,GL,GS
DO 400 H=G,HL,HS
DO 400 I=H,IL,IS
DO 400 J=I,JL,JS
DO 400 K=J,KL,KS
DO 400 L=K,LL,LS
DO 400 M=L,ML,MS
DO 400 N=M,NL,NS
JJ=JJ+1
IF (JJ.GE.N) GO TO 400
DELTA = (N-JJ)*SEP
P1 = (JJ-1)*SEP + 1
DO 350 P0 = P1, NT, P2
P3 = P0 + SIZE
DO 350 P = P0, P3, P4
P5 = P + DELTA
T = X(P)
X(P) = X(P5)
X(P5) = T
T = Y(P)
Y(P) = Y(P5)
Y(P5) = T
350 CONTINUE
400 CONTINUE
C
500 CONTINUE
IF (UN SYM(1).EQ.0) GO TO 1900
P UN SYM=PTS/P SYM**2
MULT=P UN SYM/UN SYM(1)
TEST=(UN SYM(1)*UN SYM(2) -1) *MULT*P SYM
LK=MULT
DK=MULT
DO 600 K=2,NEST
IF (UN SYM(K).EQ.0) GO TO 700
LK=LK*UN SYM(K-1)
DK=DK/UN SYM(K)
U(K) = (LK-DK)*P SYM
MODS=K
600 CONTINUE
700 CONTINUE
ONE MOD=MODS.LT.3
IF (ONE MOD) GO TO 900
DO 800 J=3,MODS
JJ=MODS+3-J
MODULO (JJ)=U(J)
800 CONTINUE
900 CONTINUE
MODULO(2)=U(2)
JL=(P UN SYM-3)*P SYM
MS=P UN SYM*P SYM
C
DO 1800 J=P SYM,JL,P SYM
K=J
C
1000 CONTINUE
K=K*MULT
IF (ONE MOD) GO TO 1200
DO 1100 I=3,MODS
K=K- (K/MODULO (I) ) *MODULO (I)
1100 CONTINUE
1200 CONTINUE
IF (K.GE.TEST) GO TO 1300
K=K- (K/MODULO (2) ) *MODULO (2)
GO TO 1400
1300 CONTINUE
K=K-(K/MODULO(2))*MODULO(2)+MODULO(2)
1400 CONTINUE
IF (K.LT.J) GO TO 1000
C
IF (K.EQ.J) GO TO 1700
DELTA = (K-J)*SEP
DO 1600 L=1,P SYM
DO 1500 M=L,PTS,MS
P1 = (M+J-1)*SEP + 1
DO 1500 P0 = P1, NT, P2
P3 = P0 + SIZE
DO 1500 JJ = P0, P3, P4
KK = JJ + DELTA
T=X(JJ)
X(JJ)=X(KK)
X (KK) =T
T=Y(JJ)
Y(JJ)=Y(KK)
Y(KK)=T
1500 CONTINUE
1600 CONTINUE
1700 CONTINUE
1800 CONTINUE
C
1900 CONTINUE
RETURN
END
SUBROUTINE MD FT KD (N, FACTOR, DIM, X, Y)
C MULTI-DIMENSIONAL COMPLEX FOURIER TRANSFORM KERNEL DRIVER
C
INTEGER N
INTEGER FACTOR (10), DIM(5)
REAL X(10), Y(10)
C
INTEGER F, M, P, R, S
C
S = DIM(2)
F = 0
M = N
100 CONTINUE
F = F + 1
P = FACTOR(F)
IF (P.EQ.0) RETURN
M = M/P
R = M*S
IF (P.GT.8) GO TO 700
GO TO (100, 200, 300, 400, 500, 800, 700, 600), P
GO TO 800
C
200 CONTINUE
CALL R2 CFTK (N, M, X(1), Y(1),
X(R+1), Y(R+1), DIM)
GO TO 100
C
300 CONTINUE
CALL R3 CFTK (N, M, X(1), Y(1),
X(R+1), Y(R+1), X(2*R+1),
Y(2*R+1)
., DIM)
GO TO 100
C
400 CONTINUE
CALL R4 CFTK (N, M, X(1), Y(1),
X(R+1), Y(R+1), X(2*R+1),
Y(2*R+1)
., X(3*R+1), Y(3*R+1), DIM)
GO TO 100
C
500 CONTINUE
CALL R5 CFTK (N, M, X(1), Y(1),
X(R+1), Y(R+1), X(2*R+1),
Y(2*R+1)
., X(3*R+1), Y(3*R+1), X(4*R+1),
Y(4*R+1), DIM)
GO TO 100
C
600 CONTINUE
CALL R8 CFTK (N, M, X(1), Y(1),
X(R+1), Y(R+1), X(2*R+1),
Y(2*R+1)
., X(3*R+1), Y(3*R+1), X(4*R+1),
Y(4*R+1), X(5*R+1), Y(5*R+1),
. X(6*R+1), Y(6*R+1), X(7*R+1),
Y(7*R+1), DIM)
GO TO 100
C
700 CONTINUE
CALL RP CFTK (N, M, P, R, X, Y, DIM)
GO TO 100
C
800 CONTINUE
WRITE (6, 900)
RETURN
900 FORMAT (34H0TRANSFER ERROR DETECTED IN MDFTKD,//)
END
cccccccccccccccccccccccccccccccccccccccccccccccc
SUBROUTINE R2 CFTK (N, M, X0, Y0, X1, Y1, DIM)
C RADIX 2 MULTI-DIMENSIONAL COMPLEX
FOURIER TRANSFORM KERNEL C
INTEGER N, M, DIM(5)
REAL X0(10), Y0(10), X1(10), Y1(10)
C
LOGICAL FOLD, ZERO
INTEGER J,K,K0,M2,M OVER 2
INTEGER K1, K2, KK, L, L1, MM2, NT,
SIZE, SEP
INTEGER NS
REAL ANGLE, C, IS, IU, RS, RU, S, TWO PI
REAL FJM1, FM2
DATA TWO PI/6.283185/
C
NT = DIM(1)
SEP = DIM(2)
L1 = DIM(3)
SIZE = DIM(4) - 1
K2 = DIM(5)
NS = N*SEP
M2=M*2
FM2 = FLOAT(M2)
M OVER 2=M/2+1
MM2 = SEP*M2
C
FJM1 = -1.0
DO 600 J=1,M OVER 2
FOLD=J.GT.1 .AND. 2*J.LT.M+2
K0 = (J-1)*SEP + 1
FJM1 = FJM1 + 1.0
ANGLE = TWO PI*FJM1/FM2
ZERO=ANGLE.EQ.0.0
IF (ZERO) GO TO 200
C=COS (ANGLE)
S=SIN(ANGLE)
GO TO 200
100 CONTINUE
FOLD=.FALSE.
K0 = (M+1-J)*SEP + 1
C=-C
200 CONTINUE
C
DO 500 KK = K0, NS, MM2
DO 440 L = KK, NT, L1
K1 = L + SIZE
DO 420 K = L, K1, K2
RS=X0 (K) +X1 (K)
IS=Y0 (K) +Y1 (K)
RU=X0 (K) -X1 (K)
IU=Y0 (K) -Y1 (K)
X0 (K) =RS
Y0 (K) =IS
IF (ZERO) GO TO 300
X1 (K) =RU*C+IU*S
Y1(K)=IU*C-RU*S
GO TO 400
300 CONTINUE
X1 (K) =RU
Y1 (K) =IU
400 CONTINUE
420 CONTINUE
440 CONTINUE
500 CONTINUE
IF (FOLD) GO TO 100
600 CONTINUE
C
RETURN
END
SUBROUTINE R3 CFTK (N, M, X0, Y0, X1, Y1, X2, Y2, DIM)
C RADIX 3 MULTI-DIMENSIONAL COMPLEX FOURIER TRANSFORM KERNEL
C
INTEGER N, M, DIM(5)
REAL X0(10), Y0(10), X1(10), Y1(10), X2(10), Y2(10)
C
LOGICAL FOLD, ZERO
INTEGER J,K,K0,M3,M OVER 2
INTEGER K1, K2 , KK, L, L1, MM3 , NT,
SIZE, SEP
INTEGER NS
REAL ANGLE, A, B, C1, C2, S1, S2, T, TWO PI
REAL I0, I1, I2, IA, IB, IS, R0, R1, R2, RA, RB, RS
REAL FJM1, FM3
DATA TWO PI/6.283185/, A/-0.5/, B/0.86602540/
C
NT = DIM(1)
SEP = DIM(2)
L1 = DIM(3)
SIZE = DIM(4) - 1
K2 = DIM(5)
NS = N*SEP
M3=M*3
FM3 = FLOAT(M3)
MM3 = SEP*M3
M OVER 2=M/2+1
C
FJM1 = -1.0
DO 600 J=1,M OVER 2
FOLD=J.GT.1 .AND. 2*J.LT.M+2
K0 = (J-1)*SEP + 1
FJM1 = FJM1 + 1.0
ANGLE = TWO PI*FJM1/FM3
ZERO=ANGLE.EQ.0.0
IF (ZERO) GO TO 200
C1=COS (ANGLE)
S1=SIN(ANGLE)
C2=C1*C1-S1*S1
S2=S1*C1+C1*S1
GO TO 200
100 CONTINUE
FOLD= .FALSE.
K0 = (M+1-J)*SEP + 1
T=C1*A+S1*B
S1=C1*B-S1*A
C1=T
T=C2*A-S2*B
S2=-C2*B-S2*A
C2=T
200 CONTINUE
C
DO 500 KK = K0, NS, MM3
DO 440 L = KK, NT, L1
K1 = L + SIZE
DO 420 K = L, K1, K2
R0=X0 (K)
I0=Y0 (K)
RS=X1 (K) +X2 (K)
IS=Y1 (K) +Y2 (K)
X0 (K) =R0+RS
Y0 (K) =I0+IS
RA=R0+RS*A
IA=I0+IS*A
RB= (X1 (K) -X2 (K) ) *B
IB= (Y1 (K) -Y2 (K) ) *B
IF (ZERO) GO TO 300
R1=RA+IB
I1=IA-RB
R2=RA-IB
I2=IA+RB
X1 (K) =R1*C1+I1*S1
Y1(K)=I1*C1-R1*S1
X2(K)=R2*C2+I2*S2
Y2(K)=I2*C2-R2*S2
GO TO 400
300 CONTINUE
X1 (K) =RA+IB
Y1 (K) =IA-RB
X2(K)=RA-IB
Y2 (K) =IA+RB
400 CONTINUE
420 CONTINUE
440 CONTINUE
500 CONTINUE
IF (FOLD) GO TO 100
600 CONTINUE
C
RETURN
END
SUBROUTINE R4 CFTK (N, M, X0, Y0, X1, Y1, X2, Y2, X3, Y3, DIM)
C RADIX 4 MULTI-DIMENSIONAL COMPLEX FOURIER TRANSFORM KERNEL
C
INTEGER N, M, DIM(5)
REAL X0 (10), Y0 (10), X1 (10), Y1 (10)
REAL X2 (10), Y2 (10), X3 (10), Y3 (10)
C
LOGICAL FOLD, ZERO
INTEGER J,K,K0,M4,M OVER 2
INTEGER K1, K2, KK, L, L1, MM4, NT, SIZE, SEP
INTEGER NS
REAL ANGLE, C1, C2, C3, S1, S2, S3, T, TWOPI
REAL I1, I2, I3, IS0, IS1, IU0, IU1, R1, R2, R3, RS0, RS1, RU0, RU1
REAL FJM1, FM4
DATA TWO PI/6.283185/
C
NT = DIM(1)
SEP = DIM(2)
L1 = DIM(3)
SIZE = DIM(4) - 1
K2 = DIM(5)
NS = N*SEP
M4=M*4
FM4 = FLOAT (M4)
MM4 = SEP*M4
M OVER 2=M/2+1
C
FJM1 = -1.0
DO 600 J=1,M OVER 2
FOLD=J.GT.1 .AND. 2*J.LT.M+2
K0 = (J-1)*SEP + 1
FJM1 = FJM1 + 1.0
ANGLE = TWO PI*FJM1/FM4
ZERO=ANGLE.EQ.0.0
IF (ZERO) GO TO 200
C1=COS (ANGLE)
S1=SIN(ANGLE)
C2=C1*C1-S1*S1
S2=S1*C1+C1*S1
C3=C2*C1-S2*S1
S3=S2*C1+C2*S1
GO TO 200
100 CONTINUE
FOLD=.FALSE.
K0 = (M+1-J)*SEP + 1
T=C1
C1=S1
S1=T
C2=-C2
T=C3
C3=-S3
S3=-T
200 CONTINUE
C
DO 500 KK = K0, NS, MM4
DO 440 L = KK, NT, L1
K1 = L + SIZE
DO 420 K = L, K1, K2
RS0=X0 (K) +X2 (K)
IS0=Y0 (K) +Y2 (K)
RU0=X0(K)-X2(K)
IU0=Y0 (K) -Y2 (K)
RS1=X1 (K) +X3 (K)
IS1=Y1 (K) +Y3 (K)
RU1=X1 (K) -X3 (K)
IU1=Y1 (K) -Y3 (K)
X0 (K) =RS0+RS1
Y0(K)=IS0+IS1
IF (ZERO) GO TO 300
R1=RU0+IU1
I1=IU0-RU1
R2=RS0-RS1
I2=IS0-IS1
R3=RU0-IU1
I3=IU0+RU1
X2 (K) =R1*C1+I1*S1
Y2(K)=I1*C1-R1*S1
X1(K)=R2*C2+I2*S2
Y1(K)=I2*C2-R2*S2
X3(K)=R3*C3+I3*S3
Y3(K)=I3*C3-R3*S3
GO TO 400
300 CONTINUE
X2 (K) =RU0+IU1
Y2(K)=IU0-RU1
X1(K)=RS0-RS1
Y1(K)=IS0-IS1
X3(K)=RU0-IU1
Y3(K)=IU0+RU1
400 CONTINUE
420 CONTINUE
440 CONTINUE
500 CONTINUE
IF (FOLD) GO TO 100
600 CONTINUE
C
RETURN
END
SUBROUTINE R5 CFTK (N, M, X0, Y0, X1, Y1, X2, Y2, X3, Y3, X4, Y4,
. DIM)
C RADIX 5 MULTI-DIMENSIONAL COMPLEX FOURIER TRANSFORM KERNEL
C
INTEGER N, M, DIM(5)
REAL X0(10), Y0(10), X1(10), Y1(10), X2(10), Y2(10)
REAL X3(10), Y3(10), X4(10), Y4(10)
C
LOGICAL FOLD, ZERO
INTEGER J,K,K0,M5,M OVER 2
INTEGER K1, K2 , KK, L, L1, MM5, NT,
SIZE, SEP
INTEGER NS
REAL ANGLE, A1, A2, B1, B2, C1, C2, C3, C4, S1, S2, S3, S4, T, TWO PI
REAL R0, R1, R2, R3, R4, RA1, RA2, RB1, RB2, RS1, RS2, RU1, RU2
REAL I0, I1, I2, I3, I4, IA1, IA2, IB1, IB2, IS1, IS2, IU1, IU2
REAL FJM1, FM5
DATA TWO PI/6.283185/, A1/0.30901699/, B1/0.95105652/,
. A2/-0.80901699/, B2/0.58778525/
C
NT = DIM(1)
SEP = DIM(2)
L1 = DIM(3)
SIZE = DIM(4) - 1
K2 = DIM(5)
NS = N*SEP
M5=M*5
FM5 = FLOAT(M5)
MM5 = SEP*M5
M OVER 2=M/2+1
C
FJM1 = -1.0
DO 600 J=1,M OVER 2
FOLD=J.GT.1 .AND. 2*J.LT.M+2
K0 = (J-1)*SEP + 1
FJM1 = FJM1 + 1.0
ANGLE = TWO PI*FJM1/FM5
ZERO=ANGLE.EQ.0.0
IF (ZERO) GO TO 200
C1=COS (ANGLE)
S1=SIN(ANGLE)
C2=C1*C1-S1*S1
S2=S1*C1+C1*S1
C3=C2*C1-S2*S1
S3=S2*C1+C2*S1
C4=C2*C2-S2*S2
S4=S2*C2+C2*S2
GO TO 200
100 CONTINUE
FOLD=.FALSE.
K0 = (M+1-J)*SEP + 1
T=C1*A1+S1*B1
S1=C1*B1-S1*A1
C1=T
T=C2*A2+S2*B2
S2=C2*B2-S2*A2
C2=T
T=C3*A2-S3*B2
S3=-C3*B2-S3*A2
C3=T
T=C4*A1-S4*B1
S4=-C4*B1-S4*A1
C4=T
200 CONTINUE
C
DO 500 KK = K0, NS, MM5
DO 440 L = KK, NT, L1
K1 = L + SIZE
DO 420 K = L, K1, K2
R0=X0(K)
I0=Y0 (K)
RS1=X1 (K) +X4 (K)
IS1=Y1 (K) +Y4 (K)
RU1=X1 (K) -X4 (K)
IU1=Y1 (K) -Y4 (K)
RS2=X2 (K) +X3 (K)
IS2=Y2 (K) +Y3 (K)
RU2=X2 (K) -X3 (K)
IU2=Y2 (K) -Y3 (K)
X0(K)=R0+RS1+RS2
Y0(K)=I0+IS1+IS2
RA1=R0+RS1*A1+RS2*A2
IA1=I0+IS1*A1+IS2*A2
RA2=R0+RS1*A2+RS2*A1
IA2=I0+IS1*A2+IS2*A1
RB1=RU1*B1+RU2*B2
IB1=IU1*B1+IU2*B2
RB2=RU1*B2-RU2*B1
IB2=IU1*B2-IU2*B1
IF (ZERO) GO TO 300
R1=RA1+IB1
I1=IA1-RB1
R2=RA2+IB2
I2=IA2-RB2
R3=RA2-IB2
I3=IA2+RB2
R4=RA1-IB1
I4=IA1+RB1
X1(K)=R1*C1+I1*S1
Y1(K)=I1*C1-R1*S1
X2(K)=R2*C2+I2*S2
Y2(K)=I2*C2-R2*S2
X3(K)=R3*C3+I3*S3
Y3 (K)=I3*C3-R3*S3
X4 (K) =R4*C4+I4*S4
Y4(K)=I4*C4-R4*S4
GO TO 400
300 CONTINUE
X1(K)=RA1+IB1
Y1(K)=IA1-RB1
X2(K)=RA2+IB2
Y2(K)=IA2-RB2
X3(K)=RA2-IB2
Y3 (K) =IA2+RB2
X4(K)=RA1-IB1
Y4 (K) =IA1+RB1
400 CONTINUE
420 CONTINUE
440 CONTINUE
500 CONTINUE
IF (FOLD) GO TO 100
600 CONTINUE
C
RETURN
END
SUBROUTINE R8 CFTK (N, M, X0, Y0, X1, Y1, X2, Y2, X3, Y3, X4, Y4,
. X5, Y5, X6, Y6, X7, Y7, DIM)
C RADIX 8 MULTI-DIMENSIONAL COMPLEX FOURIER TRANSFORM KERNEL
C
INTEGER N, M, DIM(5)
REAL X0 (10), Y0 (10), X1 (10), Y1
(10), X2 (10), Y2 (10)
REAL X3 (10), Y3 (10), X4 (10), Y4
(10), X5 (10), Y5 (10)
REAL X6 (10), Y6 (10), X7 (10), Y7
(10)
C
LOGICAL FOLD, ZERO
INTEGER J,K,K0,M8,M OVER 2
INTEGER K1, K2, KK, L, L1, MM8, NT,
SIZE, SEP
INTEGER NS
REAL ANGLE, C1, C2, C3, C4, C5, C6, C7, E, S1, S2, S3, S4, S5, S6, S7, T, TWO PI
REAL R1, R2, R3, R4, R5, R6, R7, RS0, RS1, RS2, RS3, RU0, RU1, RU2, RU3
REAL I1, I2, I3, I4, I5, I6, I7, IS0, IS1, IS2, IS3, IU0, IU1, IU2, IU3
REAL RSS0, RSS1, RSU0, RSU1, RUS0, RUS1, RUU0, RUU1
REAL ISS0, ISS1, ISU0, ISU1, IUS0, IUS1, IUU0, IUU1
REAL FJM1, FM8
DATA TWO PI/6.283185/, E/0.70710678/
C
NT = DIM(1)
SEP = DIM(2)
L1 = DIM(3)
SIZE = DIM(4) - 1
K2 = DIM(5)
NS = N*SEP
M8=M*8
FM8 = FLOAT(M8)
MM8 = SEP*M8
M OVER 2=M/2+1
C
FJM1 = -1.0
DO 600 J=1,M OVER 2
FOLD=J.GT.1 .AND. 2*J.LT.M+2
K0 = (J-1)*SEP + 1
FJM1 = FJM1 + 1.0
ANGLE = TWO PI*FJM1/FM8
ZERO=ANGLE.EQ.0.0
IF (ZERO) GO TO 200
C1=COS (ANGLE)
S1=SIN(ANGLE)
C2=C1*C1-S1*S1
S2=S1*C1+C1*S1
C3=C2*C1-S2*S1
S3=S2*C1+C2*S1
C4=C2*C2-S2*S2
S4=S2*C2+C2*S2
C5=C4*C1-S4*S1
S5=S4*C1+C4*S1
C6=C4*C2-S4*S2
S6=S4*C2+C4*S2
C7=C4*C3-S4*S3
S7=S4*C3+C4*S3
GO TO 200
100 CONTINUE
FOLD= .FALSE.
K0 = (M+1-J)*SEP + 1
T= (C1+S1) *E
S1=(C1-S1)*E
C1=T
T=S2
S2=C2
C2=T
T=(-C3+S3)*E
S3=(C3+S3)*E
C3=T
C4=-C4
T=-(C5+S5)*E
S5=(-C5+S5)*E
C5=T
T=-S6
S6=-C6
C6=T
T=(C7-S7)*E
S7=-(C7+S7)*E
C7=T
200 CONTINUE
C
DO 500 KK = K0, NS, MM8
DO 440 L = KK, NT, L1
K1 = L + SIZE
DO 420 K = L, K1, K2
RS0=X0(K)+X4(K)
IS0=Y0 (K) +Y4 (K) RU0=X0 (K) -X4 (K)
IU0=Y0 (K) -Y4 (K)
RS1=X1 (K) +X5 (K)
IS1=Y1 (K) +Y5 (K)
RU1=X1 (K) -X5 (K)
IU1=Y1 (K) -Y5 (K)
RS2=X2(K)+X6(K)
IS2=Y2(K)+Y6(K)
RU2=X2(K)-X6(K)
IU2=Y2 (K) -Y6 (K)
RS3=X3(K)+X7(K)
IS3=Y3(K)+Y7(K)
RU3=X3 (K) -X7 (K)
IU3=Y3(K) -Y7(K)
RSS0=RS0+RS2
ISS0=IS0+IS2
RSU0=RS0-RS2
ISU0=IS0-IS2
RSS1=RS1+RS3
ISS1=IS1+IS3
RSU1=RS1-RS3
ISU1=IS1-IS3
RUS0=RU0-IU2
IUS0=IU0+RU2
RUU0=RU0+IU2
IUU0=IU0-RU2
RUS1=RU1-IU3
IUS1=IU1+RU3
RUU1=RU1+IU3
IUU1=IU1-RU3
T=(RUS1+IUS1)*E
IUS1= (IUS1-RUS1) *E
RUS1=T
T=(RUU1+IUU1)*E
IUU1= (IUU1-RUU1) *E
RUU1=T
X0 (K) =RSS0+RSS1
Y0(K)=ISS0+ISS1
IF (ZERO) GO TO 300
R1=RUU0+RUU1
I1=IUU0+IUU1
R2=RSU0+ISU1
I2=ISU0-RSU1
R3=RUS0+IUS1
I3=IUS0-RUS1
R4=RSS0-RSS1
I4=ISS0-ISS1
R5=RUU0-RUU1
I5=IUU0-IUU1
R6=RSU0-ISU1
I6=ISU0+RSU1
R7=RUS0-IUS1
I7=IUS0+RUS1
X4(K)=R1*C1+I1*S1
Y4 (K) =I1*C1-R1*S1 X2 (K) =R2*C2+I2*S2
Y2(K)=I2*C2-R2*S2
X6(K)=R3*C3+I3*S3
Y6(K)=I3*C3-R3*S3
X1(K)=R4*C4+I4*S4
Y1(K)=I4*C4-R4*S4
X5 (K) =R5*C5+I5*S5
Y5(K)=I5*C5-R5*S5
X3(K)=R6*C6+I6*S6
Y3(K)=I6*C6-R6*S6
X7(K)=R7*C7+I7*S7
Y7(K)=I7*C7-R7*S7
GO TO 400
300 CONTINUE
X4(K)=RUU0+RUU1
Y4 (K) =IUU0+IUU1
X2 (K) =RSU0+ISU1
Y2(K)=ISU0-RSU1
X6 (K) =RUS0+IUS1
Y6(K)=IUS0-RUS1
X1(K)=RSS0-RSS1
Y1(K)=ISS0-ISS1
X5(K)=RUU0-RUU1
Y5(K)=IUU0-IUU1
X3(K)=RSU0-ISU1
Y3 (K) =ISU0+RSU1
X7(K)=RUS0-IUS1
Y7 (K) =IUS0+RUS1
400 CONTINUE
420 CONTINUE
440 CONTINUE
500 CONTINUE
IF (FOLD) GO TO 100
600 CONTINUE
C
RETURN
END
SUBROUTINE RP CFTK (N, M, P, R, X, Y, DIM)
C RADIX PRIME MULTI-DIMENSIONAL COMPLEX FOURIER TRANSFORM KERNEL
C
INTEGER N, M, P, R, DIM(5)
REAL X(R,P), Y(R,P)
C
LOGICAL FOLD, ZERO
REAL ANGLE, IS, IU,RS,RU,T,TWOPI,XT,YT
REAL FU, FP, FJM1, FMP
INTEGER J,JJ,K0,K,M OVER 2,MP,PM,
PP,U,V
INTEGER K1, K2, KK, L, L1, MMP, NT,
SIZE, SEP
INTEGER NS
REAL AA(9,9), BB(9,9)
REAL A(18), B(18), C(18), S(18)
REAL IA(9), IB(9), RA(9), RB(9)
DATA TWO PI/6.283185/
C
NT = DIM(1)
SEP = DIM (2)
L1 = DIM(3)
SIZE = DIM(4) - 1
K2 = DIM(5)
NS = N*SEP
M OVER 2=M/2+1
MP=M*P
FMP = FLOAT (MP)
MMP = SEP*MP
PP=P/2
PM=P-1
FP = FLOAT (P)
FU = 0.0
DO 100 U=1,PP
FU = FU + 1.0
ANGLE = TWO PI*FU/FP
JJ=P-U
A(U)=COS(ANGLE)
B(U) =SIN(ANGLE)
A(JJ)=A(U)
B(JJ)=-B(U)
100 CONTINUE
DO 300 U=1,PP
DO 200 V=1,PP
JJ=U*V-U*V/P*P
AA(V,U)=A(JJ)
BB(V,U)=B(JJ)
200 CONTINUE
300 CONTINUE
C
FJM1 = -1.0
DO 1500 J=1,M OVER 2
FOLD=J.GT.1 .AND. 2*J.LT.M+2
K0 = (J-1)*SEP + 1
FJM1 = FJM1 + 1.0
ANGLE = TWO PI*FJM1/FMP
ZERO=ANGLE.EQ.0.0
IF (ZERO) GO TO 700
C(1)=COS (ANGLE)
S(1)=SIN(ANGLE)
DO 400 U=2,PM
C(U)=C(U-1)*C(1)-S(U-1)*S(1)
S (U) =S (U-1) *C (1) +C (U-1) *S (1)
400 CONTINUE
GO TO 700
500 CONTINUE
FOLD= .FALSE.
K0 = (M+1-J)*SEP + 1
DO 600 U=1,PM
T=C(U)*A(U)+S(U)*B(U)
S(U)=-S(U)*A(U)+C(U)*B(U)
C(U)=T
600 CONTINUE
700 CONTINUE
C
DO 1400 KK = K0, NS, MMP
DO 1340 L = KK, NT, L1
K1 = L + SIZE
DO 1320 K = L, K1, K2
XT=X(K,1)
YT=Y(K,1)
RS=X(K,2)+X(K,P)
IS=Y(K,2)+Y(K,P)
RU=X(K,2)-X(K,P)
IU=Y(K,2)-Y(K,P)
DO 800 U=1,PP
RA (U) =XT+RS*AA (U, 1)
IA (U) =YT+IS*AA (U, 1)
RB(U)=RU*BB(U,1)
IB(U)=IU*BB(U,1)
800 CONTINUE
XT=XT+RS
YT=YT+IS
DO 1000 U=2,PP
JJ=P-U
RS=X (K,U+1) +X (K, JJ+1)
IS=Y(K,U+1)+Y(K,JJ+1)
RU=X(K,U+1) -X(K, JJ+1)
IU=Y(K,U+1) -Y(K,JJ+1)
XT=XT+RS
YT=YT+IS
DO 900 V=1,PP
RA(V) =RA (V) +RS*AA(V,U)
IA(V)=IA(V)+IS*AA(V,U)
RB (V) =RB (V) +RU*BB (V,U)
IB(V)=IB(V)+IU*BB(V,U)
900 CONTINUE
1000 CONTINUE
X(K,1)=XT
Y(K,1)=YT
DO 1300 U=1,PP
JJ=P-U
IF (ZERO) GO TO 1100
XT=RA (U) +IB (U)
YT=IA(U)-RB(U)
X(K,U+1) =XT*C (U) +YT*S (U)
Y(K,U+1)=YT*C(U)-XT*S(U)
XT=RA(U)-IB(U)
YT=IA(U)+RB(U)
X (K, JJ+1) =XT*C (JJ) +YT*S (JJ)
Y (K, JJ+1) =YT*C (JJ) -XT*S (JJ)
GO TO 1200
1100 CONTINUE
X(K,U+1)=RA(U)+IB(U)
Y (K,U+1) =IA (U) -RB (U) X(K, JJ+1)=RA(U) -IB(U)
Y (K, JJ+1) =IA (U) +RB (U)
1200 CONTINUE
1300 CONTINUE
1320 CONTINUE
1340 CONTINUE
1400 CONTINUE
IF (FOLD) GO TO 500
1500 CONTINUE
C
RETURN
END
cccccccccccccccccccccccccccccccccccccccccccccccc

Claims

C L A I M S
1. Optical apparatus comprising:
a multifocal imager defining a depth domain;
a sensor receiving light from a scene via said multifocal imager; and
an image processor for receiving an output of said sensor and for providing an in-focus image of the portion of the scene falling within the depth domain.
2. Optical apparatus according to claim 1, and wherein said image processor operates in real time.
3. Optical apparatus according to claim 1, and wherein said image processor operates off line with respect to said sensor.
4. Optical apparatus according to claim 3, and wherein said sensor comprises
photographic film.
5. Optical apparatus according to claim
3, and wherein said image processor comprises an image digitizer operative to provide a digital representation of the scene to the image
processor.
6. Optical apparatus according to any of the preceding claims, and wherein said
multifocal imager defines a plurality of
surfaces each arranged to receive light from a scene, each of said plurality of surfaces having a different focal length.
7. Optical apparatus according to claim 6, and wherein said sensor receives light from the scene via each of said plurality of
surfaces, such that a different part of said light from said scene received via each of said plurality of surfaces is in focus.
8. Optical apparatus according to claim 7, and wherein said image processor is operative to provide a composite image built up using in-focus parts received from each of said plurality of surfaces.
9. Optical apparatus according to any of claims 1-8, and wherein said imager comprises a lens.
10. Optical apparatus according to any of claims 1-8, and wherein said imager comprises a mirror system.
11. Optical apparatus according to any of the preceding claims, and wherein said imager has an invertible transfer function and said image processor comprises a computational unit for inverting the transfer function, thereby to restore sharp details of said scene.
12. Optical apparatus according to claim 11, and wherein said transfer function includes no zeros for predetermined domains of distance from the imager.
13. Optical apparatus according to claim 11, and wherein said transfer function includes no zeros.
14. Optical apparatus according to claim 11, and wherein the absolute value of said transfer function has a predetermined lower bound which is sufficiently large to permit image restoration.
15. Optical apparatus according to claim 11, and wherein the absolute value of said transfer function has a predetermined lower bound which is sufficiently large to permit image restoration for a predetermined domain of distances from the imager.
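By way of illustration only, and not as part of the claims, the transfer function inversion of claims 11-15 may be sketched in the FORTRAN conventions of the foregoing listing. The routine name INVFLT, the lower bound EPS, and the assumption that OTF already holds the forward transform of the wrapped point spread function are hypothetical, not drawn from the listing:

SUBROUTINE INVFLT (PIC,OTF,NX,NY,EPS)
CCC SKETCH ONLY: DIVIDES THE IMAGE SPECTRUM BY THE IMAGER TRANSFER
CCC FUNCTION WHEREVER ITS MAGNITUDE EXCEEDS THE LOWER BOUND EPS.
CCC PIC AND OTF ARE PACKED AS BY TODFFT IN THE LISTING ABOVE.
REAL PIC(NX+2,NY),OTF(NX+2,NY)
CALL TODFFT (PIC,NX,NY,0)
DO 10 J=1,NY
DO 10 I=1,NX+1,2
D=OTF(I,J)**2+OTF(I+1,J)**2
IF (D.LT.EPS**2) GO TO 10 ! LEAVE UNINVERTIBLE FREQUENCIES UNTOUCHED
PR=(PIC(I,J)*OTF(I,J)+PIC(I+1,J)*OTF(I+1,J))/D
PI=(PIC(I+1,J)*OTF(I,J)-PIC(I,J)*OTF(I+1,J))/D
PIC(I,J)=PR
PIC(I+1,J)=PI
10 CONTINUE
CALL TODFFT (PIC,NX,NY,1)
RETURN
END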
16. An image processing method comprising the steps of:
providing a digital representation of a viewed scene comprising at least two scene locations whose distances from the point of view are nonequal;
dividing the digital representation of the viewed scene into a multiplicity of digital representations of a corresponding plurality of portions of the viewed scene and separately sharpening each of the multiplicity of digital representations; and
assembling the multiplicity of
sharpened digital representations into a single sharpened digital representation.
17. A method according to claim 16, wherein said step of dividing and sharpening comprises the steps of:
operating a plurality of restoration filters on each of the multiplicity of digital representations; and
selecting, for each individual one of the multiplicity of digital representations, an individual one of the plurality of restoration filters to be employed in providing the
sharpened digital representation of the
individual one of the multiplicity of digital representations.
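Again for illustration only, the dividing, sharpening and assembling steps of claims 16 and 17 may be realized along the following lines. The tile size NT, the number of candidate restoration filters NF, and the routines RESTORE and FOCMEAS (a sharpness measure) are hypothetical names, not part of the foregoing listing; the sketch assumes NT divides NX and NY and does not exceed 64:

SUBROUTINE DIVSHA (IMG,OUT,NX,NY,NT,NF)
CCC SKETCH ONLY: RUNS NF CANDIDATE RESTORATION FILTERS ON EACH
CCC NT-BY-NT TILE, KEEPS THE SHARPEST RESULT AND ASSEMBLES THE
CCC KEPT TILES INTO OUT.
REAL IMG(NX,NY),OUT(NX,NY),TILE(64,64),TRY(64,64)
DO 40 J0=1,NY,NT
DO 40 I0=1,NX,NT
DO 10 J=1,NT
DO 10 I=1,NT
10 TILE(I,J)=IMG(I0+I-1,J0+J-1)
BESTQ=-1.E38
DO 20 K=1,NF
CALL RESTORE (TILE,TRY,NT,K) ! K-TH CANDIDATE FILTER (ASSUMED)
Q=FOCMEAS(TRY,NT) ! SHARPNESS MEASURE (ASSUMED)
IF (Q.LE.BESTQ) GO TO 20
BESTQ=Q
DO 15 J=1,NT
DO 15 I=1,NT
15 OUT(I0+I-1,J0+J-1)=TRY(I,J)
20 CONTINUE
40 CONTINUE
RETURN
END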
18. Video camera apparatus comprising an imager defining a depth domain and a sensor receiving light from a scene via said imager for generating video images,
wherein the imager comprises a multifocal imager, and
wherein the video camera apparatus also comprises an image processor for receiving an output of said sensor and for providing an in-focus image of the portion of the scene falling within the depth domain.
19. Electronic still camera apparatus comprising an imager defining a depth domain and a sensor receiving light from a scene via said imager for generating electronic still images, wherein the imager comprises a
multifocal imager, and
wherein the electronic still camera apparatus also comprises an image processor for receiving an output of said sensor and for providing an in-focus image of the portion of the scene falling within the depth domain.
20. Camcorder apparatus comprising an imager defining a depth domain and a sensor receiving light from a scene via said imager for generating video images,
wherein the imager comprises a
multifocal imager, and
wherein the camcorder apparatus also comprises an image processor for receiving an output of said sensor and for providing an in-focus image of the portion of the scene falling within the depth domain.
21. Film camera development apparatus comprising:
a multifocal-combining image processor operative to receive a multifocal representation of a scene and to provide an in-focus
representation of a portion of the scene falling within a predetermined depth domain.
22. Microscope apparatus comprising:
an objective lens comprising a multifocal imager defining a depth domain;
a sensor receiving light from a scene, having more than two dimensions, via said multifocal imager; and
an image processor for receiving an output of said sensor and for providing an in-focus image of a plurality of planes within the scene which fall within the depth domain.
23. Apparatus for internal imaging of the human body comprising:
a multifocal imager defining a depth domain, which is configured for insertion into the human body;
a sensor receiving light from an internal portion of the human body via said imager; and
an image processor for receiving an output of said sensor and for providing an in-focus image of the internal portion of the human body falling within the depth domain.
24. Apparatus according to claim 1, wherein the multifocal imager is operative to provide an image, including a superposition of in-focus contributions of a plurality of planes, to a single sensor.
25. Apparatus according to claim 1, wherein the multifocal imager is operative to provide an image, including a superposition of in-focus contributions of an infinite number of planes.
26. Apparatus according to claim 1, wherein said sensor comprises a single sensor.
27. An imaging method comprising the steps of:
providing a multifocal imager defining a depth domain;
sensing light from a scene via said multifocal imager; and
receiving an output of said sensor and image processing the output, thereby to provide an in-focus image of the portion of the scene falling within the depth domain.
28. Apparatus substantially as shown and described hereinabove.
29. Apparatus substantially as illustrated in any of the drawings.
30. A method substantially as shown and described hereinabove.
31. A method substantially as illustrated in any of the drawings.
PCT/US1993/000330 1992-01-14 1993-01-13 Multifocal optical apparatus WO1993014384A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP5512658A JPH07505725A (en) 1992-01-14 1993-01-13 multifocal optical device
EP19930904495 EP0623209A4 (en) 1992-01-14 1993-01-13 Multifocal optical apparatus.
GB9413367A GB9413367D0 (en) 1992-01-14 1994-07-01 Multifocal optical apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL100657A IL100657A0 (en) 1992-01-14 1992-01-14 Multifocal optical apparatus
IL100,657 1992-01-14

Publications (2)

Publication Number Publication Date
WO1993014384A1 true WO1993014384A1 (en) 1993-07-22
WO1993014384B1 WO1993014384B1 (en) 1995-12-14

Family

ID=11063273

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1993/000330 WO1993014384A1 (en) 1992-01-14 1993-01-13 Multifocal optical apparatus

Country Status (6)

Country Link
EP (1) EP0623209A4 (en)
JP (1) JPH07505725A (en)
AU (1) AU3583193A (en)
GB (1) GB9413367D0 (en)
IL (1) IL100657A0 (en)
WO (1) WO1993014384A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1468314A2 (en) * 2001-12-18 2004-10-20 University Of Rochester Imaging using a multifocal aspheric lens to obtain extended depth of field
WO2006041219A2 (en) * 2004-10-15 2006-04-20 Matsushita Electric Industrial Co., Ltd. Enhancement of an image acquired with a multifocal lens
US7336430B2 (en) 2004-09-03 2008-02-26 Micron Technology, Inc. Extended depth of field using a multi-focal length lens with a controlled range of spherical aberration and a centrally obscured aperture
WO2009061439A2 (en) * 2007-11-06 2009-05-14 Tessera North America, Inc. Determinate and indeterminate optical systems
EP2399157A1 (en) * 2009-02-20 2011-12-28 Thales Canada Inc. Dual field-of-view optical imaging system with dual focus lens

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3767934A (en) * 1971-06-16 1973-10-23 Exxon Co Fault responsive switching system
US4485303A (en) * 1981-07-31 1984-11-27 Asahi Kogaku Kogyo Kabushiki Kaisha Focus detection with splitter lens
US4548495A (en) * 1981-02-27 1985-10-22 Takeomi Suzuki Proper focusing state detecting device
US4874239A (en) * 1983-08-10 1989-10-17 Canon Kabushiki Kaisha Distance measuring device
US4899041A (en) * 1987-09-02 1990-02-06 Erwin Sick Gmbh Optik-Elektronik Light sensor for detecting an object in a certain distance
US4993830A (en) * 1989-12-18 1991-02-19 Systronics, Incorporated Depth and distance measuring system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4745484A (en) * 1986-07-30 1988-05-17 Drexler Technology Corporation Method and apparatus for stepped imaging in reading data
FR2609817B1 (en) * 1987-01-21 1989-05-05 Matra METHOD AND DEVICE FOR SHOOTING WITH A LARGE DEPTH OF FIELD
DE3905619C2 (en) * 1988-02-23 2000-04-13 Olympus Optical Co Image input / output device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3767934A (en) * 1971-06-16 1973-10-23 Exxon Co Fault responsive switching system
US4548495A (en) * 1981-02-27 1985-10-22 Takeomi Suzuki Proper focusing state detecting device
US4485303A (en) * 1981-07-31 1984-11-27 Asahi Kogaku Kogyo Kabushiki Kaisha Focus detection with splitter lens
US4874239A (en) * 1983-08-10 1989-10-17 Canon Kabushiki Kaisha Distance measuring device
US4899041A (en) * 1987-09-02 1990-02-06 Erwin Sick Gmbh Optik-Elektronik Light sensor for detecting an object in a certain distance
US4993830A (en) * 1989-12-18 1991-02-19 Systronics, Incorporated Depth and distance measuring system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0623209A4 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1468314A4 (en) * 2001-12-18 2006-12-13 Univ Rochester Imaging using a multifocal aspheric lens to obtain extended depth of field
EP1468314A2 (en) * 2001-12-18 2004-10-20 University Of Rochester Imaging using a multifocal aspheric lens to obtain extended depth of field
US7554750B2 (en) 2001-12-18 2009-06-30 The University Of Rochester Imaging using a multifocal aspheric lens to obtain extended depth of field
US7593161B2 (en) * 2004-09-03 2009-09-22 Aptina Imaging Corporation Apparatus and method for extended depth of field imaging
US7336430B2 (en) 2004-09-03 2008-02-26 Micron Technology, Inc. Extended depth of field using a multi-focal length lens with a controlled range of spherical aberration and a centrally obscured aperture
US7511895B2 (en) * 2004-09-03 2009-03-31 Aptina Imaging Corporation Apparatus and method for extended depth of field imaging
US8086058B2 (en) 2004-09-03 2011-12-27 Aptina Imaging Corporation Extended depth of field using a multi-focal length lens with a controlled range of spherical aberration and a centrally obscured aperture
WO2006041219A3 (en) * 2004-10-15 2006-11-16 Matsushita Electric Ind Co Ltd Enhancement of an image acquired with a multifocal lens
WO2006041219A2 (en) * 2004-10-15 2006-04-20 Matsushita Electric Industrial Co., Ltd. Enhancement of an image acquired with a multifocal lens
WO2009061439A2 (en) * 2007-11-06 2009-05-14 Tessera North America, Inc. Determinate and indeterminate optical systems
WO2009061439A3 (en) * 2007-11-06 2009-06-25 Tessera North America Inc Determinate and indeterminate optical systems
EP2399157A1 (en) * 2009-02-20 2011-12-28 Thales Canada Inc. Dual field-of-view optical imaging system with dual focus lens
EP2399157A4 (en) * 2009-02-20 2012-06-20 Thales Canada Inc Dual field-of-view optical imaging system with dual focus lens

Also Published As

Publication number Publication date
AU3583193A (en) 1993-08-03
JPH07505725A (en) 1995-06-22
GB9413367D0 (en) 1994-08-31
IL100657A0 (en) 1992-09-06
EP0623209A4 (en) 1994-11-17
EP0623209A1 (en) 1994-11-09

Similar Documents

Publication Publication Date Title
JP5742179B2 (en) Imaging apparatus, image processing apparatus, image processing method, and program
US7723662B2 (en) Microscopy arrangements and approaches
JP5274623B2 (en) Image processing apparatus, imaging apparatus, image processing program, and image processing method
US8743245B2 (en) Image processing method, image pickup apparatus, image processing apparatus, and non-transitory computer-readable storage medium
US11928794B2 (en) Image processing device, image processing program, image processing method, and imaging device
CN103533227A (en) Image pickup apparatus and lens apparatus
JP5677366B2 (en) Imaging device
JP6576046B2 (en) Compound eye imaging device
CN102959945A (en) Method and system for producing a virtual output image from data obtained by an array of image capturing devices
CA2577735A1 (en) Extended depth of field using a multi-focal length lens with a controlled range of spherical aberration and centrally obscured aperture
JP2011118235A (en) Imaging apparatus
JP2012073691A (en) Image processing system, imaging apparatus, image processing method, and program
JP3013721B2 (en) Optical device having digital image forming means
US20090201386A1 (en) Image processing apparatus, image processing method, image capturing apparatus, and medium storing a program
Chan et al. Super-resolution reconstruction in a computational compound-eye imaging system
Labussière et al. Leveraging blur information for plenoptic camera calibration
WO2013124664A1 (en) A method and apparatus for imaging through a time-varying inhomogeneous medium
WO1993014384A1 (en) Multifocal optical apparatus
JPH0887600A (en) Edge detection device
JP2007156749A (en) Image input device
JP2013236291A (en) Stereoscopic imaging apparatus
JP2017034682A (en) Image generation apparatus, image processing apparatus, image generation method, and image processing program
JP2013033496A (en) Image processing apparatus, image pickup apparatus, image processing method, and program
JP6330955B2 (en) Imaging apparatus and imaging method
JP5820650B2 (en) Imaging device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR CA CH CZ DE DK ES FI GB HU JP KP KR LK LU MG MN MW NL NO NZ PL RO RU SD SE SK UA US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 1993904495

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1993904495

Country of ref document: EP

ENP Entry into the national phase

Ref country code: US

Ref document number: 1995 256559

Date of ref document: 19950123

Kind code of ref document: A

Format of ref document f/p: F

NENP Non-entry into the national phase

Ref country code: CA

WWW Wipo information: withdrawn in national office

Ref document number: 1993904495

Country of ref document: EP