GB2173663A - Super resolution imaging system - Google Patents

Super resolution imaging system

Info

Publication number
GB2173663A
GB2173663A
Authority
GB
United Kingdom
Prior art keywords
image
imaging system
function
functions
weight
Prior art date
Legal status
Granted
Application number
GB08511465A
Other versions
GB2173663B (en)
Inventor
Stephen Piers Luttrell
Christopher John Oliver
Current Assignee
UK Secretary of State for Defence
Original Assignee
UK Secretary of State for Defence
Priority date
Filing date
Publication date
Application filed by UK Secretary of State for Defence
Publication of GB2173663A
Application granted
Publication of GB2173663B
Expired

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/02Systems using reflection of radio waves, e.g. primary radar systems; Analogous systems
    • G01S13/0209Systems with very large relative bandwidth, i.e. larger than 10 %, e.g. baseband, pulse, carrier-free, ultrawideband

Description

SPECIFICATION
Super resolution imaging system

This invention relates to a super resolution imaging system of the kind employing coherent radiation to illuminate a scene.
Super resolution or resolution enhancement in imaging systems is known, as set out for example in published United Kingdom Patent Application No 2,113,501A (Reference 1). This reference describes resolution enhancement in an optical microscope. The microscope comprises a laser illuminating a small area of an object plane and means for focussing light from the object plane on to an image plane containing a two dimensional array of detectors. Each detector output is processed to derive the complex amplitude and phase of the light or image element incident on it. A mathematical analysis of the image information is employed to reconstruct the illuminated object region. The analysis incorporates the constraints that the object illumination is zero outside a predetermined region, referred to as the support, and that a focussing device of known spatial impulse response or optical transfer function is employed to produce the image information. The known impulse response and support are analysed mathematically to produce related image and object space singular functions into which image data may be decomposed, and from which object data may be reconstructed. The process is analogous to Fourier spectral analysis. The net effect of this is that the object can be reconstructed from the image data with better resolution than that provided by the image data alone. Resolution within the classical diffraction limit, the Rayleigh criterion, can be obtained. The mathematical analysis is discussed in detail by Bertero and Pike, Optica Acta, 1982, Vol 29, No 6, pp 727-746 (Reference 2).
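The singular function method of References 1 and 2 can be pictured numerically. The following sketch is purely illustrative and forms no part of the patent: it assumes a discretised impulse response matrix T, and uses the fact that the image space and object space singular functions are then the left and right singular vectors of T, with reconstruction a truncated expansion in those functions. The function name and threshold rule are assumptions.

    # Illustrative sketch (not from the patent): object reconstruction by
    # truncated singular function decomposition with a known impulse response T.
    import numpy as np

    def svd_reconstruct(image, T, rel_threshold=1e-3):
        # Columns of U are image space singular functions; columns of V
        # (conjugate-transposed rows of Vh) are object space singular functions.
        U, s, Vh = np.linalg.svd(T, full_matrices=False)
        coeffs = U.conj().T @ image          # decompose image, cf. Fourier analysis
        keep = s > rel_threshold * s[0]      # omit noise-corrupted singular functions
        return Vh.conj().T[:, keep] @ (coeffs[keep] / s[keep])

Retaining only singular values above a noise-related threshold is what keeps the inversion stable; without truncation the division by small s[i] amplifies noise.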
Reference 1 is applicable to any imaging system, ie to optics, radar and sonar. It can be considered in broad terms as employing a single transmitter with a number of detectors, or alternatively as a transmitter with a movable or scanning detector. This corresponds to a bistatic radar system for example. However, in many important cases imaging systems employ a single coupled transmitter and receiver, as for example in monostatic radar equipment, sonar and laser range-finders or lidar. In radar and sonar, the transmitter and receiver are commonly the same device, ie a radar antenna or a sonar transducer array. In lidar, the laser transmitter and the detectors are coupled. In any of these cases, the transmitter/receiver combination may be scanned to provide the effect of equal numbers of transmitters and receivers, as occurs in air traffic control radar and synthetic aperture radar. In these and analogous optical and sonar equipments it would be expensive and undesirably complex to provide a plurality of detectors per transmitter, or to decouple the transmitter and receiver and scan the latter. Furthermore, Reference 1 provides no improvement in range information. It enhances resolution in directions orthogonal to the range dimension, ie the transmitter-target direction. In radar, sonar or lidar employed to determine target range, this would be an undesirable limitation.
It is an object of the present invention to provide an alternative form of imaging system adapted for resolution enhancement.
The present invention provides an imaging system having a given impulse response and including:-
(1) an imaging device arranged to provide complex amplitude image data,
(2) means for generating from image data a weight function appropriate to distinguish weak and strong image features,
(3) means for reconstructing object data from a singular function decomposition of image data on the basis of singular functions derived from the weight function and system impulse response, and
(4) means for generating an image from reconstructed object data.
The invention provides super resolution by generating singular functions from system impulse response and the weight function. The singular functions are employed in an image decomposition analogous to Fourier spectral analysis, and for subsequent object reconstruction. The weight function expresses the general expected form of the object giving rise to the image data, and is based on prior experience or knowledge of typical imaged objects. For example, an image containing a single intense diffraction maximum might in theory correspond to any assembly of scattering objects giving rise to constructive interference at the maximum and destructive interference elsewhere. In practice, it is overwhelmingly more probable that the maximum corresponds in position to a localised target distribution, and the weight function expresses this. The net effect of incorporating the weight in an image data analysis by singular function decomposition in accordance with the invention is that resolution may be enhanced over that obtainable according to the classical Rayleigh criterion.
The invention is applicable to any imaging system arranged to provide complex image data, such as radar, sonar or lidar. It is not restricted as regards the number of dimensions in which imaging is performed. Unlike References 1 and 2, in appropriate embodiments it is capable of enhancing range information.
The means for reconstructing object data preferably includes computing means arranged to:-
(1) provide image and object space singular functions from the weight function and system impulse response,
(2) decompose image data into a linear combination of image space singular functions,
(3) convert the image space singular function combination into a corresponding object decomposition, and
(4) reconstruct object data from its decomposition.
The computing means may also be arranged to omit noise-corrupted singular functions from the object reconstruction. Singular functions may be calculated from the weight function and system impulse response. Alternatively, generated weight functions may be matched with previously stored weight functions with corresponding pre-calculated singular functions. Provision of singular functions then merely involves selection.
In a preferred embodiment, the means for generating a weight function is arranged to assign each image pixel a respective weight value according to its intensity relative to nearby or local pixels. The weight function then consists of pixel weight as a function of pixel number. Individual pixel weights preferably vary in accordance with respective intensity if in excess of a threshold based on average local pixel intensity. Pixels with intensities not exceeding this threshold are preferably assigned a weight value based on local pixel intensities not exceeding the threshold.
The means for generating an image from reconstructed object data may include an envelope detector to provide amplitude modulus values and a visual display device.
The imaging system of the invention may include means for iterating weight function generation and reconstruction of object data. Such means would be operative to employ reconstructed object data instead of image data for weight function generation and iterative reconstruction of object data. It may preferably include means for terminating iteration when image enhancement reduces to an insignificant level.
In order that the invention might be more fully understood, embodiments thereof will now be described with reference to the accompanying drawings, in which:
Figure 1 is a schematic functional block diagram of an imaging system of the invention,
Figures 2 and 3 are more detailed representations of a weight function generator and iteration controller respectively appearing in Figure 1, both being schematic functional drawings,
Figure 4 provides two-dimensional contour drawings illustrating object reconstruction in accordance with the invention,
Figure 5 provides two-dimensional target and image contour drawings as produced in a conventional radar system,
Figure 6 illustrates unweighted singular functions,
Figure 7 illustrates weighted singular functions produced in accordance with the invention and employing Figure 5 image data,
Figure 8 illustrates object reconstruction with the singular functions of Figure 7, and
Figure 9 is a schematic drawing of part of a lidar system.
Referring to Figure 1, there is shown a schematic functional drawing (not to scale) of a pulse compression radar system of the invention. The system incorporates an antenna 10 producing a radar beam 11 illuminating a landscape schematically indicated in one (range) dimension at 12. The landscape 12 is effectively shown as a sectional side elevation. It contains three major scattering objects 13, 14 and 15 in an area not greater than the classical Rayleigh resolution limit of the radar system. The objects 13 to 15 appear in a clutter-generating background 16 of comparatively weak scatterers. The objects 13 to 15 generate return signals to the antenna 10 which are 18 dB stronger in intensity than clutter signals. The radar system has a 20 nanosecond pulse width, a 40 MHz bandwidth and a resolution limit of 5 metres. Each pixel of a corresponding radar display would normally be equivalent to 5 metres in range and in an orthogonal scan dimension. The outer objects 13 and 15 in the landscape 12 are 5 metres apart.
The antenna 10 is connected to a radar transmitter/receiver unit 20 and thence to a heterodyne signal processing (SP) unit 21 incorporating a local oscillator (not shown). The transmitter/receiver unit 20 and SP unit 21 are conventional radar apparatus and will not be described further. The SP unit 21 provides in-phase and quadrature or P and Q signals, ie complex image data, to a frame store 22 connected to a data select unit 23. P and Q signals pass from the data select unit 23 to an envelope detector 24 producing modulus values (P² + Q²)½, and thence to an image store 25 and display device 26.
The display device 26 comprises a 33 × 33 pixel (picture element) array, implying a time/bandwidth product of 33 for the radar system in two dimensions, range and bearing. Each pixel corresponds to a 1¼ metre resolution cell in the range and scan dimensions. This in fact corresponds to oversampling of the image, since the fundamental or unenhanced resolution of the system is 5 metres. Normally, one pixel would be equivalent to one 5 metre resolution cell. It is portrayed thus for clarity of illustrating image enhancement in accordance with the invention. The objects 13 to 15 are imaged on the display device 26 as a single or unresolved diffraction lobe 27 extending across 6 pixels and surrounded by a speckle pattern of clutter features such as 28. The lobe 27 and features 28 are shown as two-dimensional contour plots. The width of the lobe 27 indicates the radar system diffraction limit.
Image modulus information also passes from the image store 25 to an intensity weight function generator 30, and thence to a computer 31 indicated by chain lines containing a flow diagram of operation. As will be described later in more detail, the computer 31 generates singular functions at 32 for subsequent image decomposition. It combines an amplitude weight function (based on the intensity weight function generated at 30) with the radar system impulse response stored at 33. The impulse response is the image the system produces of a point source or delta function. The singular functions are employed at 34 to decompose complex data, which is received directly from the frame store 22, in a similar fashion to spectral analysis into Fourier components. The image data become a linear combination of the singular functions with function coefficients calculated in the decomposition. Terms in this linear combination which are strongly affected by noise are removed at 35. The remaining terms are converted at 36 into an equivalent object decomposition. Object reconstruction is carried out at 37 to produce calculated P and Q values for each image pixel. This is similar to reconstitution of a signal from its Fourier spectral components. An envelope detector 38 generates the moduli of the P and Q values, which pass via an object store 39 to a second display device 40. The device 40 displays contour plots of three two-dimensional diffraction lobes 41 to 43. These correspond to the super-resolved objects 13 to 15, in contrast to the single unresolved diffraction lobe 27 at the classical diffraction limit. The lobes 41 to 43 are accompanied by false images such as 44 arising from unusually intense clutter.
The object reconstruction or calculated P and Q values are also passed to an iteration controller 47, which also receives the original image data from the frame store 22. The controller 47 detects any change between the original image and the reconstructed object. If the change is significant, the controller 47 repeats the resolution enhancement procedure. To achieve this, a control signal is sent via a line 48 to the data select unit 23. The unit 23 then reads reconstructed object data from the iteration controller 47 via a line 49. The resolution enhancement process then proceeds once more using the reconstructed object data as input for both weight and singular function generation, and the original image data is decomposed once more. This iterative procedure continues by comparing each reconstructed object with that preceding. It terminates when the enhanced or reconstructed object does not change significantly between successive iterations.
Referring now to Figure 2, there is shown a schematic functional diagram of the weight function generator indicated within chain lines. Parts previously mentioned are like-referenced. As indicated at 50, pixel intensities or values of (P² + Q²) in successive image sub-areas up to 7 × 7 pixels in extent are addressed in the image store 25. Each sub-area has a central or equivalent pixel for which a weight is to be calculated on the basis of a statistical analysis of pixel intensities in the sub-area. Pixels near display edges having fewer than three pixels to one or two sides have respective weights calculated from all available pixels up to three pixels distant. This gives weight determination over a minimum of sixteen pixels for a corner pixel, and a maximum of forty-nine pixels for those spaced by at least three pixels from any display edge. At 51, the mean <I(k)> and variance Var[I(k)] of the respective sub-area pixel intensities I(k) are calculated for all pixels in the sub-area, the parameter k indicating each individual pixel number. The results of these calculations are employed at 52 to derive α(k), a contrast coefficient term in the weight function (to be described later). They are also employed at 53 to set an adaptive threshold equal to 5<I(k)>, ie the threshold is an intensity level equal to five times the mean intensity. At 54, each sub-area pixel intensity addressed is compared with the threshold 5<I(k)>. Intensities below that threshold are treated as clutter signals or background, and those above as detected targets. Sub-threshold intensities are employed at 55 to calculate a background mean intensity for each sub-area ignoring above-threshold pixel intensities. The value of α(k) and the background mean are used at 56 to generate a weight value for the central pixel of the respective sub-area, or that corresponding for sub-areas containing fewer than forty-nine pixels. This procedure is repeated until all pixel intensities in the display have a weight value. The weight values collectively form a weight function for singular function generation at 32 in the computer 31.
Referring now to Figure 3, there is shown a schematic functional diagram of the iteration controller 47 indicated within chain lines. Parts previously mentioned are like-referenced. Object reconstruction data received by the controller 47 pass to a current object store 60, which simultaneously outputs any object data held from an immediately preceding iteration cycle to a preceding object store 61. The store 61 initially holds original image data received from the frame store 22. The contents of the current and preceding stores 60 and 61 are read via respective envelope detectors 62 and 63 to a difference calculator 64. The calculator 64 produces the squared sum of all individual pixel intensity changes over all 33 × 33 pixels, and divides this by the squared sum of all current pixel intensities. If the result is greater than 10⁻⁴, indicating an overall amplitude change >1%, a control signal is sent via line 48 to data select unit 23. The unit 23 then reads the contents of current object store 60 for subsequent envelope detection and generation of weight and singular functions. A further iteration or enhancement is then carried out upon the original image data. On receiving a second reconstructed data set, the current object store 60 passes the set received one iteration cycle earlier to the preceding object store 61, which ejects previously held data. Comparison between successive reconstructions then proceeds as before. Iteration cycling continues until successive object reconstructions differ by less than 1%, when display device 40 indicates a final reconstruction.
Referring now to Figure 4, there are shown four two-dimensional displays in the form of contour graphs 71 to 74. These schematically represent 33 × 33 pixel radar displays such as 26 and 40 in Figure 1, and each consists of amplitude modulus plotted in contours against range R and scan angle or bearing θ. These graphs were obtained in a computer simulation of the invention. Graphs 71 and 72 are target and conventional radar image representations respectively, graph 73 is a weight function derived from the data of graph 72, and graph 74 is an object reconstruction. The scales are arbitrary, but the same for all four graphs. Range is derived from pulse time of flight and bearing from antenna position at signal receipt. Graph 71 is a two-dimensional point target represented at high resolution, and consists of a narrow central diffraction lobe 75. Graph 71 could theoretically be represented as a delta function one pixel in extent in range and bearing, but the narrow diffraction lobe 75 or target corresponds to more practical situations.
Graph 72 is a radar image of the target 75, and shows a broad central diffraction lobe 76 indicating the classical Rayleigh limit of diffraction. The lobe 76 is accompanied by a weak clutter background having features such as 77, 78 and 79. If no clutter were present, graph 72 would show only the central lobe 76 less its underlying background. The clutter-free lobe corresponds to the impulse response of the radar system in the range and bearing dimensions, impulse response being defined as the image produced by a point source. It is a calculable or measurable constant of an imaging system. For an optical system, the impulse response is commonly termed the optical transfer function. The corresponding one-dimensional impulse response would be sin x/x, where x is a variable appropriately normalised to the relevant pixel display. This impulse response would be appropriate for a radar system detecting target range on a fixed bearing.
The weight function shown in graph 73 is dominated by a main central lobe 80 corresponding to identification of a localised target, ie target 75. In addition, small weight values 81, 82 and 83 correspond to clutter features incorrectly identified as weak targets. The weight function value is substantially constant other than at 80 to 83, as indicated by the lack of contours. In the case of targets in a non-zero background, it would not be possible to distinguish between weak targets and clutter. The degree to which weak signals are given a significant weight value may be reduced by increasing the discrimination level, but at the expense of suppressing possible desired signals.
Graph 74 illustrates the effect of applying the weight function of graph 73 to image data. A well-resolved main diffraction lobe 85 is shown having five contours, together with weak clutter-produced distortions 86 and 87 of one contour. Resolution is better than that obtainable according to the Rayleigh criterion. In addition, spurious targets 88, 89 and 90 appear weakly (one contour), and correspond respectively to small weight values 81 to 83. It is evident that a significant improvement in resolution has been obtained. This is indicated by comparison of the widths of the inner four contours of the main lobes 76 (image) and 85 (reconstructed object), and corresponds approximately to a factor of 2 improvement in resolution.
Whereas Figure 4 illustrates a two-dimensional display, the invention is independent of the number of dimensions in which it is implemented. Versions of the invention may be used to enhance resolution in any one, two or all three of the range, bearing and elevation dimensions. Appropriately dimensional singular functions, weight functions and impulse responses would be employed.
Referring now to Figure 5, there are shown two computer-simulated graphs 101 and 102 corresponding respectively to a scene and a conventional radar image of the scene. The graphs are the equivalent of graphs 71 and 72 in Figure 4 for a different scene. Graph 101 displays four relatively intense point targets 103 to 106 (cf 75), each having five contours. The targets 103 to 106 appear within a comparatively weak clutter or speckle background indicated generally by 107 and consisting largely of one-contour features. The radar image of graph 102 indicates that the targets 103 to 106 have not been resolved. They are reproduced as a single broad diffraction lobe 108 of five contours, of which the lowest contour 109 is dominated by the effects of clutter. Lobe 108 has a peak value 18 dB in intensity above mean clutter intensity. One and two-contour clutter features accompany the main lobe 108, some of which are indicated by 110 and 111 respectively.
Referring now to Figure 6, there are shown the first sixteen object space singular functions derived from the imaging system impulse response and a uniform weight function. The functions are illustrated as two-dimensional contour plots 121 to 136. These functions are shown for the purposes of comparison with those obtained in accordance with the invention, ie when a weight function generator such as 30 is employed to generate a non-uniform weight function from image data. If the singular functions 121 to 136 were to be employed to reconstruct an object from the image data represented by graph 102, reconstruction would give no resolution improvement whatsoever. The image would be left unchanged apart from minor computational rounding or digitisation errors. This corresponds to the conventional imaging process.
Referring now to Figure 7, there are shown the first sixteen object space singular functions 141 to 156 derived from system impulse response and a non-uniform weight function produced in accordance with the invention as indicated at 30. Image space singular functions are not shown. It can be seen from graphs 141 to 147 in particular that a major effect of introducing a non-uniform weight is to concentrate function magnitude in central graph regions. This corresponds to the position of targets 103 to 106 and image diffraction lobe 108 in Figure 5.
Turning now to Figure 8, object reconstruction is illustrated using the functions 141 to 156 of Figure 7. Figure 8 shows four graphs 161 to 164 corresponding respectively to one, two, three and four iterations of the reconstruction process, ie applying recomputed singular functions to the original image 102 of Figure 5. It can be shown that the main diffraction lobe 108 of image 102 is very largely composed of a linear combination of lower-order functions 141 to 147 together with 149 and 150. Unsuppressed clutter background 110 and 111 is largely reconstructed from higher order functions 148 and 151 to 156.
Comparing object reconstruction in Figure 8 with original image 102, it is seen that one application of the reconstruction procedure shown in graph 161 has improved resolution appreciably. In particular, three maxima 165 have been resolved from the four (103 to 106) originally present but unresolved at 108. Graphs 162 to 164 give the effect of successive iterations of the reconstruction process. The net effect in graph 164 is that the four original targets 103 to 106 are resolved with varying degrees of strength at 166 to 169. A fifth and spurious peak 170 indicates a false target close to the original targets. In addition, a further target is strongly indicated at 171, although this is also spurious. Minor clutter features are reproduced at positions such as 172. It can be seen that the overall effect of the reconstruction process is to produce greatly enhanced resolution at the small expense of introducing a minor amount of spurious information. A radar operator viewing the initial image and final reconstruction has the option of disregarding peaks not corresponding to major features of the original image. The great improvement obtained in accordance with the invention is that diffraction lobes such as 108 are resolved as arising from several small features of a scene, rather than from one large feature as in the prior art. This would permit for example an operator to distinguish the presence of vehicles in a scene containing larger objects.
The process of target or object reconstruction by singular function decomposition will now be described in more detail. Initially, the generation of an intensity weight function W having individual values W(k) will be described. The index k corresponds to pixel number, and may have x and y components kx, ky, or may be a single index if pixels are labelled serially, ie either 1 to 33 for both x and y components or 1 to 1089 for a single index.
A physical object in a real scene is imaged as a bright object superimposed on a clutter background. The intensity distribution of the clutter background arises from interference between a large number of random scatterers, and gives rise to the speckle phenomenon. It is well described by an uncorrelated Gaussian probability distribution as follows:-

P(I) = (1/<I>) exp(-I/<I>)    (1)

where P(I) is the probability of a pixel exhibiting intensity I, and <I> is the mean of all pixel intensities.
It follows from Equation (1) that the relative variance of pixel intensity fluctuations for clutter is given by:-

Var(I)/<I>² = 1    (2)

For an N-look radar, the equivalent relative variance would be given by

Var(I)/<I>² = 1/N    (3)
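Equations (2) and (3) are easily checked numerically. The following snippet is illustrative only, and not part of the patent: it simulates clutter as a complex circular Gaussian field, as Equation (1) assumes, and estimates the relative variance for one look and for a 4-look average.

    # Numerical check of Equations (2) and (3): single-look clutter intensity
    # has Var(I)/<I>^2 of about 1; a 4-look average gives about 1/4.
    import numpy as np

    rng = np.random.default_rng(0)
    field = rng.normal(size=(4, 100_000)) + 1j * rng.normal(size=(4, 100_000))
    I1 = np.abs(field[0])**2                 # single-look intensity
    I4 = (np.abs(field)**2).mean(axis=0)     # 4-look averaged intensity

    print(I1.var() / I1.mean()**2)           # ~1.0, Equation (2)
    print(I4.var() / I4.mean()**2)           # ~0.25, Equation (3) with N = 4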
To compute a weight value for each individual pixel, its intensity is compared with those of nearby pixels. The approach is to determine whether the pixel intensity is comparable with or significantly above the average of its respective nearby pixels. Pixel intensities significantly above an average of nearby pixel intensities are assigned an intensity-dependent weight. Those not differing significantly from this average are accorded a weight which is an average over nearby pixel intensities also classified as background, ie an average over pixels other than those which are significantly intense.
The statistical properties of bright scattering objects in a scene are not known. Accordingly, the procedure is to treat them as having similar properties to those of clutter, ie an uncorrelated Gaussian distribution. As previously indicated, each pixel of an image is assigned an intensity weight value W(k) calculated by comparing its intensity with that of nearby pixels. Pixel intensities in the respective 7 × 7 image pixel sub-area for each pixel are addressed at 50 from the image store 25 for the intensity comparison. The equivalent smaller sub-area previously mentioned is similarly employed for near-edge pixels. The sub-area or local mean <I(k)>A and variance Var[I(k)] of the sub-area pixel intensities are calculated at 51 for each pixel number k on the basis of its respective sub-area or local pixel intensities.
To suppress the clutter background, any pixel intensity I(k) not greater than a threshold of five times its respective mean local sub-area intensity, ie 5<I(k)>A, is treated as clutter. It is assigned an intensity weight value W(k) equal to the respective local mean background intensity.
ie

W(k) = <I(k)>B,    I(k) ≤ 5<I(k)>A    (4)

where <I(k)>B is the mean intensity of those pixels in the respective sub-area which do not exceed the local threshold 5<I(k)>A. The threshold is calculated at 53 from the sub-area mean <I(k)>A generated at 51.
Sub-area pixel intensities addressed at 50 are compared with the threshold by the threshold detector 54. All pixel intensities not greater than the threshold are employed at 55 to generate the local mean background intensity <I(k)>B.
Any pixel intensity I(k) which exceeds its respective local threshold is assigned an intensity-dependent weight value W(k). This value is equal to the respective background mean value <I(k)>B plus a contrast term varying in accordance with the relative prominence of the respective pixel intensity.
ie

W(k) = <I(k)>B + α(k)(I(k) - <I(k)>B)    (5)

where

α(k) = (ρ - 1)/(ρ + 1),  with  ρ = Var[I(k)]/<I(k)>A²    (6)

The threshold detector 54 supplies a control signal to the unit 52 calculating α(k) from the respective local mean intensity. If the pixel intensity I(k) is not greater than the local threshold, the control signal is employed to set the output value of α(k) to zero irrespective of its calculated value. Otherwise, α(k) is calculated as indicated in Equation (6). The intensity weight value W(k) is then calculated from Equation (5) as indicated at 56.
This procedure is repeated until all pixels have been assigned a respective weight value, and the resulting set of values constitutes the intensity weight function W.
It should be noted that the foregoing weight generation procedure automatically deals adaptively with background which is not constant. Each pixel weight W(k) is calculated from its respective sub-area, and the background term <I(k)>B in Equation (5) accordingly varies from pixel to pixel.
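The weight generation of Figure 2 and Equations (4) to (6) can be sketched as follows. This is a minimal illustration, not the patent's implementation: in particular the contrast coefficient follows the reconstruction of Equation (6) given above (α = (ρ - 1)/(ρ + 1) with ρ the local relative variance), which should be treated as an assumption.

    # Sketch of adaptive weight generation over up to 7 x 7 sub-areas
    # (Equations (4)-(6)); alpha follows the Equation (6) form given above.
    import numpy as np

    def intensity_weight(I, half=3, factor=5.0):
        # I: 2-D array of pixel intensities (P^2 + Q^2); returns W
        ny, nx = I.shape
        W = np.empty_like(I, dtype=float)
        for y in range(ny):
            for x in range(nx):
                # sub-area truncated at display edges, as in the text
                sub = I[max(0, y - half):y + half + 1, max(0, x - half):x + half + 1]
                thresh = factor * sub.mean()          # adaptive threshold 5<I(k)>A
                back = sub[sub <= thresh]             # pixels classed as background
                mean_b = back.mean() if back.size else sub.mean()
                if I[y, x] <= thresh:
                    W[y, x] = mean_b                  # Equation (4)
                else:
                    rho = sub.var() / sub.mean()**2   # local relative variance
                    alpha = (rho - 1.0) / (rho + 1.0)
                    W[y, x] = mean_b + alpha * (I[y, x] - mean_b)  # Equation (5)
        return W

Because each weight is formed from its own sub-area statistics, a slowly varying background is handled automatically, as noted above.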
Turning now to the process of target or object reconstruction by singular value decomposition, let ψ be an orthonormal set of functions in image space into which an image may be decomposed. Then

ψi†ψj = 1, i = j;  ψi†ψj = 0, i ≠ j    (7)

where ψi† is the Hermitian conjugate of ψi.
Let the object states be described by a set of weighted functions ξ, which functions are equal to the product of an amplitude weighting function w and unweighted functions φ. The amplitude weighting function w is related to the intensity weighting function W previously defined by the expression:-

W = |w|²    (8)

Since W is real,

w = √W    (9)

Then

ξj = wφj    (10)

Let T be the impulse response of the imaging system, ie the image the system generates of an object having the dimensional properties of a delta function. For a lens, this would be the image of a geometrical point source, the optical transfer function having two spatial dimensions. Reference 1 gives impulse response functions for square and circular lenses. A radar system impulse response has one temporal (range) dimension if fixed, and one temporal and one or two spatial dimensions if scanned. A rotating radar scanner has an angular spatial dimension and a synthetic aperture radar a linear spatial dimension. Impulse responses of this kind may be calculated and/or measured in individual imaging systems by known techniques.
Necessarily, the object space functions must be imaged into image space functions by the imaging system transformation or impulse response T. Accordingly, from Equation (10):-

ψj = Tξj = Twφj    (11.1)

and

ψj† = ξj†T† = φj†w†T†    (11.2)

Combining Equations (11.1), (11.2) and (7):

φi†w†T†Twφj = δij    (12)

The expression w†T†Tw in Equation (12) is an operator having eigenstates or eigenfunctions which are the unweighted object space function set φ, and eigenvalues which may be denoted by λi. Solving Equation (12) for φj gives the eigenfunction equation:

w†T†Twφj = λjφj    (13)

Equation (13) determines the unweighted object space function set φ as eigenstates of the object space operator w†T†Tw. The φ function set can accordingly be calculated by the computer 31 from the amplitude weight function w and the impulse response T of the imaging system. T is known, and w is derived from Equation (9) at 32. Combining Equations (12) and (13):-

φi†φj = δij/λj    (14)

Equation (14) demonstrates that the function set φ is an orthogonal set, and that the functions ψ and φ are uniquely related orthogonal sets, the relationship being defined by the normalisation coefficient or energy term λj. If desired, the function set φ could be normalised by multiplying by √λi to produce an orthonormal set φ̂, where φ̂i = √λi φi. This is not however essential.

To obtain the image space function set, a further eigenfunction equation is set up by substituting ψj = Twφj from Equation (11) in the left hand side of Equation (13):-

ie w†T†ψj = λjφj    (15)

Multiplying both sides of Equation (15) by Tw:

Tww†T†ψj = λjTwφj    (16)

Substituting ψj = Twφj in the right hand side of Equation (16):-

Tww†T†ψj = λjψj    (17)

Equation (17) determines the image space function set ψ as eigenstates of the image space operator Tww†T†, the eigenvalues λj being identical to those of the object space. The function set ψ and the eigenvalue set λ can accordingly be calculated by the computer 31 from w and T as for Equation (13).
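Since Equations (13) and (17) share their eigenvalues, one convenient numerical route suggests itself (a sketch under the assumption that T is available as a matrix and w as a vector of pixel weights): both eigenproblems are solved together by a singular value decomposition of the product Tw, the eigenvalues being the squared singular values.

    # Sketch: obtain psi, phi and lambda of Equations (13)/(17) from one SVD.
    import numpy as np

    def singular_function_sets(T, w):
        # T: (m, n) impulse response matrix; w: (n,) amplitude weights, w = sqrt(W)
        Tw = T @ np.diag(w)
        psi, s, Vh = np.linalg.svd(Tw, full_matrices=False)
        lam = s**2              # common eigenvalues of Equations (13) and (17)
        phi = Vh.conj().T / s   # scaled so phi_i^H phi_j = delta_ij / lambda_j
        return psi, phi, lam

With this scaling, psi is orthonormal as Equation (7) requires and phi satisfies the Equation (14) normalisation; zero singular values would need excluding before the division.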
Complex image data is represented by a set g having individual pixel values g(k), where k represents pixel number as before. Decomposition of the set g into the function set ψ is defined by:

ĝi = Σk ψi†(k)g(k)    (18)

ie the proportion or fraction ĝi of the image data set g present in the ith image space singular function ψi is the summation over all k of the product of the kth point value of ψi† and the kth value of g. This calculation is carried out for the whole of the image space function set ψ, ie i = 1 to M, so that image data is decomposed into a series of numerical coefficients of the kind ĝi, each multiplying a respective function ψi. This is analogous to decomposition of a signal into its Fourier components.
If i suffixes replace j suffixes in Equation (11.1) and both sides are multiplied by (1/λi)ww†T†, then

(1/λi)ww†T†ψi = (1/λi)ww†T†Twφi    (19)

Substituting w†T†Twφi = λiφi (from Equation (13) with suffix change) in the right hand side of Equation (19) and putting ξi = wφi from Equation (10):

(1/λi)ww†T†ψi = (1/λi)wλiφi = wφi = ξi    (20)

Equation (20) demonstrates that the ith weighted object space function ξi is precisely equivalent to the term (1/λi)ww†T†ψi. Moreover, a reconstruction fr of an object f from a decomposition in terms of the function set ξ is defined mathematically by:

fr = Σi (1/λi)ww†T†ψi ĝi    (21)

where ĝi = ψi†g = that fraction of the image appearing in the ith function ψi, which was determined in image decomposition using Equation (18). Combining Equations (20) and (21):-

fr = Σi ξi ĝi    (22)

Equation (22) demonstrates that object reconstruction is achieved by multiplying the ith weighted object space singular function ξi by the ith coefficient ĝi of, or fraction of the image in, the corresponding ith image space singular function ψi, and then summing the result over all i eigenstates. Individual complex reconstructed object amplitudes or P and Q values are given by fr(k) for the kth pixel, where:

fr(k) = Σi ξi(k)ĝi    (23)

ie the kth pixel complex amplitude is the sum over i of the k-point values of the term ξiĝi.

The reconstruction expressed by Equations (22) and (23) is valid provided that total noise N introduced by the imaging system is negligible compared to clutter background intensity, and provided that Nyquist image sampling is employed. This is in fact the case for all radars viewing landscape scenes consisting of targets in a clutter background. However, imaging systems may be employed in situations where noise significantly affects image data. Moreover, the image energy λi contributed by the ith image space singular function ψi falls with increasing i, ie higher order functions contribute less image energy. If at some i, λi falls to equality with or below a corresponding fraction Ni or contribution of total system noise energy N, then both that and higher order terms should be omitted from the decomposition/reconstruction process. For white noise, the fraction of system noise Ni contributed by the ith function ψi is a constant for all i, and is equal to N/M, where M is the total number of singular functions in set ψ or φ. Accordingly, the summation in Equations (22) and (23) is terminated at imax, where imax is the highest order term for which λi > N/M holds good. The reconstruction Equation (23) may then be written more fully as:-

fr(k) = Σ(i = 1 to imax) ξi(k)ĝi    (24)

It is observed that truncation of the fr(k) summation does not greatly affect resolution. Inclusion of a term incorporating a significant proportion of noise may however severely affect reconstruction. The effect of each term is inversely proportional to the respective λi, so terms with small and noise-affected values of λi may produce disproportionately deleterious results.
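A compact sketch of Equations (18) and (22) to (24) follows, using the sets returned by the earlier singular_function_sets sketch; the noise argument N (total system noise energy) and the grid assumptions are illustrative rather than taken from the patent.

    # Sketch of image decomposition and truncated object reconstruction.
    import numpy as np

    def reconstruct_object(g, psi, phi, lam, w, N=0.0):
        # g: complex image data as a vector of pixel values g(k)
        xi = w[:, None] * phi              # weighted object functions, Equation (10)
        g_hat = psi.conj().T @ g           # coefficients g_i, Equation (18)
        keep = lam > N / lam.size          # retain terms with lambda_i > N/M
        return xi[:, keep] @ g_hat[keep]   # f_r(k), Equation (24)

With N = 0 every term is retained, reproducing Equation (23); as noted above, the reconstruction merely degrades gracefully if noise-corrupted terms are kept.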
It is however a major advantage of the present invention that noise is not so important a consideration as in References 1 and 2. In these References, if truncation is not carried out, the reconstructed object changes dramatically and spuriously as soon as a noise-corrupted term is added. In the present invention, noise-corrupted image data are treated and processed as background clutter, and both have Gaussian statistics. Accordingly, the effect of retaining noise-corrupted terms is merely to include noise "clutter" with background clutter. This only worsens the contrast between identified targets and background. In other words, the reconstruction degrades gracefully with increasing noise in the present invention, and reconstruction truncation is advantageous but not essential.
To summarise the computation, the weight function generator 30 calculates an intensity weight function W from image intensity statistics. The computer 31 calculates the object and image space function sets φ and ψ at 32 from the eigenvalue Equations (13) and (17) incorporating the known system impulse response T stored at 33 and the amplitude weight function w derived from W. It then computes the weighted object space function set ξ from the Equation (10) definition. The image data set g is then decomposed at 34 into a linear combination of the function set ψ using Equation (18). This yields coefficients ĝi which are precisely the same as those appearing in the object decomposition in terms of the function set ξ. Each value of λi is then compared at 35 with the fraction N/M of imaging system noise, and all functions and coefficients for which the corresponding λi ≤ N/M are discarded. The computer 31 multiplies each remaining function ξi by the respective coefficient ĝi, producing the object decomposition in terms of ξ at 36, and then at 37 computes the contribution ĝiξi(k) to the complex amplitude of pixel number k. The contributions ĝiξi(k) are then summed over all i at each pixel in turn to produce the required reconstructed object data set, ie a complex data value or P and Q for each pixel. This is analogous to reconstructing a signal from its Fourier spectral components by adding together the contributions from each component to the corresponding points of the signal. After envelope detection at 38 to produce amplitude moduli (P² + Q²)½, the reconstructed object data pass to the object store 39 for display at 40.
The foregoing computation produces a single stage or iteration of resolution enhancement, as illustrated in Figure 4 in which image data in graph 72 is enhanced to a reconstruction shown in graph 74. As previously outlined, the computation is iterated by means of the iteration controller 47 to obtain any significant further enhancement as follows. The difference calculator 64 receives a respective stream of pixel amplitude modulus values (P² + Q²)½ from each of the envelope detectors 62 and 63. These streams correspond to the first reconstruction and original image (first cycle) or to successive reconstructions (later cycles). If the complex amplitude of the kth pixel after the nth iteration cycle is defined as fr^n(k), then current object store 60 holds all fr^n(k) and preceding object store 61 all fr^(n-1)(k). For n = 1, fr^0(k) is the original image information received from frame store 22. The difference calculator 64 receives modulus values |fr^n(k)| and |fr^(n-1)(k)| from envelope detectors 62 and 63. It computes the difference between successive intensities of each pixel, then squares and sums the difference over all pixels. The result is divided by the squared sum of reconstructed pixel intensities to produce a ratio R. This is expressed by:-

R = Σk [ |fr^n(k)|² - |fr^(n-1)(k)|² ]² / Σk [ |fr^n(k)|² ]²    (25)

If R is greater than 10⁻⁴, the nth iteration has produced an overall intensity variation of more than 1%. A further iteration is carried out as previously indicated, using the fr^n(k) complex amplitude values as input via the data select unit 23. If R is less than 10⁻⁴, iteration is terminated. A different criterion or R value for iteration termination could of course be chosen.
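Gathering the pieces, the iteration of Figure 3 might look like the following sketch, which reuses the intensity_weight, singular_function_sets and reconstruct_object sketches above and assumes that the object and image grids coincide (as for the 33 × 33 display); it is an illustration, not the patent's hardware.

    # Sketch of the iterated enhancement loop with the Equation (25) test.
    import numpy as np

    def enhance(g, T, shape, N=0.0, tol=1e-4, max_iter=10):
        f = g.copy()                                     # cycle 0: original image data
        for _ in range(max_iter):
            W = intensity_weight(np.abs(f.reshape(shape))**2)
            w = np.sqrt(W).ravel()                       # Equation (9)
            psi, phi, lam = singular_function_sets(T, w)
            f_new = reconstruct_object(g, psi, phi, lam, w, N)  # original g each cycle
            R = (((np.abs(f_new)**2 - np.abs(f)**2)**2).sum()
                 / (np.abs(f_new)**4).sum())             # ratio R, Equation (25)
            f = f_new
            if R < tol:                                  # under ~1% overall change
                break
        return f

Note that the original image data g is decomposed afresh on every cycle; only the weight function, and hence the singular functions, are updated from the latest reconstruction.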
The apparatus illustrated schematically in Figures 1, 2 and 3 has been described in functional terms. However, implementation of the apparatus is a straightforward matter for those skilled in the art of digital electronics and computer programming. There are in fact various options for implementing the invention.
The procedures of weight generation and iteration control could be carried out employing a computer of sufficient capacity to execute these as well as singular function decomposition. This corresponds to modifying Figure 1 so that the computer 31 includes the weight function generator 30 and iteration controller 47. This may however be comparatively slow in operation. The generator 30 would preferably be implemented as a dedicated arithmetic unit arranged to execute the required calculations. This requires an address unit 50 to address pixel sub-area intensities, together with an arithmetic unit or arrangement of full adders at 51 to perform the necessary repeated addition/subtraction for multiplication/division to generate <I(k)>A and α(k). The adaptive threshold setting at 53 is performed by a simple multiplier to provide 5<I(k)>A. Threshold detector 54 contains a comparator to compare I(k) and 5<I(k)>A. Values of I(k) not above 5<I(k)>A are routed to 55 for addition by a full adder, and means for counting their total number is provided. <I(k)>B is simply their sum divided by their number, and division is performed by the same or a further full adder arranged for repeated twos complement addition, ie well-known digital division. Weight function generation at 56 requires a full adder for adding the twos complement of <I(k)>B to I(k). Subsequently, this or as convenient a further full adder performs the repeated addition necessary to evaluate α(k)(I(k) - <I(k)>B), and the sum (Equation (5)) <I(k)>B + α(k)(I(k) - <I(k)>B) is calculated to provide W(k). This could be executed rapidly with a dedicated arithmetic unit.
Similarly, the iteration controller 47 may be implemented as a dedicated hardware unit. The stores 60 and 61 together with envelope detectors 62 and 63 are well-known devices. The difference calculator 64 may be implemented with an appropriate full adder arrangement, and the threshold detector 65 would be a simple comparator. The choice of hardware or software implementation is accordingly a matter of engineering convenience and operating speed requirements to be resolved by those skilled in the art of digital electronics and computer software. In this respect the operational equivalence of electronic hardware and computer software is very well known. Similar considerations apply to stores 22, 25 and 39, envelope detectors 24 and 38 together with data select unit 23 as regards their location within or implementation apart from computing means.
Referring now to Figure 9, there is shown a schematic drawing of part of a laser ranging or lidar system.
The system comprises a continuous wave (cw) CO2 laser 180 producing a plane polarised output light beam along a path 181 to a first beam splitter 182. Light transmitted by the beam splitter 182 passes via a second beam splitter 183 to a CO2 laser amplifier 184 arranged to produce 10 nsec pulses at a kHz-order repetition frequency. A first lens 185 renders the amplifier output beam 186 parallel for illumination of a scattering object 187 in a remote scene (not shown). Light scattered from the object 187 is focussed by a second lens 188 on to two detectors 189 and 190. Detector 189 receives a reference light beam 191 from the laser 180 after reflection at the beam splitter 182 and at a partially reflecting mirror 192. In addition, the detector 189 receives light scattered from the object 187 after transmission through a partially reflecting mirror 193 and through the partially reflecting mirror 182 via a path 194. Detector 190 receives a reference beam 195 from reflection of laser light 181 at the beam splitter 183 with subsequent transmission via a π/2 or quarter wavelength delay device 196 and reflection at a partially reflecting mirror 197. Light scattered from the object 187 and reflected at the mirror passes via paths 198 and 199 to the detector 190 after reflection at a mirror 200 and transmission through the partially reflecting mirror 197. The delay device 196 may be a gas cell having an optical thickness appropriate to delay the beam 195 by (n + 1/4) wavelengths, n being integral but arbitrary. The gas pressure in the cell would be adjusted to produce the correct delay by known interferometric techniques; ie the device 196 would be placed in one arm of an interferometer and the gas pressure varied until fringe pattern movement indicated the correct delay.
The arrangement of Figure 9 operates as follows. The delay unit 196 introduces a π/2 phase shift in the reference beam 195 reaching detector 190 as compared to that reaching detector 189. Each of the detectors 189 and 190 mixes its reference beam 191 or 195 with light 194 or 199 from the scene, acting as a homodyne receiver. The laser 180 acts as its own local oscillator. In view of the π/2 phase difference between the reference beams 191 and 195, detector outputs with a relative phase difference of π/2 are produced at 201 and 202. These outputs accordingly provide in-phase and quadrature signals P and Q, or complex amplitude image data. These signals are precisely analogous to the P and Q signals appearing at the output of the signal processing unit 21 in Figure 1, and are processed in the same way as previously described to provide resolution enhancement.
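Forming complex image data from the two homodyne outputs is then trivial; the sketch below (function and variable names assumed, not from the patent) simply pairs the in-phase and quadrature samples.

    # The outputs at 201 and 202 differ in phase by pi/2, so they supply the
    # P and Q parts of each complex image sample, g(k) = P(k) + jQ(k).
    import numpy as np

    def complex_image(P, Q):
        return np.asarray(P) + 1j * np.asarray(Q)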
In an analogous fashion, a sonar system may be adapted for resolution enhancement in accordance with the invention, since P and Q signals are provided by sonar transducer array processors and may be analysed in the same way as radar or lidar signals. Moreover, a sonar transducer is both a transmitter and a receiver, so a transducer array provides equal numbers of transmitters and receivers for which the present invention is entirely suitable.
Whereas the foregoing description (with reference to Figure 1 in particular) has referred to calculation of object and image space singular functions from the weight function and system impulse response, in some cases this is capable of simplification. As indicated in Figure 4, graph 73, the weight function may consist of a constant background containing a main lobe of approximately Gaussian profile. Sets of Gaussian profiles of varying heights and widths may be stored, together with corresponding object and image space singular function sets. This is valid since the system impulse response is a constant, and the function sets vary only with the weight function. Accordingly, rather than calculating the singular functions during image analysis, they would be precalculated from the Gaussian profile weight functions and impulse response. Generation of singular functions then reduces to matching the measured weight function as nearly as possible to a Gaussian profile, and selecting corresponding stored singular functions. The number of possible approximate weight functions is limited, so that the storage of singular functions need not be impracticable. The weight function matching process may be achieved by well-known correlation techniques. This procedure should reduce computer time needed for image processing, but at the expense of increasing memory requirements.
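The matching step could be realised as below; this sketch is an assumption about one possible implementation, using a normalised correlation coefficient to pick the stored Gaussian profile nearest the measured weight function.

    # Sketch: select precalculated singular function sets by correlating the
    # measured weight function against stored Gaussian-profile weights.
    import numpy as np

    def select_stored_functions(W, library):
        # library: list of (profile, psi, phi, lam) tuples, precomputed from
        # Gaussian weight profiles and the fixed system impulse response
        def normalise(v):
            v = v.ravel() - v.mean()
            return v / np.linalg.norm(v)
        scores = [normalise(W) @ normalise(p) for p, _, _, _ in library]
        return library[int(np.argmax(scores))][1:]   # (psi, phi, lam) of best match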

Claims (9)

1. An imaging system having a given impulse response and including:
(1) an imaging device arranged to provide complex amplitude image data,
(2) means for generating from image data a weight function appropriate to distinguish weak and strong image features,
(3) means for reconstructing object data from a singular function decomposition of image data on the basis of singular functions derived from the weight function and system impulse response, and
(4) means for generating an image from reconstructed object data.
2. An imaging system according to Claim 1 wherein the means for reconstructing object data comprises computing means arranged to:
(1) provide image and object space singular functions from the weight function and system impulse response,
(2) decompose image data into a linear combination of image space singular functions,
(3) convert the image space singular function combination into a corresponding object decomposition, and
(4) reconstruct object data from its decomposition.
3. An imaging system according to Claim 2 wherein the computing means is arranged to omit noise-corrupted singular functions from the object reconstruction.
4. An imaging system according to Claim 2 or 3 wherein the computing means is arranged to compare generated weight functions with stored weight functions associated with corresponding singular functions for provision for image decomposition and object reconstruction.
5. An imaging system according to any preceding claim wherein the means for generating a weight function is arranged to assign each image pixel intensity a weight value derived from comparison with respective local pixel intensities.
6. An imaging system according to Claim 5 wherein the weight value comprises the sum of a local background intensity term and a contrast term.
7. An imaging system according to Claim 6 wherein the contrast term is non-zero provided that the relevant pixel intensity exceeds a given multiple of a corresponding average over local pixel intensities.
8. An imaging system according to any preceding claim including means for iterating object reconstruction, which means is arranged to be operative until resolution enhancement becomes insignificant.
9. An imaging system substantially as herein described with reference to and as illustrated in the accompanying drawings.
GB08511465A 1984-05-10 1985-05-07 Super resolution imaging system Expired GB2173663B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB8411916 1984-05-10

Publications (2)

Publication Number Publication Date
GB2173663A true GB2173663A (en) 1986-10-15
GB2173663B GB2173663B (en) 1987-07-29

Family

ID=10560723

Family Applications (1)

Application Number Title Priority Date Filing Date
GB08511465A Expired GB2173663B (en) 1984-05-10 1985-05-07 Super resolution imaging system

Country Status (5)

Country Link
US (1) US4716414A (en)
DE (1) DE3516745C2 (en)
FR (1) FR2694097B1 (en)
GB (1) GB2173663B (en)
IT (1) IT1242085B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0356432A1 (en) * 1988-02-22 1990-03-07 Eastman Kodak Company Digital image noise suppression method using svd block transform
GB2277219A (en) * 1993-03-24 1994-10-19 Loral Vought Systems Lidar signal processing
WO1997027500A1 (en) * 1996-01-26 1997-07-31 The Secretary Of State For Defence Radiation field analyzer

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5383457A (en) * 1987-04-20 1995-01-24 National Fertility Institute Method and apparatus for processing images
US5009143A (en) * 1987-04-22 1991-04-23 Knopp John V Eigenvector synthesizer
US4949312A (en) * 1988-04-20 1990-08-14 Olympus Optical Co., Ltd. Ultrasonic diagnostic apparatus and pulse compression apparatus for use therein
US4949313A (en) * 1988-04-20 1990-08-14 Olympus Optical Co., Ltd. Ultrasonic diagnostic apparatus and pulse compression apparatus for use therein
US4973111A (en) * 1988-09-14 1990-11-27 Case Western Reserve University Parametric image reconstruction using a high-resolution, high signal-to-noise technique
US4929951A (en) * 1988-12-23 1990-05-29 Hughes Aircraft Company Apparatus and method for transform space scanning imaging
US4973154A (en) * 1989-04-27 1990-11-27 Rockwell International Corporation Nonlinear optical ranging imager
US5297289A (en) * 1989-10-31 1994-03-22 Rockwell International Corporation System which cooperatively uses a systolic array processor and auxiliary processor for pixel signal enhancement
US5045860A (en) * 1990-06-27 1991-09-03 R & D Associates Method and arrangement for probabilistic determination of a target location
US5233541A (en) * 1990-08-10 1993-08-03 Kaman Aerospace Corporation Automatic target detection process
US5384573A (en) * 1990-10-29 1995-01-24 Essex Corporation Image synthesis using time sequential holography
US5668648A (en) * 1991-11-26 1997-09-16 Kabushiki Kaisha Toshiba Computer-assisted holographic display apparatus
JPH07502610A (en) * 1991-12-20 1995-03-16 エセックス コーポレーション Image synthesis using time-series holography
US5227801A (en) * 1992-06-26 1993-07-13 The United States Of America As Represented By The Secretary Of The Navy High resolution radar profiling using higher-order statistics
EP0610603B1 (en) * 1993-02-11 1999-09-08 Agfa-Gevaert N.V. Fast interactive off-line processing method for radiographic images
US6041135A (en) * 1993-06-28 2000-03-21 Buytaert; Tom Guido Fast interactive off-line processing method for radiographic images
US5644386A (en) * 1995-01-11 1997-07-01 Loral Vought Systems Corp. Visual recognition system for LADAR sensors
JP2877106B2 (en) * 1996-11-18 1999-03-31 日本電気株式会社 Along track interferometry SAR
DE19743884C2 (en) * 1997-10-04 2003-10-09 Claas Selbstfahr Erntemasch Device and method for the contactless detection of processing limits or corresponding guide variables
US5952957A (en) * 1998-05-01 1999-09-14 The United States Of America As Represented By The Secretary Of The Navy Wavelet transform of super-resolutions based on radar and infrared sensor fusion
IL133243A0 (en) 1999-03-30 2001-03-19 Univ Ramot A method and system for super resolution
US6704440B1 (en) 1999-06-24 2004-03-09 General Electric Company Method and apparatus for processing a medical image containing clinical and non-clinical regions
US7221782B1 (en) 1999-06-24 2007-05-22 General Electric Company Method and apparatus for determining a dynamic range of a digital medical image
US6460003B1 (en) 1999-07-01 2002-10-01 General Electric Company Apparatus and method for resolution calibration of radiographic images
US6633657B1 (en) 1999-07-15 2003-10-14 General Electric Company Method and apparatus for controlling a dynamic range of a digital diagnostic image
US6344893B1 (en) 2000-06-19 2002-02-05 Ramot University Authority For Applied Research And Industrial Development Ltd. Super-resolving imaging system
AU2002251830A1 (en) * 2001-01-26 2002-08-06 Colorado State University Research Foundation Analysis of gene expression and biological function using optical imaging
US8958654B1 (en) * 2001-04-25 2015-02-17 Lockheed Martin Corporation Method and apparatus for enhancing three-dimensional imagery data
US20040115683A1 (en) * 2002-01-28 2004-06-17 Medford June Iris Analysis of gene expression and biological function using optical imaging
US8184044B1 (en) * 2010-03-12 2012-05-22 The Boeing Company Super resolution radar image extraction procedure
US8184043B2 (en) * 2010-03-12 2012-05-22 The Boeing Company Super-resolution imaging radar
US8736484B2 (en) * 2010-08-11 2014-05-27 Lockheed Martin Corporation Enhanced-resolution phased array radar
US8659467B1 (en) 2010-08-26 2014-02-25 Lawrence Livermore National Security, Llc Zero source insertion technique to account for undersampling in GPR imaging
US8818124B1 (en) 2011-03-04 2014-08-26 Exelis, Inc. Methods, apparatus, and systems for super resolution of LIDAR data sets
US20140160476A1 (en) * 2012-12-07 2014-06-12 Massachusetts Institute Of Technology Method and Apparatus for Performing Spectral Classification
US9154698B2 (en) * 2013-06-19 2015-10-06 Qualcomm Technologies, Inc. System and method for single-frame based super resolution interpolation for digital cameras
WO2019093979A1 (en) * 2017-11-08 2019-05-16 Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi An image generation method
US11150349B2 (en) * 2018-08-16 2021-10-19 Wei Chen Multi-line, high-definition LiDAR device and method with integrated direct spatial reference
DE102019213904A1 (en) * 2019-09-12 2021-03-18 Carl Zeiss Smt Gmbh Method for detecting an object structure and device for carrying out the method
CN112698800B (en) * 2020-12-29 2022-09-30 卡莱特云科技股份有限公司 Method and device for recombining display sub-pictures and computer equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3611369A (en) * 1969-05-27 1971-10-05 Burroughs Corp Quantizer system with adaptive automatic clutter elimination
US3953822A (en) * 1973-10-15 1976-04-27 Rca Corporation Wave-energy imaging technique
US3942150A (en) * 1974-08-12 1976-03-02 The United States Of America As Represented By The Secretary Of The Navy Correction of spatial non-uniformities in sonar, radar, and holographic acoustic imaging systems
US4003311A (en) * 1975-08-13 1977-01-18 Bardin Karl D Gravure printing method
US4127873A (en) * 1977-05-20 1978-11-28 Rca Corporation Image resolution enhancement method and apparatus
US4290049A (en) * 1979-09-10 1981-09-15 Environmental Research Institute Of Michigan Dynamic data correction generator for an image analyzer system
JPH0128427B2 (en) * 1980-04-16 1989-06-02 Eastman Kodak Co
GB2113501B (en) * 1981-11-26 1985-06-05 Secr Defence Imaging system

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0356432A1 (en) * 1988-02-22 1990-03-07 Eastman Kodak Company Digital image noise suppression method using svd block transform
EP0356432A4 (en) * 1988-02-22 1990-12-12 Brandeau, Edward P. Digital image noise suppression method using svd block transform
GB2277219A (en) * 1993-03-24 1994-10-19 Loral Vought Systems Lidar signal processing
GB2277219B (en) * 1993-03-24 1997-06-25 Loral Vought Systems System for processing reflected energy signals
WO1997027500A1 (en) * 1996-01-26 1997-07-31 The Secretary Of State For Defence Radiation field analyzer
GB2323990A (en) * 1996-01-26 1998-10-07 Secr Defence Radiation field analyzer
GB2323990B (en) * 1996-01-26 2000-08-09 Secr Defence Radiation field analyzer

Also Published As

Publication number Publication date
FR2694097B1 (en) 1995-12-22
DE3516745C2 (en) 2000-08-17
IT8548043A0 (en) 1985-05-06
GB2173663B (en) 1987-07-29
US4716414A (en) 1987-12-29
FR2694097A1 (en) 1994-01-28
DE3516745A1 (en) 1995-10-05
IT1242085B (en) 1994-02-16

Similar Documents

Publication Publication Date Title
US4716414A (en) Super resolution imaging system
US5734347A (en) Digital holographic radar
JP4917206B2 (en) SAR radar system
US5227801A (en) High resolution radar profiling using higher-order statistics
EP0395863B1 (en) Aperture synthesized radiometer using digital beamforming techniques
CA1286022C (en) Processing parameter generator for synthetic aperture radar
US8184044B1 (en) Super resolution radar image extraction procedure
Geibig et al. Compact 3D imaging radar based on FMCW driven frequency-scanning antennas
Chan et al. Frequency swept tomographic imaging of three-dimensional perfectly conducting objects
US8184043B2 (en) Super-resolution imaging radar
JPH0531112B2 (en)
US5943006A (en) RF image reconstruction and super resolution using fourier transform techniques
US4385301A (en) Determining the location of emitters of electromagnetic radiation
Ma et al. Target imaging based on ℓ1ℓ0 norms homotopy sparse signal recovery and distributed MIMO antennas
Bocker et al. New inverse synthetic aperture radar algorithm for translational motion compensation
Vu et al. A comparison between fast factorized backprojection and frequency-domain algorithms in UWB low frequency SAR
Kasilingam et al. Models for synthetic aperture radar imaging of the ocean: A comparison
US7112775B2 (en) Coherent imaging that utilizes orthogonal transverse mode diversity
GB2168870A (en) Imaging system
Ouchi et al. Statistical analysis of azimuth streaks observed in digitally processed CASSIE imagery of the sea surface
US7876256B2 (en) Antenna back-lobe rejection
Ramakrishnan et al. Synthetic aperture radar imaging using spectral estimation techniques
JP2667820B2 (en) Apparatus for detecting an object having predetermined and known properties with respect to a background
Sedwick et al. Performance analysis for an interferometric space-based GMTI radar system
Attia Data-adaptive motion compensation for synthetic aperture LADAR

Legal Events

Date Code Title Description
727 Application made for amendment of specification (sect. 27/1977)
727A Application for amendment of specification now open to opposition (sect. 27/1977)
727B Case decided by the comptroller ** specification amended (sect. 27/1977)
SP Amendment (slips) printed
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20040507