GB2187356A - Image-data reduction technique - Google Patents

Image-data reduction technique

Info

Publication number
GB2187356A
Authority
GB
United Kingdom
Prior art keywords
pixels
group
camera
band
spatial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB8604864A
Other versions
GB2187356B (en)
GB8604864D0 (en)
Inventor
Charles Hammond Anderson
Curtis Raymond Carlson
Current Assignee
RCA Corp
Original Assignee
RCA Corp
Priority date
Filing date
Publication date
Application filed by RCA Corp filed Critical RCA Corp
Publication of GB8604864D0
Publication of GB2187356A
Application granted
Publication of GB2187356B
Anticipated expiration
Status: Expired - Lifetime


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/001Model-based coding, e.g. wire frame

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A foveated electronic camera 100, i.e. one giving a high-resolution area and a low-resolution area, like the human eye, employs a data reduction means 108 including a spatial-frequency spectrum analyzer 202 and window gates 204 defining selectable spatial windows. The high-resolution, wide-field-of-view signal from imager 106 is analyzed by spectrum analyzer 202 to give signals from contiguous subspectra bands of the image spatial frequency spectrum, in particular from adjacent octaves of the spatial spectrum. The subspectra pass through window gates 204 to reduce the field of view of the higher-resolution subspectra, preferably so that the number of pixels in each subspectrum is the same (Fig. 5). The position of the high-resolution subspectrum within the field of view is selectable. The spectrum analyzer preferably employs known 'Burt Pyramid' or 'Filter-Subtract-Decimate Pyramid' filter techniques. The camera finds application in surveillance, robotic and target-tracking systems where computer handling of large amounts of high-resolution data is not practical.

Description

SPECIFICATION
Image-data reduction technique
This invention relates to a technique for reducing image data and, more particularly, to an electronic camera which may be operated automatically and/or semi-automatically.
Complex automatically controlled systems (such as surveillance television cameras, robotic systems, target tracking systems, etc.) often require the signal processing of visual-information image samples. The total number of image picture elements (pixels) to be processed depends both on the size of the field of view of the image and the spatial resolution of the image. In order to provide a high degree of spatial resolution over all of a large field of view, an extremely large number of image pixels is required. However, it is not practical to process such a large number of image pixels.
One way of overcoming this problem (employed by the human eye) is to provide a relatively high spatial resolution in one region of the field of view of the imager (the centrally-located fovea of the retina of the eye) and a relatively low spatial resolution in another region of the field of view of the imager (the periphery of the retina of the eye), together with the controlled movement of the imager to bring the spatial portion of an image originally within a low-resolution region of the imager into the high-resolution region of the imager. Thus, a person may move his eye and his head to observe with high resolution in the fovea an image of an object which was originally observed with low resolution near the edge of his visual field.
The purpose of the present invention is also to greatly reduce the number of image pixels to be processed, while retaining the ability to observe with high spatial resolution objects whose images may originally fall anywhere within a relatively wide field of view, most of which has only a low-resolution capability. However, the image-reduction technique employed by the present invention is substantially different from that employed by the human eye.
In accordance with the principles of the present invention, the spatial frequency spectrum of an image represented by an input video signal is analyzed to derive a plural number of separate output video signals that together represent an ordinally-arranged group of contiguous subspectra bands of the image spatial frequency spectrum. The image represented by the input video signal is a relatively high-resolution, wide-field-of-view image that is comprised of a first given number of pixels.
However, only the first band of the ordinally-arranged group of the separate output video signals exhibits the relatively high resolution of the image represented by the input video signal. Further, only this first band is comprised of the first given number of pixels. Each other band of the group of output video signals exhibits a lower resolution and a smaller number of pixels than its immediately preceding band of the group. Thus, the last band of the group is comprised of a second given number of pixels which is the lowest number of pixels in any of the bands of the group.
The size of the field of view represented by at least one band of the group is reduced by passing a spatially-localized subset of pixels of that one band through a spatial window, which is preferably movable. The subset is comprised of no greater number of pixels than the second given number.
In the Drawing: Figure 1 is a functional block diagram of a system that employs a foveated automatic electronic camera incorporating an image-data reduction means of the present invention; Figure 2 is a functional block diagram of a first illustrative embodiment of the image-reduction means of Fig. 1; Figure 2a is a functional block diagram of a second illustrative embodiment of the imagereduction means of Fig. 1; Figure 3 is a functional block diagram of a first preferred embodiment of the spatial frequency spectrum analyzer of Fig. 2; Figure 3a is a functional block diagram of a second preferred embodiment of the spatial frequency spectrum analyzer of Fig. 2; Figure 4 diagramatically illustrates the operation of the spatially movable windows of the present invention; and Figure 5 diagramatically illustrates the relative resolution and field of view of each of respective subspectra band images derived at the output of Fig. 2.
Referring to Fig. 1, there is shown a system that includes as essential components a foveated automatic electronic camera 100 and a computer 102. The system of Fig. 1 may also include optional operator station 104. Camera 100 includes as essential components high-resolution, wide-field-of-view imager means 106 and image-data reduction means 108. Imager means 106 and data reduction means 108 may be integrated in a single housing of camera 100 (as shown in Fig. 1) or, alternatively, they may comprise separate modular components of camera 100.
Imager means 106 is comprised of a monochrome or color television camera for viewing objects situated within a relatively wide-field-of-view region of space and deriving therefrom a video signal which is applied as an input to data reduction means 108. This video signal represents in real time all the pixels of each of relatively high-resolution successive image frames derived by imager means 106. For instance, each two-dimensional image frame from imager means 106 may be comprised of 512 x 512 (262,144) pixels. The successive image frames may occur at a rate of 30 frames per second. In this case, a serial stream of pixels is applied to data reduction means 108 at a rate of nearly eight million pixels per second by the video signal output of imager means 106. However, in some cases it is desirable in a robotic system or in an automatic surveillance camera system to provide a resolution greater than 512 x 512 pixels per image frame and/or to provide a frame rate of more than thirty frames per second (thereby increasing the pixel rate of the video signal applied to data reduction means 108 beyond eight million pixels per second).
Image analysis by a computer normally requires that an image pixel be in digital (rather than analog) form. In order to provide a sufficiently high-resolution gray scale, it is usual to digitize each of the image pixel levels at eight bits per pixel. Thus, in the absence of image data reduction, it would be necessary, in a real-time environment, for the computer to process in real time 60 million bits per second or more. Very few image-analyzing computers can operate at this rate and those that do are very expensive.
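The rates discussed above follow directly from the frame size, frame rate and bit depth; the following sketch simply multiplies out those figures (illustrative arithmetic only, not part of the specification):

```python
# A 512 x 512 frame at 30 frames per second, digitized at 8 bits per pixel.
frame_pixels = 512 * 512            # 262,144 pixels per frame
pixel_rate = frame_pixels * 30      # pixels per second
bit_rate = pixel_rate * 8           # bits per second

print(frame_pixels)  # 262144
print(pixel_rate)    # 7864320 -- nearly eight million pixels per second
print(bit_rate)      # 62914560 -- on the order of 60 million bits per second
```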
Data reduction means 108, to which the present invention is primarily directed, greatly reduces the amount of data that must be handled by computer 102, without sacrificing either the high-resolution or wide field of view capability of imager means 106.
The reduced image data output from data reduction means 108 (which constitutes the output from the foveated automatic electronic camera 100) is applied as an input to computer 102. Computer 102 analyzes the reduced image data applied thereto in accordance with its programming. Its programming, of course, depends on the particular purpose of the system shown in Fig. 1. For instance, in the case of a surveillance system, computer 102 may be programmed to recognize significant changes in the scene being viewed by imager means 106, such as the presence of moving objects, objects having one or more particular shapes, etc. Computer 102 may include an output to some utilization means (not shown), such as an alarm in the case of a surveillance system. Another example of the system shown in Fig. 1 is a robotic system.
In this case, computer 102 is programmed to provide a desired "eye-hand" coordination between a mechanical-hand utilization means and imager means 106. More specifically, computer 102 applies certain command signals as an output to a mechanical hand in accordance with both information contained in the reduced data applied thereto from imager means 106 and in feedback signals received from the mechanical-hand utilization means.
Imager means 106 of camera 100 (depending on its use) may be either stationary or movable. For instance, in the case of a robotic system, it would usually be desirable to provide moving means 110 for imager means that is controlled by an output from computer 102 in accordance with object information contained in the reduced image data input applied thereto regarding the region of space then within the field of view of imager means 106. In this case, moving means 110 returns feedback signals to computer 102 for indicating the actual position of imager means 106.
As so far discussed, the combination of camera 100 and computer 102 provides a totally automated system (that is, no human operator is required). However, if desired, the system of Fig. 1 may include optional operator station 104. As indicated in Fig. 1, station 104 is comprised of display 112 and manual control 114. Display 112 permits the operator to view image information derived by computer 102 and manual control 114 permits the operator to transmit manual command signals to computer 102. By way of example, the purpose of these manual command signals may be selecting the image information to be displayed on display 112, and/or manually controlling the respective outputs from computer 102 to any or all of data reduction means 108, moving means 110 or the utilization means (not shown).
Referring to Fig. 2, there is shown a first embodiment of data reduction means 108 which incorporates the principles of the present invention. A video signal input to data reduction means 108 from imager means 106 (which may be a sampled signal from a solid-state imager such as a CCD imager or, alternatively, a continuous signal from a television-tube imager) is applied to an analog-to-digital (A/D) converter 200, which converts the level of each successively-occurring pixel of the video signal into a multibit (e.g., eight-bit) digital number. Each successive two-dimensional image frame represented by the video signal is comprised of Px pixels in the horizontal direction and Py pixels in the vertical direction.
Because imager means 106 is a high resolution imager, the value of each of Px and Py is relatively large (e.g., 512). The video signal itself is a temporal signal derived by scanning, during each frame period, the two-dimensional spatial image then being viewed by the imager of imager means 106. The digital output from A/D 200 is applied as an input to spatial frequency spectrum analyzer 202. Alternative embodiments of spectrum analyzer 202 are shown in Figs. 3 and 3a, discussed in some detail below.
Spatial frequency spectrum analyzer 202, in response to the digitized video signal representing each successive image frame applied as an input thereto, derives an ordinally-arranged set of N+1 (where N is a plural integer) separate video output signals L0 ... L(N-1) and GN. The respective video output signals L0 ... L(N-1) and GN comprise contiguous subspectra bands of the spatial frequency spectrum of the image defined by the pixels of each successive image frame of the digitized input video signal to analyzer 202. Each of video output signals L0 ... L(N-1) defines a bandpass band of the spatial frequency spectrum of the image, with L0 defining the highest spatial frequency bandpass band and L(N-1) defining the lowest spatial frequency bandpass band of the image spectrum. GN defines a low-pass remnant band that includes all spatial frequencies of the spatial frequency spectrum of the image which are below those of the L(N-1) bandpass band. Preferably, each of the respective bandpass bands L0 ... L(N-1) has a bandwidth, in each of the two spatial dimensions of the image, of one octave (i.e., if the highest spatial frequency to be analyzed by spectrum analyzer 202 in any dimension is f0, the L0 bandpass band in that dimension has a center frequency of 3f0/4, the L1 bandpass band in that dimension has a center frequency of 3f0/8, the L2 bandpass band in that dimension has a center frequency of 3f0/16, etc.). Thus, the first band L0 of the group of output video signals exhibits the same relatively high spatial resolution as does the input video signal to spectrum analyzer 202. Further, this first band L0 of the group is comprised of the same number (Px·Py) of pixels per frame as is the input video signal to analyzer 202. However, each of the other bands of the group exhibits a lower spatial resolution and a smaller number of pixels than its immediately preceding band of the group.
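The stated pattern of one-octave center frequencies (3f0/4, 3f0/8, 3f0/16, ...) can be written as a one-line formula; the function name below is illustrative, not taken from the specification:

```python
def band_center_frequency(f0, k):
    # Center frequency, in one dimension, of the one-octave bandpass band Lk,
    # where f0 is the highest spatial frequency analyzed in that dimension.
    return 3.0 * f0 / (2 ** (k + 2))

print([band_center_frequency(1.0, k) for k in range(3)])  # [0.75, 0.375, 0.1875]
```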
Thus, the last band GN (the remnant band) of the group is comprised of a second given number of pixels (P'x·P'y) which is the lowest number of pixels in any of the bands of the group.
In Fig. 2, each of the bandpass bands L0 ... L(N-1) is applied as a signal input to a corresponding one of window gates 204-0 ... 204-(N-1). Each of gates 204-0 ... 204-(N-1) also has an individual window center control signal from computer 102 applied as a control input thereto, as indicated in Fig. 2. Each of the respective gates 204-0 ... 204-(N-1) permits a localized two-dimensional spatial portion comprised of P'x·P'y pixels of each frame to be passed therethrough as the respective output L'0 ... L'(N-1) of that gate. Each gate, therefore, operates as a spatial window for this passed-through localized two-dimensional spatial portion. The window center control signal applied to each of gates 204-0 ... 204-(N-1) determines the relative position of this localized spatial portion of each frame. In Fig. 2, the respective outputs L'0 ... L'(N-1) from gates 204-0 ... 204-(N-1), along with the GN output from analyzer 202, are applied to computer 102 either directly or, alternatively, through a multiplexer or other data communication means (not shown).
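The window-gate operation just described amounts to cropping a fixed-size, movable patch out of one band. A minimal numpy sketch, assuming a clamping policy at the frame edges (the function name and clamping behavior are illustrative, not from the specification):

```python
import numpy as np

def window_gate(band, center, out_h, out_w):
    # Pass only a spatially-localized out_h x out_w subset of pixels of one
    # band, centered (as nearly as the frame edges allow) on `center`,
    # in the manner of window gates 204-0 ... 204-(N-1).
    h, w = band.shape
    cy, cx = center
    top = min(max(cy - out_h // 2, 0), h - out_h)    # clamp window inside frame
    left = min(max(cx - out_w // 2, 0), w - out_w)
    return band[top:top + out_h, left:left + out_w]

band = np.arange(64 * 64).reshape(64, 64)            # one 64 x 64 bandpass band
patch = window_gate(band, center=(10, 50), out_h=16, out_w=16)
print(patch.shape)  # (16, 16)
```

The window center control signal corresponds to the `center` argument; moving the window is just calling the gate with a new center.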
The Fig. 2 implementation of image data reduction means 108 of camera 100 incorporates the least structure required to provide computer 102 with image data that has been reduced in accordance with the principles of the present invention. In this case, computer 102 includes suitable memory means for storing, at least temporarily, the reduced image data applied from image data reduction means 108 and selection means for selectively operating on this stored reduced image data. However, in some cases, it is desirable to incorporate such memory means and selection means as part of data reduction means 108 of camera 100, rather than incorporating these means in computer 102. Fig. 2a illustrates this alternative embodiment of image data reduction means 108.
As indicated in Fig. 2a, the respective outputs L'0 ... L'(N-1) and GN from the embodiment of Fig. 2 are not forwarded out of camera 100 to computer 102. Instead, the alternative embodiment of data reduction means 108 shown in Fig. 2a further includes a group of memories 206-0 ... 206-N, each of which is individually associated with a corresponding one of respective outputs L'0 ... L'(N-1) and GN.
During each successive frame, the P'x·P'y pixels of each of the L'0 ... L'(N-1) and GN outputs (Fig. 2) are written into its corresponding one of memories 206-0 ... 206-N.
After a time delay provided by each of memories 206-0 ... 206-N, each of these memories is read out and the output signal therefrom is applied as a separate input to selector switch 208. Selector switch 208, in response to a switch control applied thereto from computer 102, selectively forwards the P'x·P'y stored pixels read out from any single one of the group of memories 206-0 ... 206-N as an output from data reduction means 108 of camera 100 to computer 102.
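The memory-and-selector arrangement can be sketched in outline: one frame store per band, with a switch control picking which single store is forwarded. Class and method names below are illustrative assumptions, not from the specification:

```python
import numpy as np

class ReducedDataOutput:
    """Sketch: one frame memory per band (cf. memories 206-0 ... 206-N)
    feeding a selector switch (cf. 208) driven by a switch control."""
    def __init__(self, n_bands, shape):
        self.memories = [np.zeros(shape) for _ in range(n_bands)]
    def write_frame(self, band_frames):
        for mem, frame in zip(self.memories, band_frames):
            mem[:] = frame                      # store this frame's windowed pixels
    def read(self, switch_control):
        return self.memories[switch_control]    # forward one memory's pixels

out = ReducedDataOutput(4, (6, 6))
out.write_frame([np.full((6, 6), float(k)) for k in range(4)])
print(out.read(2)[0, 0])  # 2.0
```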
Spatial frequency spectrum analyzer 202 of Fig. 2 can be simply comprised of a plurality of bandpass filters, each of which derives as an output a respective one of bandpass signals L0 ... L(N-1), and a lowpass filter for deriving the remnant signal GN. In some cases, lowpass filters may be substituted for one or more of the bandpass filters. However, it is preferred that analyzer 202 be structurally embodied in the manner shown either in Fig. 3 or, alternatively, in Fig. 3a. In this regard, reference is made to copending US Patent Application Serial No. 596,817, filed April 4, 1984, by Carlson et al. (corresponding to UK Patent Application 2143046) and assigned to the same assignee as the present application, which discloses in detail each of the alternative embodiments of the spatial frequency spectrum analyzer 202 shown in Figs. 3 and 3a.
More particularly, the embodiment shown in Fig. 3 is capable of implementing in real time a hierarchical pyramid signal processing algorithm developed by Dr. Peter J. Burt (and, therefore, is referred to as the "Burt Pyramid"). The embodiment shown in Fig. 3a is another type of real-time hierarchical pyramid signal processor, known as the "FSD (filter-subtract-decimate) Pyramid." As indicated in Fig. 3, the Burt Pyramid analyzer is comprised of a pipeline of generally similar sampled-signal translation stages 300-1, 300-2 ... 300-N. Each of the respective stages operates at a sample rate determined by the frequency of the digital clock signals CL1, CL2 ... CLN individually applied thereto. The frequency of the clock signal applied to any particular one of the stages is lower than the frequency of the clock applied to any stage that precedes it. Preferably, the frequency of each of the clocks of stages 300-2 ... 300-N is one-half of that of the clock of the immediately preceding stage. In the following description it will be assumed that this preferable relationship among the clock signals CL1 ... CLN is the case.
As indicated in Fig. 3, stage 300-1 is comprised of convolution filter and decimation means 302, delay means 304, subtraction means 306 and expansion and interpolation filter means 308. An input stream of digitized pixels G0, having a sample rate equal to the frequency of clock CL1, is applied through convolution filter and decimation means 302 to derive an output stream of pixels G1 at a sample rate equal to the frequency of clock CL2.
G0 is the digitized video signal input to analyzer 202. The convolution filter has a low-pass function that reduces the center frequency of each image dimension represented by G1 to one-half of the center frequency of the corresponding dimension represented by G0. At the same time, the decimation reduces the sample density in each dimension by one-half.
The respective pixels of G0 are applied through delay means 304 as a first input to subtraction means 306. At the same time, the reduced-density pixels of G1 are applied to expansion and interpolation filter 308, which increases the sample density of the G1 pixels back to that of G0. Then, the expanded-density interpolated G1 pixels are applied as a second input to subtraction means 306. The presence of delay means 304 ensures that each pair of samples G0 and G1 which correspond with one another in spatial position are applied to the first and second inputs of subtraction means 306 in time coincidence with one another. The output stream of successive samples L0 from subtraction means 306 defines the highest spatial frequency in each dimension of the scanned image.
The structure of each of stages 300-2 ... 300-N is essentially the same as that of stage 300-1. However, each of the higher ordinally-numbered stages 300-2 ... 300-N operates on lower spatial frequency signals occurring at lower sample densities than its immediately preceding stage. More specifically, the output stream of successive samples L1 represents the next-to-highest octave of spatial frequencies in each image dimension, etc., so that, as indicated in Fig. 3, the Burt Pyramid analyzed signal is comprised of respective octave sample streams L0 ... L(N-1) (derived respectively from the subtraction means of each of stages 300-1 ... 300-N) together with a low-frequency remnant signal GN (derived from the output of the convolution filter and decimation means of stage 300-N).
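The stage-by-stage analysis above can be sketched in one dimension. This is a minimal numpy sketch of a Burt Pyramid analyzer, assuming the common 5-tap low-pass kernel; the kernel weights and function names are illustrative assumptions, not values taken from this specification:

```python
import numpy as np

KERNEL = np.array([0.05, 0.25, 0.4, 0.25, 0.05])  # assumed 5-tap low-pass kernel

def reduce_(g):
    # Convolution filter and decimation (cf. means 302): low-pass, then 2:1 decimate.
    return np.convolve(g, KERNEL, mode="same")[::2]

def expand(g_small, n):
    # Expansion and interpolation filter (cf. means 308): zero-stuff back to
    # n samples, then interpolate (gain of 2 compensates the inserted zeros).
    up = np.zeros(n)
    up[::2] = g_small
    return 2.0 * np.convolve(up, KERNEL, mode="same")

def burt_analyze(g0, n_stages):
    # Derive bandpass bands L0 ... L(N-1) and the low-pass remnant GN from G0.
    bands, g = [], g0
    for _ in range(n_stages):
        g_next = reduce_(g)
        bands.append(g - expand(g_next, g.size))  # subtraction means (cf. 306)
        g = g_next
    return bands, g

g0 = np.random.rand(64)
bands, remnant = burt_analyze(g0, 3)
print([b.size for b in bands], remnant.size)  # [64, 32, 16] 8
```

Note that each band can be summed with the expanded next level to recover the previous level exactly, which is the reconstruction property discussed below.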
A primary advantage of the Burt Pyramid, discussed in more detail in the aforesaid copending Carlson et al. application, is that it permits a reconstituted image synthesized from the respective analyzed outputs L0 ... L(N-1) and GN to be derived in a manner such that the introduction of noticeable artifacts due to the image processing is minimized. A disadvantage of the Burt Pyramid is that it requires an expansion and interpolation filter (in addition to a convolution filter and decimation means) per analyzer stage, and this increases both its cost and complexity.
The FSD pyramid analyzer, shown in Fig. 3a, is similar to the Burt Pyramid analyzer in several ways. First, the FSD analyzer is also comprised of a pipeline of generally similar sampled-signal translation means 300-1, 300-2 ... 300-N. Second, each of the respective stages operates at a sample rate determined by the frequency of the digital clock signals CL1, CL2 ... CLN individually applied thereto. Third, the frequency of the clock signal applied to any particular one of the stages is preferably one-half that of the clock of the immediately preceding stage.
However, the specific structural arrangement comprising each of the stages of the FSD pyramid analyzer differs somewhat from the structural arrangement comprising each stage (such as stage 300-1 of Fig. 3) of the Burt Pyramid analyzer. More specifically, each stage 300-K (where K has any value between 1 and N) of the FSD pyramid analyzer shown in Fig. 3a is comprised of convolution filter 302a, decimation means 302b, delay means 304 and subtraction means 306.
The output from convolution filter 302a (before decimation by decimation means 302b) is applied as an input to subtraction means 306. This structural configuration eliminates the need for providing an expansion and interpolation filter in each stage of an FSD pyramid analyzer. The elimination of expansion and interpolation filters significantly reduces both the cost and the amount of inherent delay of each stage of the FSD pyramid analyzer shown in Fig. 3a, compared to that of each stage of the Burt Pyramid analyzer shown in Fig. 3.
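The filter-subtract-decimate ordering can be sketched the same way as the Burt stage, showing why no expansion filter is needed: the subtraction uses the low-pass output before decimation. Kernel weights and names are illustrative assumptions:

```python
import numpy as np

KERNEL = np.array([0.05, 0.25, 0.4, 0.25, 0.05])  # assumed 5-tap low-pass kernel

def fsd_stage(g):
    low = np.convolve(g, KERNEL, mode="same")  # convolution filter (cf. 302a)
    band = g - low                             # subtraction (cf. 306), pre-decimation
    g_next = low[::2]                          # decimation (cf. 302b)
    return band, g_next

g0 = np.random.rand(64)
l0, g1 = fsd_stage(g0)
print(l0.size, g1.size)  # 64 32
```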
For the purpose of explaining the operation of the system shown in Figs. 1 and 2 (or, alternatively, Figs. 1 and 2a), reference is now made to Figs. 4 and 5.
The rectangle 400 represents the relatively large size of the two-dimensional spatial region defined by an entire image frame of pixel samples. The GN output from analyzer 202 (which is applied to computer 102 without passing through a window gate) represents this entire image-frame spatial region with the low-resolution definition provided by only P'x·P'y pixels per frame. Thus, as shown in Fig. 5, this GN signal represents a low-resolution global-view image 500 of one or more objects (such as vase 502) within the region of space then being viewed by camera 100.
For illustrative purposes, the respective values of both P'x and P'y in Figs. 4 and 5 are assumed to be 6. Thus, the entire area of the global-view low-resolution spatial image region 400 (shown in Figs. 4 and 5) is comprised of only 36 pixels.
The L'(N-1) output from window gate 204-(N-1) represents the localized spatial subregion 402. Subregion 402, which has both horizontal and vertical dimensions only one-half of those of spatial region 400, is only one-fourth the area of region 400. However, as indicated in Fig. 5, subregion 402 is also comprised of 36 pixels, thereby providing a higher-resolution intermediate view 504 of vase 502 than that provided by low-resolution global view 500.
In a similar manner, each of localized spatial subregions 404 and 406 (represented respectively by the L'(N-2) and L'(N-3) outputs from respective window gates 204-(N-2) and 204-(N-3), neither of which is shown in Fig. 2) is also comprised of 36 pixels (as shown in Fig. 5). However, the area represented by spatial subregion 404 is only one-quarter of that of spatial subregion 402 (or one-sixteenth of that of global-view spatial region 400). Therefore, the resolution of intermediate view 506 of vase 502 is higher than that of intermediate view 504 (which, in turn, is higher than that of low-resolution global view 500). Similarly, the area of spatial subregion 406 is only one-quarter of that of spatial subregion 404 (or 1/64th of that of global-view spatial region 400). Thus, view 508 of vase 502 is exhibited with the highest resolution.
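The geometric progression of subregion areas described above can be checked with trivial arithmetic (illustrative only):

```python
# Each successive subregion halves both dimensions, so its area is one quarter
# of the preceding one (1, 1/4, 1/16, 1/64 of region 400), while every view
# keeps the same P'x * P'y = 6 * 6 = 36 pixels.
areas = [1.0]
for _ in range(3):
    areas.append(areas[-1] / 4)

print(areas)   # [1.0, 0.25, 0.0625, 0.015625]
print(6 * 6)   # 36 pixels in every view
```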
For illustrative purposes, in describing the operation of the present invention, the value of N has been assumed to be only 3. Therefore, in this case, L'0 represents spatial subregion 406, with spatial subregions 404, 402 and 400 being respectively represented by L'1, L'2 (L'(N-1)) and G3 (GN). In practice, N would have a value greater than 3 (normally at least 5 or 6). Further, in practice, the respective values of P'x and P'y would be greater than 6 (such as 32 or 16, for example). In such cases, the spatial image region represented by the high-resolution, wide-field-of-view video signal from imager means 106, defined by 512 x 512 pixels (or even 1024 x 1024 pixels), is reduced by data reduction means 108 to five or six separate various-resolution views of either 16 x 16 or 32 x 32 pixels each.
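The scale of the resulting data reduction follows directly from these practical figures; the following back-of-envelope sketch (illustrative only) compares a full frame with six 32 x 32 views, i.e. five windowed bandpass bands plus the remnant:

```python
full_frame = 512 * 512          # 262,144 pixels per full-resolution frame
reduced = 6 * 32 * 32           # 6,144 pixels per frame after reduction

print(full_frame, reduced)      # 262144 6144
print(full_frame // reduced)    # 42 -- roughly a forty-fold reduction
```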
As indicated in Fig. 2, computer 102 provides an individual window center control signal to each of window gates 204-0 ... 204-(N-1). This permits each of the respective spatial subregions (e.g. spatial subregions 402, 404 and 406) to be moved in accordance with command signals from computer 102. For instance, as schematically indicated by the arrow in Fig. 4, each of the respective spatial subregions 402, 404 and 406 can be independently and selectively moved from its preceding location (shown in phantom lines) within spatial region 400 to its current location (shown in solid lines) within region 400.
In this manner, any part of the global view of the spatial region 400 can be shown in any one of the various higher resolutions.
In the case of a surveillance camera, computer 102 can first analyze the low resolution global view spatial region to determine whether or not it appears that any object of interest (such as a moving object, an object having a particular shape, etc.) is present in any subregion of this global view. If so, the computer can then examine this subregion at higher and higher resolutions for the purpose of confirming whether or not an object of interest is actually present. A somewhat similar examination by computer 102 would be useful in a robotic system to provide "eye-hand" coordination.
The important advantage of the present invention is that the amount of data that must be handled by the computer is greatly reduced, without, at the same time, reducing either the resolution or field of view capability of the imager means.
In the case of Fig. 2a, computer 102 provides a switch control for selector switch 208 that permits computer 102 to selectively examine at any one time only the data stored in any one of memories 206-0 ... 206-N. This further reduces the amount of data that must be handled by computer 102.
It is plain that it may be desirable in certain cases to pass even the last band of the group of bands through a movable window which forwards even fewer than said second given number of pixels to the output of image data reduction means 108. Further, in the case in which imager means 106 is provided with moving means 110, it is possible to maintain the respective windows in predetermined fixed spatial relationships relative to one another and move imager means 106 under the control of the computer to bring the object of interest into the highest-resolution window. It also may be desirable to substitute a selector switch for selector switch 208 that is capable of selecting any two or more outputs of memories 206-0 to 206-N at the same time and then simultaneously displaying the selected memory outputs on display 112. Such structural modifications are contemplated by the present invention.

Claims (17)

1. An image data reduction method for use with an input video signal representing a relatively high-spatial-resolution, wide-field-of-view image that is comprised of a first given number of pixels, said method comprising the steps of: analyzing the spatial frequency spectrum of the image represented by said input video signal to derive a plural number of separate output video signals that together represent an ordinally-arranged group of contiguous subspectra bands of said image spatial frequency spectrum, wherein: (a) the first band of said group exhibits said relatively high spatial resolution and is comprised of said first given number of pixels, and (b) each other band of said group exhibits a lower spatial resolution and a smaller number of pixels than its immediately preceding band of said group, whereby said last band of said group is comprised of a second given number of pixels which is the lowest number of pixels in any of said bands of said group; and reducing the size of the field of view represented by at least one band of said group by passing a spatially-localized subset of pixels of that one band through a spatial window, said subset being comprised of no greater number of pixels than said second given number.
2. The method defined in claim 1, wherein the step of reducing the size of the field of view comprises reducing the size of the field of view represented by at least one band of said group other than said last band.
3. The method defined in claim 1, wherein said step of reducing the size of the field of view comprises: reducing the size of the field of view of each of the individual bands of said group excluding the last band of said group by passing a spatially-localized subset of pixels of each of said individual bands through its own separate spatial window, each of said subsets being comprised of no greater number of pixels than said second given number.
4. The method defined in claim 3, further including the step of: selectively moving the spatial position of each of said windows within the wide field of view defined by the pixels of said last band.
5. The method defined in claim 4, wherein the step of selectively moving comprises: selectively moving said spatial position of each of said windows independently of one another.
6. The method defined in claim 3, 4 or 5, wherein each of said subsets is composed of said second given number of pixels.
7. An electronic camera for deriving a data-reduced video output, said camera comprising: a high-resolution, wide-field-of-view imager means for deriving in real time a first video signal representing all the pixels of each of successive image frames of the spatial region being viewed by said imager means; means for analyzing the spatial frequency spectrum of each image frame represented by said first video signal to derive a plural number of separate second video signals that together represent an ordinally-arranged group of contiguous subspectra bands of that spatial frequency spectrum, wherein: (a) the first band of said group exhibits a relatively high spatial resolution and is comprised of a first given number of pixels, and (b) each other band of said group exhibits a lower spatial resolution and a smaller number of pixels than its immediately preceding band of said group, whereby the last band of said group is comprised of a second given number of pixels which is the lowest number of pixels in any of said bands of said group; and means for reducing the size of the field of view represented by at least one band of said group by passing a spatially-localized subset of pixels of that one band through a spatial window, said subset being comprised of no greater number of pixels than said second given number, to thereby derive said video output from said camera.
8. The camera defined in claim 7, wherein said means for reducing the size of the field of view comprises means for reducing the size of the field of view represented by at least one band of said group other than said last band.
9. The camera defined in claim 7, wherein said means for reducing the size of the field of view comprises: window means for reducing the size of the field of view of each of the individual bands of said group excluding the last band of said group by passing a spatially-localized subset of pixels of each of said individual bands through its own separate spatially movable window, each of said subsets being comprised of no greater number of pixels than said second given number.
10. The camera defined in claim 9, wherein said camera is adapted to be used with an image-processing computer responsive to said video output from said camera for deriving camera-control signals; and said window means includes means responsive to at least one of said camera-control signals applied thereto for selectively moving the spatial position of said movable windows within the wide field of view defined by the pixels of said last band.
11. The camera defined in claim 10, wherein: said camera-control signals include a separate control signal corresponding to the movable window for each one of said individual bands; and said window means includes means responsive to each of said separate control signals for selectively moving said spatial position of each of said movable windows independently of one another.
12. The camera defined in claim 9, 10 or 11, wherein each of said subsets is composed of said second given number of pixels.
13. The camera defined in claim 12, wherein said means for reducing the size of said field of view further includes: memory means for storing both said second given number of pixels forming the subset from each of said movable windows and said second given number of pixels from said last band of said group; and selector switch means responsive to a camera-control signal applied thereto for selectively forwarding the stored second given number of pixels corresponding to solely one of the subsets or said last band of said group as said video output from said camera.
14. The camera defined in any one of claims 7 to 13, wherein said analyzing means is a Burt-Pyramid analyzer.
15. The camera defined in any one of claims 7 to 13, wherein said analyzing means is a filter-subtract-decimate (FSD) analyzer.
16. The camera defined in any one of claims 7 to 15, including an analog-to-digital converter for representing the level of each pixel of said first video signal as a multibit number.
17. A camera substantially as hereinbefore described with reference to: Fig. 1 optionally as modified by Fig. 2 or by Figs. 2 and 2a, Fig. 2 being optionally as modified by Fig. 3 or 3a.
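By way of illustration only, the analyze-then-window method of claims 1 to 6 can be sketched in a few lines of Python. This is a simplified model, not the patented apparatus: a 2x2 box average stands in for the Gaussian-like low-pass filter of a Burt-Pyramid or filter-subtract-decimate (FSD) analyzer (claims 14 and 15), pixel replication stands in for the interpolating expand filter, and all function names are illustrative.

```python
def reduce2(img):
    """Low-pass filter and decimate by 2 (a crude 2x2 box average
    standing in for a pyramid analyzer's Gaussian-like kernel)."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def expand2(img):
    """Upsample by 2 using pixel replication (nearest-neighbour), a
    stand-in for the interpolating expand filter of a real pyramid."""
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def fsd_analyze(img, levels):
    """Split an image into an ordinally-arranged group of subspectra
    bands: each detail band holds what one filter/decimate step removed
    (filter-subtract-decimate); the final, smallest entry is the
    low-pass remnant."""
    bands = []
    g = img
    for _ in range(levels):
        low = reduce2(g)
        up = expand2(low)
        band = [[g[y][x] - up[y][x] for x in range(len(g[0]))]
                for y in range(len(g))]
        bands.append(band)   # same pixel count as g, detail content only
        g = low
    bands.append(g)          # last band: fewest pixels
    return bands

def window(band, x0, y0, size):
    """Reduce the field of view of a band: pass only a spatially
    localized size-by-size subset of its pixels."""
    return [row[x0:x0 + size] for row in band[y0:y0 + size]]
```

With a 16x16 input and two levels this yields a 16x16 and an 8x8 detail band plus a 4x4 remnant; windowing each detail band down to 4x4 then forwards no more pixels per band than the last band contains, which is the data reduction the claims describe.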
GB8604864A 1985-02-06 1986-02-27 Image data reduction technique Expired - Lifetime GB2187356B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US69878785A 1985-02-06 1985-02-06

Publications (3)

Publication Number Publication Date
GB8604864D0 GB8604864D0 (en) 1986-04-03
GB2187356A true GB2187356A (en) 1987-09-03
GB2187356B GB2187356B (en) 1990-02-14

Family

ID=24806660

Family Applications (1)

Application Number Title Priority Date Filing Date
GB8604864A Expired - Lifetime GB2187356B (en) 1985-02-06 1986-02-27 Image data reduction technique

Country Status (3)

Country Link
JP (1) JPS61184074A (en)
DE (1) DE3603552A1 (en)
GB (1) GB2187356B (en)


Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4754487A (en) * 1986-05-27 1988-06-28 Image Recall Systems, Inc. Picture storage and retrieval system for various limited storage mediums
WO1989000741A1 (en) * 1987-07-22 1989-01-26 Etc. Identification information storage and retrieval
DE4042511C2 (en) * 1989-10-20 2001-01-25 Hitachi Ltd Video camera monitoring system for operating states - has monitored object taking input with selective control related to state, or importance
US5095365A (en) * 1989-10-20 1992-03-10 Hitachi, Ltd. System for monitoring operating state of devices according to their degree of importance
GB9013914D0 (en) * 1990-06-22 1990-08-15 Tantara Tek Ltd Visual display
SE502975C2 (en) * 1995-01-10 1996-03-04 Foersvarets Forskningsanstalt Ways to reduce computer computations when generating virtual images
DE10210327B4 (en) 2002-03-08 2012-07-05 Arnold & Richter Cine Technik Gmbh & Co. Betriebs Kg Digital motion picture camera
DE10218313B4 (en) 2002-04-24 2018-02-15 Arnold & Richter Cine Technik Gmbh & Co. Betriebs Kg Digital motion picture camera
JP2008223257A (en) * 2007-03-09 2008-09-25 Nippon Filing Kenzai Kk Hinge

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2179222A (en) * 1985-07-25 1987-02-25 Rca Corp Image-data reduction technique

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS56166680A (en) * 1980-05-27 1981-12-21 Toray Ind Inc Monitoring system for body
PT78772B (en) * 1983-06-27 1986-06-05 Rca Corp Real-time hierarchal pyramid signal processing apparatus


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2179222B (en) * 1985-07-25 1990-02-21 Rca Corp Image-data reduction technique
NL1000200C2 (en) * 1995-04-21 1996-10-22 Optische Ind Oede Oude Delftoe Method for reducing image information and an apparatus for carrying out the method.
WO1996036012A1 (en) * 1995-04-21 1996-11-14 B.V. Optische Industrie 'de Oude Delft' Image reduction method and device
WO2001003070A1 (en) * 1999-07-01 2001-01-11 Koninklijke Philips Electronics N.V. Hierarchical foveation and foveated coding of images based on wavelets

Also Published As

Publication number Publication date
DE3603552C2 (en) 1987-09-17
GB2187356B (en) 1990-02-14
JPH0358234B2 (en) 1991-09-04
DE3603552A1 (en) 1986-08-07
GB8604864D0 (en) 1986-04-03
JPS61184074A (en) 1986-08-16

Similar Documents

Publication Publication Date Title
US4692806A (en) Image-data reduction technique
US5200818A (en) Video imaging system with interactive windowing capability
JP3328934B2 (en) Method and apparatus for fusing images
US7551203B2 (en) Picture inputting apparatus using high-resolution image pickup device to acquire low-resolution whole pictures and high-resolution partial pictures
KR0151410B1 (en) Motion vector detecting method of image signal
US6295381B1 (en) Coding apparatus and decoding apparatus of image data and corresponding shape data at different resolutions
US20020054211A1 (en) Surveillance video camera enhancement system
EP0677958A2 (en) Motion adaptive scan conversion using directional edge interpolation
GB2187356A (en) Image-data reduction technique
US5680476A (en) Method of classifying signals, especially image signals
JP3035920B2 (en) Moving object extraction device and moving object extraction method
JP3781203B2 (en) Image signal interpolation apparatus and image signal interpolation method
EP0264966B1 (en) Interpolator for television special effects system
US20070140529A1 (en) Method and device for calculating motion vector between two images and program of calculating motion vector between two images
US20040227829A1 (en) Method and apparatus for a simultaneous multiple field of view imager using digital sub-sampling and sub-window selection
AU746276B2 (en) Signal conversion apparatus and method
US20030117525A1 (en) Method for outputting video images in video monitoring system
JP3849817B2 (en) Image processing apparatus and image processing method
JPH04213973A (en) Image shake corrector
EP0673575B1 (en) Higher definition video signals from lower definition sources
EP0516778B1 (en) Apparatus and method for suppressing short-time noise pulses in video signals
JP3494815B2 (en) Video imaging device
KR900002778B1 (en) Adapting scanning converting device
Zeevi et al. Foveating vision systems architecture: Image acquisition and display
JPS5839180A (en) Correlation tracking device

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)
PE20 Patent expired after termination of 20 years

Effective date: 20060226