GB2282727A - Virtual image sensor - Google Patents

Virtual image sensor

Info

Publication number
GB2282727A
GB2282727A (application GB9323780A)
Authority
GB
United Kingdom
Prior art keywords
image sensor
virtual image
array
view
field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9323780A
Other versions
GB9323780D0 (en)
Inventor
Roger Colston Downs
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB939317600A external-priority patent/GB9317600D0/en
Priority claimed from GB939317573A external-priority patent/GB9317573D0/en
Application filed by Individual filed Critical Individual
Priority to GB9323780A priority Critical patent/GB2282727A/en
Publication of GB9323780D0 publication Critical patent/GB9323780D0/en
Priority to US08/601,048 priority patent/US6233361B1/en
Priority to GB9725082A priority patent/GB2319688B/en
Priority to GB9601754A priority patent/GB2295741B/en
Priority to GB9807454A priority patent/GB2320392B/en
Priority to PCT/GB1994/001845 priority patent/WO1995006283A1/en
Priority to AU74654/94A priority patent/AU7465494A/en
Publication of GB2282727A publication Critical patent/GB2282727A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/142Edging; Contouring
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Circuits (AREA)

Abstract

A virtual image sensor is generated in real time by combining subsets of parts of line scans of luminance signals extracted from one or more similar, suitably synchronised image sensors (figure 4) which are logically and physically organised as an array and collectively regard a continuous scenario comprising the separate and adjoining fields of view of each of the image sensors in the array (figure 1). The field of view 21 of a virtual image sensor is equivalent to that of any image sensor in such an array and is capable of fast and accurate electronic slaving across the field of regard comprising the combined fields of view 5, 9, 17, 20 of the image sensors in the array. Separate and simultaneous virtual image sensor images may be generated from such an array of image sensors, offering instantaneous images not possible with a single movable image sensor or with an array of discrete image sensors not comprising a virtual image sensor.

Description

VIRTUAL IMAGE SENSOR

This invention relates to a virtual image sensor. Image sensors (CCD or equivalent) are limited, for a particular magnification, to a specific field of view. For image sensors which are capable of mechanical slaving in azimuth or elevation, the envelope of the image sensor's field of view is referred to as the image sensor's field of regard.
A virtual image sensor extracts subsets of luminance signals from an array of appropriately positioned, orientated and synchronised image sensors. By combining these luminance subsets with appropriate frame and line sync information, the composite video signal so formed allows the real time generation of an image from components of the images afforded by the array of image sensors, whose adjoining fields of view support the virtual sensor's field of regard, equivalent to their combined fields of view; the field of view of the virtual image sensor is equivalent to the field of view of one of the image sensors comprising the array.
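Conceptually, this combination amounts to sliding a sensor-sized window over the mosaic of the array's images. The following Python sketch is an illustration only: the patent performs the operation in real time on analogue composite video, not on digital frame buffers, and all names and sizes here are my own assumptions.

```python
# Geometry of a virtual image sensor over a 2x2 array: a minimal sketch,
# assuming digitised luminance frames rather than the patent's analogue path.
import numpy as np

def virtual_frame(frames, dx, dy):
    """frames: 2x2 nested list of equally sized HxW luminance arrays whose
    fields of view tile a continuous scene; (dx, dy): pixel offset of the
    virtual field of view into the combined field of regard."""
    (tl, tr), (bl, br) = frames
    h, w = tl.shape
    mosaic = np.block([[tl, tr], [bl, br]])   # the 2H x 2W field of regard
    return mosaic[dy:dy + h, dx:dx + w]       # one sensor-sized window of it

rng = np.random.default_rng(0)
frames = [[rng.integers(0, 256, (288, 384), dtype=np.uint8) for _ in range(2)]
          for _ in range(2)]
view = virtual_frame(frames, dx=192, dy=144)  # boresight at the array centre
print(view.shape)  # (288, 384): the same field of view as any one sensor
```

Slaving the boresight is then just a change of (dx, dy) from frame to frame, which is why the electronic repositioning described below can outpace any mechanical mount.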
Some applications exist where the appropriate positioning of static image sensors in such an array, covering for example 360 degrees, allows simultaneous multiple fields of view not achievable from a single movable image sensor, nor from an array of static image sensors not comprising such a virtual image sensor. Further, the electronic positioning of the virtual image sensor's field of view within its field of regard can be made faster and more accurate than is possible with a mechanical system.
According to the present invention there is provided a virtual image sensor comprising a number of similar image sensors organised in an array, where the logical and physical position and orientation of each such image sensor in the array is such that their individual fields of view may be considered collectively to cover a continuous scenario comprising the individual images from each image sensor, and where it is possible to generate in real time a virtual image sensor image from components of images from one or more adjoining image sensors in the array, such that the field of view of the virtual image sensor is equivalent to the field of view of any image sensor in the array and the field of regard of the virtual image sensor comprises the individual fields of view of the image sensors in the array.
A specific embodiment of the invention will now be described by way of example with reference to the accompanying drawings, in which:

Figure 1 shows an array of four image sensors whose individual fields of view are aligned to cover a continuous scenario comprising the separate images of each image sensor;

Figure 2 shows a representation of the time distribution of information within a frame of composite video from an image sensor;

Figure 3 shows a representation of the necessary time distribution of information within two frames of composite video from two horizontally aligned and synchronised image sensors capable of supporting virtual image sensor subsets in azimuth;

Figure 4 shows a representation of the necessary time distribution of information within four frames of composite video from four aligned and synchronised image sensors capable of supporting virtual image sensor subsets in azimuth and elevation;

Figure 5 shows a system block diagram identifying important functional areas, and the important signals between them, capable of supporting a virtual image sensor;

Figure 6 shows important signal waveforms used in the system.
This example describes a virtual image sensor or sensors comprising a number of image sensors of equal magnification organised in an array, where the logical and physical position and orientation of each such image sensor in the array is such that the individual fields of view of the image sensors may be considered collectively to cover a continuous scenario comprising the individual images from each image sensor. It is possible, by suitably controlling the frame and line synchronisation signal generation of each such image sensor, that luminance information continuous in time may be considered to exist between corresponding line scans at adjacent image sensor image boundaries, such that one or more subsets of luminance information may be taken from one or more of the adjacent image sensors' composite video signals in real time and combined with appropriate synthetic frame and line sync information so as to generate the composite video output of a virtual image sensor or sensors. The field of view of such a virtual image sensor is equal to the field of view of any image sensor in the array.

It is possible under manual or automatic control, by modifying in each case both the subsets of luminance signals extracted from individual image sensors in the array and the generation of the associated synthetic frame and line sync information, to accurately slave each virtual image sensor's field of view to particular or different points in, or to scan each virtual image sensor's field of view across, a field of regard comprising the individual fields of view of the image sensors in the array.

With reference to the drawings, and here with particular reference to figure 1, an image sensor's field of view may be considered to be of pyramid shape with rectangular cross-section extending from the image sensor, such that in this example four similar image sensors 1, 2, 3, 4 are organised as an array and so aligned as to collectively view a continuous scenario comprising the component images of each of the image sensors.

With reference to figure 2, the time distribution of information contained in the composite video signal of an image sensor is such that display-visible luminance is contained in the central rectangle 5, and sync, porch and non-visible luminance information is contained in the border area 6. Within a frame of data (waveform 31) time begins (Frame begin time, waveform 33) at the extreme top left corner 7 and increases left to right in successive horizontal lines from top to bottom. A particular time slice XX within a frame of data is shown 8.
For a UK system the time from the extreme top left 7 to the extreme bottom right is 20 ms and the duration of an individual line scan is 0.064 ms.

With reference to figure 3, a representation is shown of the necessary time distribution of information between two horizontally aligned image sensors capable of supporting a virtual image sensor whose frames of data would comprise luminance subsets in azimuth 16; this necessitates a relative time offset of the Frame begin time 11 of the right-hand image sensor of the pair from the Frame begin time 7 of the left-hand image sensor of the pair. It can be seen that the luminance regions 5 and 9 of the two image sensors may be considered line-wise to be continuous in time, and a time slice at YY would comprise the line scans 12 and 13 of the left-hand and right-hand image sensors respectively.

Similarly, for the four image sensors 1, 2, 3, 4 positioned and aligned according to figure 1, the necessary time distribution of information (with reference to figure 4) to support virtual image sensor frames of data comprising subsets of luminance information in both azimuth and elevation 23 requires that the relative Frame begin times for the image sensors 1, 2, 3, 4 are given by the Frame begin times 7, 11, 19 and 22 respectively. In this particular example the Frame begin time 11 of image sensor 2 is offset from the Frame begin time 7 of image sensor 1 by the same amount as the Frame begin time 22 of image sensor 4 is offset from the Frame begin time 19 of image sensor 3; further, the offset between Frame begin times 7 and 19 for image sensors 1 and 3 respectively is the same as the offset between Frame begin times 11 and 22 for image sensors 2 and 4 respectively.
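The timing rule implied by figures 3 and 4 can be written down directly. In the sketch below, the right-hand neighbour's frame begins later by the visible-line duration and the lower row later by the visible-field duration; the notation is mine, and the durations are assumptions chosen to match the worked example given later in the text.

```python
# Relative Frame begin times for a 2x2 array arranged [[1, 2], [3, 4]]:
# a sketch assuming visible-line and visible-field durations consistent
# with the UK raster (20 ms frame, 0.064 ms line).
def frame_begin_offsets(vis_line_ms, vis_field_ms):
    return {1: 0.0,                          # reference sensor
            2: vis_line_ms,                  # azimuth neighbour of 1
            3: vis_field_ms,                 # elevation neighbour of 1
            4: vis_field_ms + vis_line_ms}   # diagonal neighbour of 1

print(frame_begin_offsets(0.05, 18.0))
# {1: 0.0, 2: 0.05, 3: 18.0, 4: 18.05} -- the offsets used in the example
```

Note that this automatically satisfies the equalities stated for figure 4: the 1-to-2 offset equals the 3-to-4 offset, and the 1-to-3 offset equals the 2-to-4 offset.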
With reference to the system block diagram, figure 5, the four image sensors 1, 2, 3, 4 are normal CCD image sensors except that their system clock crystals have been removed and their clock input pins wired to accept an external clock drive CLK1, CLK2, CLK3, CLK4 respectively. The composite video signals CV1, CV2, CV3, CV4 from the image sensors are fed to the Frame Line Mask FLM 24 circuitry and the Combined Video CV 27 circuitry.

The purpose of the Frame line mask FLM 24 circuitry is to extract from each image sensor's composite video signal the frame and line sync information L1, L2, L3, L4 (waveform 34) and pass these signals to the Fine Line Position FLP 26 circuitry. Further, FLM 24 uses these stripped sync signals to generate a further signal per image sensor, the Frame begin signals F1, F2, F3 and F4 (waveform 33), which are passed to the Sensor Clock Synchronisation SCS 25 circuitry.
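The sync-stripping job of FLM 24 can be caricatured digitally. This toy sketch is not the patent's implementation (the real circuitry is analogue); the threshold, run length and function names are my assumptions.

```python
# Toy sync separator: composite video sync tips sit below black level, so
# everything under a threshold is sync, and a run much longer than a line
# sync pulse marks the frame sync period, from which a Frame begin pulse
# F(n) can be raised.
def strip_sync(samples, threshold=0.1):
    """True wherever the (normalised) composite sample is in the sync region."""
    return [s < threshold for s in samples]

def frame_begin_index(sync, min_run=100):
    """Index where a sync run first reaches min_run samples, i.e. the start
    of the frame sync period; None if no frame sync is present."""
    run = 0
    for i, level in enumerate(sync):
        run = run + 1 if level else 0
        if run == min_run:
            return i - min_run + 1
    return None
```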
The Sensor clock synchronisation SCS 25 circuitry controls the clock generation for each image sensor in such a way as to ensure the necessary relative timing of the frame and line sync generation of each image sensor. In this example this is achieved by stopping individual image sensor clocks to bring the synchronisation of all the image sensors to meet a particular criterion. Consider that image sensor 1's synchronisation is being used as a reference and that the relative synchronisation of image sensor 2 is required, by way of example, to be 0.05 ms later, while image sensor 3 in respect of image sensor 1 is required to be 18 ms later and image sensor 4 with respect to image sensor 1 is required to be 18.05 ms later.
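The criterion can be made concrete with a little arithmetic and a behavioural sketch. This is my simplification of the SCS 25 mechanism, with an assumed clock granularity: each sensor's Frame begin is delayed by the 20 ms complement of its required offset, each clock is gated off once its delayed pulse fires, and all clocks restart together when the last one fires.

```python
FRAME_MS = 20.0                                   # UK frame period
offsets = {1: 0.0, 2: 0.05, 3: 18.0, 4: 18.05}    # required lag vs sensor 1

# Delay each Frame begin by the complement of its offset; when the array is
# correctly synchronised, all delayed pulses coincide (and FLK can be set).
delays_ms = {n: (FRAME_MS - off) % FRAME_MS for n, off in offsets.items()}
print(delays_ms)                                  # {1: 0.0, 2: 19.95, 3: 2.0, 4: 1.95}

def coarse_lock(phases, delay_ticks, frame_ticks):
    """Gate each sensor's clock off at its delayed-Frame-begin tick and hold
    it until every sensor has fired, then release all clocks together."""
    fired = [False] * len(phases)
    while not all(fired):
        for i in range(len(phases)):
            if fired[i]:
                continue                          # clock stopped: phase held
            phases[i] = (phases[i] + 1) % frame_ticks
            if phases[i] == delay_ticks[i]:
                fired[i] = True                   # delayed Frame begin fired
    return phases                                 # clocks restart together

TICK_MS = 0.01                                    # assumed clock granularity
locked = coarse_lock([3, 777, 42, 1234],
                     [round(d / TICK_MS) for d in delays_ms.values()], 2000)
print(locked)                                     # each phase at its delay tick
```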
If the Frame begin signal F2 for image sensor 2 is delayed by 19.95 ms, that for image sensor 3, F3, is delayed by 2 ms, and similarly that for image sensor 4, F4, is delayed by 1.95 ms, then if the image sensors meet the required synchronisation the F1 and delayed Frame begin signals F2, F3 and F4 will occur simultaneously and the signal Frame LocK FLK will be set true and remain set for a number of frame periods. Under these conditions each image sensor will be driven continuously by a common system clock. If however image sensor 1 generates its Frame begin pulse F1 before the other image sensors' delayed Frame begin pulses F(n), its clock drive is stopped; similarly the next image sensor to generate a delayed Frame begin signal F(n) has its clock stopped until all image sensors have generated their respective delayed Frame begin signals. The instant the last delayed Frame begin pulse arrives, all image sensor clock drives are restarted. Synchronisation is essentially instantaneous, and once synchronised FLK is used to maintain a continuous clock drive to the reference image sensor; all other image sensor clock drives may however be operated on, while FLK is set, by the Fine line position FLP 26 circuitry.

The Fine line position FLP 26 circuitry operates on a similar principle of image sensor drive clock control, using the frame and line sync information L1, L2, L3 and L4 (waveform 34) rather than the Frame begin signals, and exercises greater sensitivity, such that only single clock periods per line scan period are used to adjust the synchronisation of a particular image sensor in maintaining an accurate relative synchronisation between all image sensors in the system. The feedback loop from the Fine line position FLP 26 circuitry to the Sensor clock synchronisation SCS 25 circuitry to achieve this functionality is through the Line LocK signals LLK2, LLK3, LLK4, where these signals are used in the presence of FLK to gate the clock drives CLK2, CLK3 and CLK4 to image sensors 2, 3 and 4 respectively. When the Frame lock FLK signal expires, the Fine line position FLP 26 functionality is inhibited and the image sensors' relative frame synchronisation is checked within a frame period, 20 ms (UK), to ensure a continuous relative synchronisation between the image sensors, or to restore it, and in either case FLK is set again, whereby fine adjustment of the image sensor synchronisation by FLP 26 can occur.

With the image sensors now synchronised it is possible to extract one or more subsets of luminance signals from the individual image sensors' composite video signals such that they represent the luminance signals of one or more virtual image sensors, each having a field of view equivalent to that of any component image sensor in the array. The functionality capable of supporting one such virtual image sensor is, in this example, contained within the Combined video CV 27 circuitry, where the one or more subsets of luminance signal LS1, LS2, LS3, LS4 are taken from the image sensors 1, 2, 3 and 4 forming the array and combined with an appropriate synthetically generated frame and line sync signal SS, allowing the resulting composite video signal CV5 to be displayed on the Display D 30.
In this particular example the subsets of luminance signal LS1, LS2, LS3, LS4 are achieved by the generation of suitable masks to control analogue switching of the image sensor composite video signals CV1, CV2, CV3 and CV4. The X and Y outputs of a Joystick 29, or equivalently synthesised demands and control XYC from an external processing system, control the magnitude of the time constants associated with the generation of the vertical and horizontal luminance masks, thereby effectively controlling the positioning of the virtual image sensor's field of view boresight over its field of regard. It is worth noting that a rate demand would generally be more appropriate for slaving a virtual image sensor across a wide field of regard; however, in this simple example positional demands are used.

The rising edge of the Frame begin signal F1 is used to trigger a monostable pulse Q1, with a maximum duration of one frame period and an instantaneous length controlled by the Joystick 29 potentiometer Y demand, representing vertical mask information. The trailing edge of Q1 drives a further monostable, the duration of whose output Q4 corresponds with the frame syncing period. The falling edge of the stripped sync information L1 (waveform 34) from the reference image sensor 1 triggers a pulse Q2 whose maximum duration is that of a line scan period; this output, controlled by the Joystick 29 X demand, represents horizontal mask information.
The Q2 trailing edge triggers a further monostable whose output Q3 and its complement are used to generate elements of shifted sync information. Because these sync elements are generated in relation to the reference image sensor's frame and line sync signal L1, the elevation control does not influence the azimuth positioning of the virtual image sensor's field of view boresight. The output Q3 is gated with Q4 to produce shifted line sync information, whilst its complement is gated with Q4 to give shifted frame sync information, the two signals being combined to form a synthetic sync signal SS which is passed to the Video Mixer VM 28.

Subsets of luminance information are extracted from one or more of the image sensors' composite video signals using analogue gates whose switching is controlled by the vertical Q1 and horizontal Q2 mask signals, each image sensor being gated by the appropriate combination of Q1, Q2 and their complements. The four luminance subset outputs LS1, LS2, LS3, LS4 from image sensors 1, 2, 3 and 4 respectively, gated by the analogue gates, are passed to a luminance level balancing network in the Video mixer VM 28 before being combined with the synthetic frame and line sync information SS, the resulting composite video CV5 being passed to the display D 30 or used as a front-end image sensor input to a further processing system, for example an image pattern thread processor.
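The per-sensor gating terms in the source text are garbled (the complement bars appear to have been lost in reproduction), so the quadrant decode below is an assumed reconstruction, shown digitally rather than with analogue gates: Q1 selects the lower part of the virtual frame once the vertical mask elapses, Q2 the right-hand part of each line once the horizontal mask elapses, and the pair, with complements, selects one of the four composite video signals.

```python
# Assumed reconstruction of the analogue gating: which sensor's luminance
# feeds the virtual image at a given point in the synthetic scan, given the
# vertical mask Q1 and horizontal mask Q2 (and implicitly their complements).
def gated_sensor(q1, q2):
    """q1: vertical mask elapsed (lower part of the virtual frame);
    q2: horizontal mask elapsed (right-hand part of the current line)."""
    return {(False, False): 1,   # ~Q1 & ~Q2: top-left sensor
            (False, True):  2,   # ~Q1 &  Q2: top-right sensor
            (True,  False): 3,   #  Q1 & ~Q2: bottom-left sensor
            (True,  True):  4}[(q1, q2)]

# One synthetic line early in the frame (Q1 false), with the boresight
# halfway across sensors 1 and 2:
line = [gated_sensor(False, col >= 192) for col in range(384)]
print(line[0], line[-1])   # 1 2
```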
The essential functionality contained in the Combined video CV 27, Joystick 29 or equivalent, and Video mixer VM 28 circuitry can be replicated to support any number of additional autonomous virtual image sensors.

Claims (10)

1. A virtual image sensor comprising a number of similar image sensors organised in an array, where the logical and physical position and orientation of each such image sensor in the array is such that their individual fields of view may be considered collectively to cover a continuous scenario comprising the individual images from each image sensor, and where it is possible to generate in real time a virtual image sensor image from components of images from one or more adjoining image sensors in the array such that the field of view of the virtual image sensor is equivalent to the field of view of any image sensor in the array and where the field of regard of the virtual image sensor comprises the individual fields of view of the image sensors in the array.
2. A virtual image sensor as claimed in Claim 1 wherein relative synchronisation means is provided for establishing and maintaining the relative frame and line sync separation between image sensors in the array necessary to allow real time extraction of image sensor luminance subsets in support of a virtual image sensor.

3. A virtual image sensor as claimed in Claim 1 or Claim 2 wherein control means are provided to define the virtual image sensor's boresight position relative to the field of regard.
4. A virtual image sensor as claimed in Claim 1 or Claim 2 or Claim 3 wherein logical means are provided to identify the necessary luminance signal subsets and frame and line sync characteristics for the current virtual image sensor's field of view.
5. A virtual image sensor as claimed in Claim 1 or Claim 2 or Claim 3 or Claim 4 wherein composite video synthesis means are provided for combining subsets of luminance signals with an appropriate synthetic frame and line sync signal to generate the composite video signal of a virtual image sensor.

6. A virtual image sensor as claimed in Claim 5 wherein functional replication means are provided for a given array of image sensors to support the generation of multiple, simultaneous and different virtual image sensor composite video signals.
7. A virtual image sensor as claimed in Claim 5 or Claim 6 wherein manual, automatic or external processor definition means are provided to electronically slave one or more virtual image sensors' field or fields of view independently and simultaneously across a field of regard comprising the fields of view of the image sensors comprising the array.
8. A virtual image sensor as claimed in Claim 5 or Claim 6 or Claim 7 wherein electronic means are provided for the rapid and accurate positioning of the field of view boresight of the virtual image sensor within its field of regard.
9. A virtual image sensor as claimed in any preceding claim wherein communication means are provided to allow information in the virtual image sensor's field of view to be communicated to another processing system.
10. A virtual image sensor substantially as described herein with reference to figures 1-4 of the accompanying drawings.
GB9323780A 1993-08-24 1993-11-18 Virtual image sensor Withdrawn GB2282727A (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
GB9323780A GB2282727A (en) 1993-08-24 1993-11-18 Virtual image sensor
AU74654/94A AU7465494A (en) 1993-08-24 1994-08-23 Topography processor system
PCT/GB1994/001845 WO1995006283A1 (en) 1993-08-24 1994-08-23 Topography processor system
GB9807454A GB2320392B (en) 1993-08-24 1994-08-23 Topography processor system
GB9725082A GB2319688B (en) 1993-08-24 1994-08-23 Topography processor system
US08/601,048 US6233361B1 (en) 1993-08-24 1994-08-23 Topography processor system
GB9601754A GB2295741B (en) 1993-08-24 1994-08-23 Topography processor system

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GB939317573A GB9317573D0 (en) 1993-08-24 1993-08-24 Virtual image sensor
GB939317600A GB9317600D0 (en) 1993-08-24 1993-08-24 Image pattern thread processor
GB9323780A GB2282727A (en) 1993-08-24 1993-11-18 Virtual image sensor

Publications (2)

Publication Number Publication Date
GB9323780D0 GB9323780D0 (en) 1994-01-05
GB2282727A true GB2282727A (en) 1995-04-12

Family

ID=27266823

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9323780A Withdrawn GB2282727A (en) 1993-08-24 1993-11-18 Virtual image sensor

Country Status (1)

Country Link
GB (1) GB2282727A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2014015A (en) * 1978-01-25 1979-08-15 Honeywell Gmbh Method and circuit arrangement for generating on a TV-monitor a partial image of an overall picture

Also Published As

Publication number Publication date
GB9323780D0 (en) 1994-01-05

Similar Documents

Publication Publication Date Title
US4103435A (en) Head trackable wide angle visual system
DE69423338T2 (en) PROCESSING AND DISPLAY DEVICE FOR TIME VARIABLE IMAGES
GB1351157A (en) Placement of image on matrix display
GB1495344A (en) Method and apparatus for combining video images with proper occlusion
CA2025436A1 (en) Method and apparatus for displaying flight-management information
EP0138535A1 (en) Visual display logic simulation system
EP0652524A4 (en) Image processing device and method.
MY115154A (en) Apparatus and method for producing picture data based on two-dimensional and three-dimensional picture data producing instructions
EP0404395A3 (en) Image processing system
CA2055702A1 (en) Display range control apparatus and external storage unit for use therewith
US3833854A (en) Digital phase shifter
DE112018005772T5 (en) DETERMINE AND PROJECT A PATH FOR HOLOGRAFIC OBJECTS AND OBJECT MOVEMENT IN COOPERATION OF MULTIPLE UNITS
EP0391513A3 (en) Controlling the combining of video signals
US4241519A (en) Flight simulator with spaced visuals
DE3035213C2 (en) Process for the acquisition and reproduction of terrain images for visual simulators
GB2282727A (en) Virtual image sensor
US3619912A (en) Visual simulation display system
DE3881355D1 (en) DISPLAY DEVICE.
JPS57208046A (en) Image display apparatus
JPS6468074A (en) Multi-screen display device for camera capable of electronic zooming
Holmes 3-D TV without glasses
Grimsdale et al. Zone management processor: a module for generating surfaces in raster-scan colour displays
GB2282726A (en) Co-operatively slavable phased virtual image sensor array
JPS6362495A (en) Stereoscopic television device
EP0404397A3 (en) Image processing system

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)