GB2283383A - Real time remote sensing topography processor - Google Patents

Real time remote sensing topography processor

Info

Publication number
GB2283383A
GB2283383A
Authority
GB
United Kingdom
Prior art keywords
vector
sets
processor
image sensor
real time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9323783A
Other versions
GB9323783D0 (en)
Inventor
Roger Colston Downs
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DOWNS ROGER C
Original Assignee
DOWNS ROGER C
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from GB939317600A external-priority patent/GB9317600D0/en
Priority claimed from GB939317573A external-priority patent/GB9317573D0/en
Priority claimed from GB939317602A external-priority patent/GB9317602D0/en
Priority claimed from GB939317601A external-priority patent/GB9317601D0/en
Priority to GB9323783A priority Critical patent/GB2283383A/en
Application filed by DOWNS ROGER C filed Critical DOWNS ROGER C
Publication of GB9323783D0 publication Critical patent/GB9323783D0/en
Priority to GB9404654A priority patent/GB2281464A/en
Priority to US08/601,048 priority patent/US6233361B1/en
Priority to AU74654/94A priority patent/AU7465494A/en
Priority to GB9807454A priority patent/GB2320392B/en
Priority to PCT/GB1994/001845 priority patent/WO1995006283A1/en
Priority to GB9601754A priority patent/GB2295741B/en
Priority to GB9725082A priority patent/GB2319688B/en
Publication of GB2283383A publication Critical patent/GB2283383A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/4802Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/497Means for monitoring or calibrating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/142Edging; Contouring

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A real time remote sensing topography processor comprises an array of three or more image sensors V1...V3 whose fields of view share a common scenario (figure 3), and a processor or processors (figure 4) capable of processing the composite video signals output from each image sensor in the array so as to identify image element information characterized by image signal discontinuities, representative of vectors from particular image sensors in the array to elements of topographical detail in the observed scenario, and correlate such information between sets of such information to identify real multiple vector intercepts comprising one vector from each image sensor whereby the relative position of such elements of associated topographical detail may be established.

Description

REAL TIME GENERIC PATTERN DERIVED TOPOGRAPHY PROCESSOR

This invention relates to a real time generic pattern derived topography processor.
Humans correlate complex multiple pattern information between patterns identified within images from two eyes to gain a perception of the spacial position of topographical detail of the real world. The processing power to automatically recognise patterns within different perspective images of a common scenario, without recourse to models of potential patterns, and to correlate information of similar and associated patterns between such images, to allow a meaningful interpretation of the topographical detail of the observed three dimensional scenario, is considerable. In particular the rate and volume of data generated by multiple CCD or equivalent image sensors makes the real time machine pattern recognition and correlation necessary to extract topographical data using von Neumann architectures and serial processing techniques very difficult.

This invention is based on the premise that, for a group of three or more image sensors whose fields of view share a common scenario, the spacial position of an element of topographical detail within this common scenario is represented by a unique multiple vector intersection comprising one vector from each image sensor. In particular, and simplifying the problem, for a group of three or more logically and physically aligned image sensors gaining separate perspective views of the same scenario, such that common scenario image detail is contrived to register in corresponding line scans of each of the image sensors in the group, sets of data elements having similar attributes (excepting their position within a particular line scan) may be identified from each image sensor's composite video signal, and such data may further be considered as representative of vectors from their host image sensor to elemental scenario detail. The association between such vectors, contained within sets of vectors from a particular sensor, and vectors contained in similar sets from the other image sensors, can then be made by considering all possible combinations of vectors between such sets of sets, including one from each image sensor, where the existence of combinations of such vectors having a common multiple and real intersection resolves the generic pattern recognition problems of vector association and the spacial positioning of the topographical detail in the observed scenario.

For a phased image sensor array comprising three image sensors operating at 5 MHz observing a common scenario as described above, consider a particular vector attribute and only one line scan of each image sensor. If for this particular attribute the sets of such vectors, one from each image sensor, comprise a fraction of all possible events of the imager's horizontal resolution, say 64 events per set, then of the 64 real elements of topographical detail observed according to this attribute, in this particular line, by each image sensor,
the number of potential real and virtual triple vector intersections would be 64 x 64 x 64. To identify the 64 real triple vector intersections, however, it is possible to find all the real and virtual vector pair intersections, one from a reference image sensor's vector set and one from each of the other image sensors' vector sets, and from such sets of sets of vector pair intersections identify by comparison those pairs of vector intersections between sets having the same solution. In this case the number of real and virtual vector pair intersections is 2 x 64 x 64. This represents an effective necessary internal solution rate of 150 MHz for real time computation. If a von Neumann machine instruction budget of 600 instructions per vector pair intersection and associated set comparison is made, this implies, for real time processing of frames of data at the frame rate, that a machine instruction execution rate of 30,000 MIPS would be necessary, an order of magnitude greater than is currently believed possible. In practice software could ignore calculations associated with virtual intersections and considerably reduce the processing load, however the numbers are still very large. Considering the processing of only one line of data between image sensors at the frame rate, then for a frame interleaving system this would require a dedicated von Neumann processor capable of 8 MIPS to resolve vectors of one vector attribute, where a system capable of resolving at least two vector attributes would probably represent a minimum requirement. This invention addresses system architectures and processes capable of supporting such real time generic pattern recognition, that is vector identification and association, whereby the spatial resolution between members of sets of sets of such vectors, fundamental to automatic topographical mapping of this kind, may be made.
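To make the triple intersection premise concrete, the following sketch (not part of the original specification) brute-forces the vector combinations for three aligned sensors in Python. The baseline positions, the agreement tolerance and the representation of a vector as an azimuth angle are illustrative assumptions; the embodiment described later performs the equivalent test by table look-up rather than by trigonometry.

```python
import itertools
import math

# Minimal sketch of the triple-intersection premise: three aligned sensors on a
# common baseline with parallel boresights; a "vector" is an azimuth angle within
# a sensor's field of view.  Positions and tolerance below are assumed values.
SENSORS_X = {"V1": 0.0, "V2": 0.5, "V3": 1.0}   # baseline positions (metres)
TOL = 0.05                                       # agreement tolerance (metres)

def ray_intersection(x1, az1, x2, az2):
    """Intersect rays leaving (x1, 0) and (x2, 0) at azimuths az1, az2
    (radians from the common boresight).  Returns (x, y) or None if parallel."""
    t1, t2 = math.tan(az1), math.tan(az2)
    if abs(t1 - t2) < 1e-12:
        return None
    y = (x2 - x1) / (t1 - t2)            # range along the boresight direction
    return (x1 + t1 * y, y)

def triple_intercepts(vec1, vec2, vec3):
    """Brute-force every vector combination, keeping only those whose V1-V2 and
    V1-V3 pair intersections agree: the real triple vector intercepts."""
    found = []
    for a1, a2, a3 in itertools.product(vec1, vec2, vec3):
        p12 = ray_intersection(SENSORS_X["V1"], a1, SENSORS_X["V2"], a2)
        p13 = ray_intersection(SENSORS_X["V1"], a1, SENSORS_X["V3"], a3)
        if p12 and p13 and p12[1] > 0 and math.dist(p12, p13) < TOL:
            found.append(p12)
    return found
```

Even this toy version shows where the combinatorial load comes from: three sets of 64 azimuths already generate 64 x 64 x 64 candidate triples, which is why the specification reduces the problem to vector pair intersections compared between sets, as described above.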
Any system capable of interpreting the above premise may be described by considering three distinct areas of functionality, which may in practice constitute sub systems.
Firstly, a sub system comprising a minimum of three image sensors organised as a phased image sensor array, in which the image sensors can be slaved to give different perspective views of a common scenario, and where the position and orientation of each image sensor is such that the boresights of their individual fields of view are parallel, common image detail is registered in corresponding line scans of each of the image sensors, that is at common angles of elevation within each image sensor's respective field of view, and the frame and line sync signals of all the image sensors' respective composite video signals have correspondence in time. Depending on the nature of the application such a system should include the possibility of a scanner allowing electronic as well as mechanical scanning of the scenario by the image sensors, for three main reasons. Firstly, to achieve angular resolutions of less than 50 micro radians and the ranging possibilities afforded by such resolution: the angular positioning of such a system does not lend itself entirely to mechanical slaving, so one aspect of electronic slaving is to allow controlled slaving to these accuracies. Secondly, the field of view at such resolution for a single sensor is small, therefore scanning allows a practical field of regard to be employed. Thirdly, the nature of this type of ranging from aligned image sensors is such that sets of vector pair intersections are parabolic and logarithmic in nature, and therefore a rotation of the field of view allows better range discrimination, particularly at extreme ranges.

Secondly, a sub system comprising an image pattern thread processor or equivalent capable, for each image sensor comprising the phased image sensor array, of simultaneously and in real time processing the composite video signal generated by each such image sensor to extract sets of vectors with specific attributes between these image sensors, and further to time log the occurrence of all such vectors partitioned by image sensor, attribute and line scan (elevation angle within the image sensor field of view), so identifying their position within the line scan (azimuth angle within the image sensor field of view). No limit is set on the number of different vector attributes to be identified, nor on the partitioning of such sets necessary to support real time computation in the processing sub system.
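As a rough illustration of this second sub system, the sketch below (an assumption-laden Python stand-in, not the patent's image pattern thread processor) detects luminance-difference excursions through an upper and a lower limit in each line of a frame and time-logs the pixel position of every event, partitioned by sensor, line scan and attribute. The threshold values and the dictionary-of-lists "stacks" are purely illustrative.

```python
# Sketch only: per line scan, detect luminance-slope excursions through assumed
# upper and lower limits (two vector attributes) and log the pixel position of
# each event partitioned by (sensor, line, attribute), i.e. the vector identity.
UPPER, LOWER = 12, -12          # assumed luminance-difference limits

def thread_process(frame, sensor_id):
    """frame: 2-D sequence of luminance samples, one row per line scan.
    Returns {(sensor_id, line, attribute): [pixel positions, ...]}, one stack
    of vector identities per partition, loosely mirroring figure 6."""
    stacks = {}
    for line_no, line in enumerate(frame):
        for x in range(1, len(line)):
            diff = line[x] - line[x - 1]          # crude luminance differential
            if diff > UPPER:
                stacks.setdefault((sensor_id, line_no, "U"), []).append(x)
            elif diff < LOWER:
                stacks.setdefault((sensor_id, line_no, "L"), []).append(x)
    return stacks
```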
Thirdly, a processing sub system is necessary, capable of calculating in real time the existence of all possible real and virtual vector intersections necessary to identify multiple common and real intercepts, including one vector from each image sensor, in resolving the vector association and spacial positioning of the scenario's topographical detail. To achieve the effective processing rates necessary to resolve all possible multiple intersections of unassociated vectors, which have been automatically selected according to a particular attribute from a number of image sensors' composite video signals in real time, and thereby resolve the association of members of such sets of sets of vectors between image sensors, requires a processor architecture supporting partitioned and parallel processing. Further, a requirement exists to automatically synthesise, again in parallel, the identities of all possible combinations of pairs of vectors between image sensors, each such pair comprising a vector taken from a set of vectors considered as from a reference image sensor and a vector taken from each of the sets of sets of vectors of similar attributes for each of the other image sensors. For the pairs of vector identities so synthesised, and in parallel, the architecture also requires an effective multiple address capability which allows the vector pair identities to synthesise the identity of the solutions to complex mathematical processes, where the finite apriori knowledge concerning the existence of possible vector intersections or other processes permits the definition of identity transforms representing the result of such processes on particular pairs of identities, that is a capability to synthesise a third identity from a particular pair of identities. A multiple address capability in the conventional sense allows information being processed to be moved from the contents of one address to the contents of another address; here the data of the input operands submitted for processing is implicit in their combined address identity, and the process transforms this identity to produce a result implicit as a third identity. The identity transforms should be capable of multiple parallel operation to process other simultaneously synthesised vector identity pairs from other sets of sets of vectors, or to address a necessary precision or aspect of a particular transform. Such transforms should also be capable of being cascaded to allow the interaction of other variables or results of previous transforms. The final identities from a single, parallel or cascaded transform process or processes form the address identity for an ordered vector pair intersection buffer into which the binary existence of a process result may be written, one such buffer being dedicated to each pair of sets of vectors. In this way simultaneous vector pair intersections can be synthesised within the effective multiple addressing time. By synchronous parallel reading of sets of ordered vector pair intersection buffers, the simultaneous event of the existence of a real vector intersection being read from each of the dedicated buffers comprising a set satisfies the multiple vector intersection premise for determining the spacial position of an element of topographical detail.
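The multiple address idea can be sketched in software as two passes: an off-line pass that bakes the intersection result for every possible pair of vector identities into a table, and a run-time pass that merely packs two identities into an address, reads the result identity, and marks a cell in an ordered vector pair intersection buffer. The sizes, the bit packing of the compound address and the caller-supplied cell_of function are assumptions of this sketch, not details taken from the specification.

```python
# Sketch of an identity transform: all geometry is pre-computed off line, so at
# run time a pair of vector identities is only an address into a table.
N_AZ = 256            # assumed azimuth resolution (vector identities 0..255)
N_CELLS = 1024        # assumed size of an ordered intercept buffer

def build_transform(sensor_a_x, sensor_b_x, cell_of):
    """Off-line synthesis: map the compound address (id_a << 8) | id_b to the
    identity of the intersection cell, or None where no real solution exists.
    cell_of is an assumed caller-supplied geometry function."""
    rom = [None] * (N_AZ * N_AZ)
    for id_a in range(N_AZ):
        for id_b in range(N_AZ):
            rom[(id_a << 8) | id_b] = cell_of(sensor_a_x, id_a, sensor_b_x, id_b)
    return rom

def process_pair(rom, ids_a, ids_b):
    """Run-time pass: synthesise every identity combination, transform it, and
    register the result in the ordered vector pair intersection buffer."""
    buffer = [0] * N_CELLS
    for id_a in ids_a:
        for id_b in ids_b:
            cell = rom[(id_a << 8) | id_b]
            if cell is not None:
                buffer[cell] = 1
    return buffer
```

At run time process_pair does no arithmetic at all; as with the ROM-based identity transform processors described later, the cost per combination is one address formation and one memory read.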
According to the current invention there is provided a real time generic pattern derived topography processor comprising three or more image sensors, CCD or equivalent, of defined separation, orientation and synchronisation, organised as a phased image sensor array where their fields of view share a common scenario, and one or more processors capable of processing the image sensors' composite video signals to identify and associate binary elements of generic patterns between all image sensors in the phased image sensor array, thereby resolving the system relative spacial position of topographical detail implicit in such correlated generic patterns.
A specific embodiment of the invention will now be described by way of example with reference to the accompanying drawings in which:

Figure 1 shows a simplified combined luminance upper and lower frequency excursion event image with, for time slice XX, phased image sensor array vector output for lower limit frequency excursion events.

Figure 2 shows a simplified combined luminance upper and lower frequency excursion event image showing, for time slice XX, phased image sensor array vector output for upper limit frequency excursion events.
Figure 3 shows the phased image sensor array comprising three virtual image sensors and the wide angle operator's image sensor.
Figure 4 shows a simplified schematic of information flow in the processing of a single line scan of vectors of one attribute between three image sensors.
Figure 5 shows a system block diagram of phased image sensor array input to the generic pattern derived topography processor.
Figure 6 shows representation of image sensor, line scan, and vector attribute partitioned stacks.
Figure 7 shows input data and write control stack address pointers for image sensor, line scan and vector attribute.

Figure 8 shows a system block diagram of the organisation for write and read of one partitioned stack.

Figure 9 shows a system block diagram of stack read control address pointers for a set of line scan stacks holding the same vector attribute.

Figure 10 shows a system block diagram of the read, read, write "RRnW" organisation for processing of one set of partitioned stacks holding the same vector attributes.

Figure 11 shows a system block diagram for the combined output from a set of two sets of line scan vector pair intercept buffers.

Figure 12 shows a system block diagram for the clear of two sets of line scan vector pair intercept buffers.

The example described here represents a real time generic pattern derived topography processor whose architecture is capable of the real time three dimensional topographical analysis of a scenario imaged by three virtual image sensors, CCD or equivalent, organised as a phased virtual image sensor array, in which the virtual image sensors' fields of view co-operatively and synchronously scan a field of regard. The composite video generated by the virtual image sensors, of real world differing perspective views of the same scenario, is fed into a composite video processor comprising image pattern thread processors capable of the real time identification and extraction of data elements with particular and different attributes, which data is also representative of discrete vectors from each virtual image sensor to elemental detail in the observed scenario. For vectors so identified, identities are assigned and these are passed to assigned input stacks of the topography processor, whose partitioned, parallel and dynamically configurable architecture, identity combination synthesisers and identity transform processors are capable of supporting the computation of the existence of common multiple vector intercepts, one from each virtual image sensor, from all combinations of vectors contained within sets of sets of such vectors having common attributes. The existence of real common multiple vector intercepts comprising one vector from each virtual image sensor resolves the association between vectors from different virtual image sensors comprising the phased virtual image sensor array, and the spacial position of topographical detail in the observed scenario.

With reference to the drawings, figure 1 is a simplified (two dimensional) picture of a real world scenario as seen by the phased image sensor array comprising three virtual image sensors V1, V2, V3.
The image represents a combined upper and lower frequency excursion event image of the scenario as seen from each virtual image sensor's perspective. The vectors indicated from each virtual image sensor position show, for the time slice XX, the triple vector intersections for vectors identified from luminance frequency excursions through a lower limit. With reference to figure 2, the same scenario is shown as in figure 1, where for the same time slice XX the triple vector intercepts are for vectors identified from luminance frequency excursions through an upper limit. With reference to figure 3, the phased virtual image sensor array is shown comprising three virtual image sensors V1, V2, V3, each comprising two image sensors 1 and 2, 3 and 4, and 5 and 6 respectively. Each pair of image sensors comprising a virtual image sensor is positioned, aligned and synchronised such that the boresights of each of their fields of regard are parallel, a common scenario is observed between the virtual image sensors, common image detail of the scenario is registered in corresponding line scans of each virtual image sensor, and time coincidence further exists between the characteristics of the frame and line sync signals of each of the virtual image sensors. A wide angle image sensor 7 with an overall view of the scenario allows the field of regard of the phased virtual image sensor array to be positioned.
Referring to figure 4, which shows a simplified system schematic of a single vector attribute, single line scan processing element comprising a number of identifiable sub systems including:-

A data input sub system 200 comprising three virtual image sensors V1, V2, and V3 organised as a co-operatively slavable phased virtual image sensor array which can electronically slave the fields of view of the virtual image sensors across a common scenario, maintaining their different perspective views. Composite video from the virtual image sensors is fed to image pattern thread processors IPT1, IPT2, IPT3 which generate sets, per virtual image sensor, of sets per line scan, of sets of vectors having similar attributes, which are stored in physically and logically partitioned high speed double buffered hardware stacks, where for a particular line scan and vector attribute one set 1A, 1B, 2A, 2B, 3A, 3B respectively is shown here.

A processing sub system 201 comprising a number of line scan processors (only one shown) organised in parallel, each performing multiple parallel hardware interstack vector identity combination synthesis in the Combination SYnthesiser CSY, and parallel multiple address identity transforms in the Identity Transform Processors 13TP and 12TP, which compute the existence of specific vector pair intersections which are stored in dedicated parallel triple buffered ordered vector pair intercept buffers E, F, G, H, I, J.
An output sub system 202 performing synchronised parallel automatic intercept comparisons between members of sets of vector pair intersections held in the ordered intercept buffers E, F, G, H, I, J.
A control sub system CSS 203 is provided which configures the machine's partitioned and parallel architecture via sets of three state gates, indicated by the dotted lines, and controls the sequencing of the parallel processes which at any time define the state of the machine.
The overall function of such a machine may be described briefly as a WRRnWRW process, that is a write 204, read 205, cascaded read 206, write 207, read 208 and write process (this last performing a "clear" function two frames out of phase with the write 207, but not shown here). Elaborating: a series of parallel asynchronous write cycles "W" 204 constitutes the data input cycle; a series of synchronous parallel cascaded read cycles terminating in a write, "RRnW" 205, 206, 207, forms the processing cycle; a series of parallel synchronous read cycles "R" 208 generates the system output; and a house keeping cycle comprising a series of clear write cycles "W", not shown but two frames out of phase with 207 for a particular set of sets of dedicated output buffers, allows the processes to repeat. The input, processing, output and clear cycles operate in parallel, processing data continuously on a frame basis to effect real time operation, necessitating double buffering of input stacks, parallel operation of the line scan processors and triple buffering of output buffers. The processes may be amplified as follows:-

An input cycle 200 where, for every frame period, vector identity data characterising image pattern thread information for the entire frame from each virtual image sensor in the phased image sensor array V1, V2, and V3 is written asynchronously and in parallel fashion to partitioned high speed input stacks. For one line in frame and vector attribute, the memories 1A and 1B, 2A and 2B, 3A and 3B correspond to the input stacks for virtual image sensors V1, V2, and V3 respectively.
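A minimal software picture of how these cycles overlap frame by frame is given below. It assumes Python dictionaries in place of the hardware stacks and buffers, a frame-parity select for the double buffered input and a modulo-three rotation for the triple buffered output, with the processing and output stages supplied by the caller; the real machine sequences these roles with the F0,1,2 frame counter and three state gates rather than in software.

```python
# Sketch only: the W / RRnW / R / clear-W roles of the WRRnWRW process running in
# parallel on successive frames, with dictionaries standing in for the memories.
class FramePipeline:
    def __init__(self, process, emit):
        self.process = process                    # stands for the RRnW cycle
        self.emit = emit                          # stands for the R output cycle
        self.input_stacks = [{}, {}]              # double buffered input: A / B
        self.output_buffers = [{}, {}, {}]        # triple buffered output

    def run_frame(self, frame_no, new_vectors):
        a_or_b = frame_no % 2                     # which input set is written
        rot = frame_no % 3                        # which output buffer is written
        write_in = self.input_stacks[a_or_b]          # "W": this frame's input
        read_in = self.input_stacks[1 - a_or_b]       # "RRnW": last frame's input
        write_out = self.output_buffers[rot]              # results written now
        read_out = self.output_buffers[(rot - 1) % 3]     # last frame's results
        clear_out = self.output_buffers[(rot - 2) % 3]    # two-frame-old results

        write_in.clear(); write_in.update(new_vectors)            # input cycle
        write_out.clear(); write_out.update(self.process(read_in))  # processing
        output = self.emit(read_out)                              # output cycle
        clear_out.clear()                                         # housekeeping
        return output
```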
A processing cycle 201 where, for every frame, data written during the previous frame is read by line scan processors performing sequential cycles of synchronous parallel reads from the partitioned input stacks, generating, in relation to a reference virtual image sensor V1 stack 1A, 1B for any particular set of sets of vector identities, simultaneous pairs of combinations of vector identities comprising one from the reference virtual image sensor set and one from each of the other sets of vectors comprising the same set of sets of vectors from the other virtual image sensors V2, V3. The combinations of vector identities form compound address identities driving a series of parallel cascaded read operations in the Identity Transform Processors 13TP and 12TP, whose final output in each case is an address definition within an ordered output buffer E, F or G, H or I, J, one such buffer E, G, I or F, H, J for each set of pairs of sets of vector identities, into which the existence in each case of a real or virtual vector intersection is written. This cycle is repeated until all combinations of vector identities between sets of sets of such vectors have been made. It is also possible, as part of the cascaded read sequence, for other processor external parameters to interact and themselves modify address generation; in this particular example such an interaction is the use (not shown in figure 4) of the scanner bearing of the boresight of the virtual sensors' fields of view within their field of regard. No limit is set on the number of such cascaded read operations of the Identity Transform Processors, nor on the size of the address generation at any particular stage of the process, nor on the number of combined operations performed in parallel by the line scan processors. The data sets defining the transforms performed on identity combinations at the various stages of the cascaded Identity Transform Processors are generated off line by high level modelling techniques.
An output cycle 202 where all data processed in the previous frame is now read sequentially within each buffer, and in synchronous parallel fashion between sets of ordered output buffers (only one set shown in figure 4), where the simultaneous existence, from all members of the same set of buffers, of vector pair intersections yields the existence of a multiple vector intersection, one vector from each virtual image sensor, whose address identity is the real or virtual spacial position of an element of topographical detail. This parallel reading of the dedicated output buffers resolves the multiple vector intersection premise and can simultaneously perform range gating to discriminate virtual solutions associated with some topographically symmetrical scenarios. This reading of the dedicated output buffers represents the output of the system's three dimensional data base and is intended to support downstream application processing. The dedicated output buffers are cleared in the frame period following their particular output cycle, which necessitates triple buffering of these partitioned parallel memories.
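In software terms the output cycle reduces to an AND across the pair intercept buffers of one set, cell by cell; the sketch below assumes each buffer is a flat 0/1 list whose index encodes the spacial cell, which is an illustrative simplification of the ordered RAM buffers of the embodiment.

```python
# Sketch of the synchronous parallel output read: a cell flagged in both the
# V1-V2 and the V1-V3 intercept buffer marks a real triple vector intersection,
# and the cell index itself is the (encoded) spacial position.
def read_triple_intercepts(buf_12, buf_13):
    return [cell for cell, (a, b) in enumerate(zip(buf_12, buf_13)) if a and b]

# Example: cells 5 and 9 are set in both buffers, so two elements of detail exist.
# read_triple_intercepts([0,0,0,0,0,1,0,0,0,1], [0,1,0,0,0,1,0,0,0,1]) -> [5, 9]
```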
With reference to figure 5, the WRRnWRW 18 processor (representing, according to system sizing, parallel elements of the Control sub system 203 and Combination synthesiser CSY 209) input system block diagram shows the individual image sensors 1,2,3,4,5,6 comprising the phased virtual sensor array V1, V2, V3. Image sensor synchronisation is provided through the functionality of the Frame line mask FLM circuitry, Sensor clock synchronisation SCS 11 circuitry and Fine line position FLP 12 circuitry. This is achieved using a lost clock principle to bring the various image sensors to synchronisation. The operator wide angle image sensor 7 also derives its clock drive from the same functionality. The Scanner 14 allows controlled electronic scanning of their field of regard by all virtual image sensors, either autonomously or under external processor or manual control.
Composite video CV1,2,3,4,5,6 from the individual image sensors 1,2,3,4,5,6 respectively in the phased virtual image sensor array is processed by the Luminance differential processor LDP 9 circuitry, and binary event data representing upper BEU1,2,3,4,5,6 and lower BEL1,2,3,4,5,6 frequency excursions through set limits are sent to the Scanner SCNR 14 circuitry, where virtual imager subsets UVBE1,2,3 and LVBE1,2,3 for upper and lower frequency excursion events appropriate to the current field of view of the virtual image sensors V1, V2, V3 are extracted and sent to the Stack pointer SP 13 circuitry and WRRnWRW 18 processor.
The Stack pointer SP 13 circuitry is driven by binary event data from each virtual image sensor, for each attribute, in this example upper and lower frequency excursions UVBE1,2,3 and LVBE1,2,3. It generates sequential stack pointer addresses VISP1U, VISP1L, VISP2U, VISP2L, VISP3U, VISP3L for every partitioned virtual image sensor, line scan, and vector attribute determined stack 114, which it passes to the write "W" WRRnWRW 18 processor.
The Video mixer VM 16 and Display D 17 allows operator display of all the individual image sensor or virtual image sensor, normal video or binary event data, Scanner SCNR 14 outputs, or generic pattern derived topography processor read R WRRnWRW 18 outputs DOP.
Data input to the generic pattern derived topography processor WRRnWRW 18 from the Address generation circuitry ADG 10 comprises: a frame counter F0,1,2 allowing the WRRnWRW 18 processor to schedule double buffering of the input stacks 114 and triple buffering of the output vector intercept buffers 116; the time log VI and SI of frequency excursion events, representative of the azimuth and elevation identities of such events respectively within a virtual image sensor's field of view, valid for each virtual image sensor; and the End marker signal EM, representative of the end of luminance signals, valid for each virtual image sensor. The Scanner's SCNR 14 output to the WRRnWRW 18 processor is m4, which identifies the scan position of the virtual image sensors' fields of view boresight within their field of regard; also output are the binary event signals UVBE1,2,3 and LVBE1,2,3 appropriate to the current scan position from each of the virtual image sensors.
Figure 6 represents the physically partitioned double buffered stacks dedicated to each virtual image sensor, line scan and vector attribute. During any particular frame either the A or B set of stacks will be written to by the "W" write processor logic of the WRRnWRW 18, as a function of the F0 signal from the Address generation ADG 10 circuitry, allowing the "R" read processor logic of the WRRnWRW 18 to read the other set. The first subscript denotes virtual image sensor, the second the line scan SI identity, and the third the vector attribute. In this example there are only two such attributes, those of frequency excursions through an upper "U" and lower "L" set limit.
With reference to figure 7, which identifies the elements of the partitioned stack address pointers and data generation: the vector identity VI 20, representative of the azimuth angle in any virtual image sensor's field of view for any frequency excursion, provides common data for all stacks. The stack identity SI 21 reflects the current line scan, that is line in frame, for all virtual image sensors and is used in conjunction with the vector identity stack pointers VISP1U 22, VISP2U 24, VISP3U 26, VISP1L 23, VISP2L 25, VISP3L 27 to generate individual partitioned stack addresses 28,29,30,31 for data to be written to them. The virtual binary event data UVBE1,2,3 and LVBE1,2,3 are used to generate WE commands for their appropriate stacks. With reference to figure 8, representing the organisation between the writing and reading of one stack element comprising MA 34 and MB 25:
During writing, a stack element is selected according to the virtual image sensor, line scan SI 21 identity and particular vector attribute UVBE1,2,3 and LVBE1,2,3. The data written WD is the VI 20 position in line scan of a particular event and is common between all virtual image sensors and stacks for vector identities of different attribute. The stack element address WA is given by VISP1U 22, VISP2U 24, VISP3U 26, VISP1L 23, VISP2L 25 or VISP3L 27. The write memory control "W" WRRnWRW generates a WE signal derived from the binary event data UVBE1,2,3, LVBE1,2,3.
During reading of stack information, cycles of parallel synchronous reads are performed on sets of stacks containing identities of vectors of similar attributes, of events which occurred during corresponding line scans of each of the virtual image sensors. Depending on system sizing, a number of such line scan read processes will be performed in parallel by similar line scan processing elements, each operating on a set of stacks containing vectors of similar attributes. With reference to figure 9, the reading of stacks is again controlled by pointers; a set of such pointers exists for each parallel line scan processing element of the system, allowing selection of a particular set of sensor stacks according to line scan identity and vector attribute. Coordination between such parallel read processes is maintained by pointers, one for each such process, PSn 120. Within such a line scan read processor two pointers control the actual stack addressing: one, P1n 49, is the reference virtual image sensor V1 (n denotes the vector attribute) stack pointer. For every identity read from the reference virtual image sensor V1 stack, all identities in the other virtual image sensors V2 and V3 associated stacks are read; this is controlled in this example by a common pointer P23n 50. Clocking of the reference image sensor stack pointer P1n occurs when the logical end 53 of either of the other virtual image sensor V2, V3 associated stacks is reached. The reference virtual image sensor stack pointer is reset when the end marker 54 for its own data is detected; this also allows each parallel processing element to move on to process further sets of stacks, in this example by using the pointer PSn 120. For the reading of the other virtual image sensor associated stacks, their pointer P23n is reset by the occurrence of an end marker from any virtual image sensor V1, V2, V3 associated stack 53, including the reference stack 54. In this way all combinations of vector identities between sets of associated vectors, and all sets of such sets, are read. The iteration rate of processing solutions depends on the rate at which the P23n pointer can be clocked by ITCLK 48 (a function of the multiple address time in the identity transform processors). It should be noted that all such stack reads are performed in parallel, and sets of stacks of different attributes may also be read simultaneously by the use of their own dedicated stack pointers, so the actual (vector pair intersection) solution rate for this example of three virtual image sensors and two vector attributes is four times (pair of pairs of vector identity combinations) the iteration rate, multiplied by the number of parallel line scan processors.
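The pointer sequencing of figure 9 amounts to a nested scan in which a common pointer steps the V2 and V3 stacks together for each reference V1 identity. A hedged Python rendering is given below; the end-marker sentinel and the assumption that each stack is a simple list terminated by that sentinel are illustrative, and in the hardware the inner step is clocked at the ITCLK rate rather than by a loop.

```python
END = None   # assumed end-marker value terminating every stack in this sketch

def read_combinations(stack_v1, stack_v2, stack_v3):
    """Yield, per iteration, the two identity pairs handed to the transform paths:
    (V1 reference identity, V2 identity) and (V1 reference identity, V3 identity)."""
    p1 = 0                                        # reference pointer P1n
    while stack_v1[p1] is not END:
        p23 = 0                                   # common pointer P23n, reset here
        while stack_v2[p23] is not END and stack_v3[p23] is not END:
            yield (stack_v1[p1], stack_v2[p23]), (stack_v1[p1], stack_v3[p23])
            p23 += 1                              # clocked at the ITCLK 48 rate
        p1 += 1                                   # advance on end of a V2/V3 stack
```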
With reference to figure 10, which indicates how data read, as a result of parallel read "R" WRRnWRW operations, from a particular set of sensor stacks A11U 55, A21U 56, and A31U 57 appears simultaneously on the output lines of these particular stacks and passes through the double buffering of the three state gates 59, 60, 61 under control of the "RRnW" 76 WRRnWRW sequencing. This first read operation now precipitates a series of cascaded reads Rn through read only memory ROM, using the address combinations formed from identities read from the reference virtual image sensor V1 stack 55 and each of the other virtual image sensors V2 and V3 stacks 56 and 57 respectively (in response to address generation P1n and P23n).

The combinations of identities taken from the reference virtual image sensor V1 stack 55 and each of the other image sensors V2 and V3 stacks 56 and 57, and the identities of the real or virtual vector pair intersections defined by the vectors they represent, are known apriori and implicit in the identity transforms held in ROM. In this example the identities simultaneously generated at the ROM1 62 and ROM3 64 outputs represent polar range information of such intersections for the combination of identities from stacks 55 and 56 for V1 and V2, and likewise 55 and 57 for V1 and V3 respectively. It will be noted that in parallel the ROM2 63 address definition has been driven by the reference virtual image sensor V1 vector identity and the Scanner output m4. The ROM2 63 identity transform output represents the vector rotations necessary to support the identification of real vector intersections.
This rotation identity output from ROM2 63 now forms address combinations with the range data identities output from ROM1 62 and ROM3 64, which are cascaded into ROM4 65 and ROM5 66. These ROMs 65, 66 generate as outputs from their identity transforms the identities of the real or virtual vector intersection for the particular combination of vector identities output from stacks 55 and 56. Similarly the identity combination output from ROM2 63 and ROM3 64 forms address combinations into ROM6 67 and ROM7 68, whose identity transform outputs similarly identify the real or virtual vector intersection identities from the combination of vector identities from stacks 55 and 57. The identity combinations generated by ROM4 65, ROM5 66 and ROM6 67, ROM7 68 now form the address definition for the RAM dedicated ordered output buffers 120BU 74 and 130BU 75, into which the existence of these real or virtual vector intersections is now simultaneously written by a write pulse generated by the "RRnW" WRRnWRW 76 control.
The timing of these write pulses is controlled such that the cascaded address generation has become stable following the various stages of the cascaded read operations (and this ultimately controls the P23n iteration clock rate ITCLK 48, figure 9). However, the duration of the complex mathematical computations that have in effect just taken place is many orders of magnitude less than that required for serial computation by software; further, the two results produced simultaneously here represent the output of a single line scan processor element processing members of one set of sets of members of the same vector attribute, while similar line scan processor elements operating in parallel are simultaneously generating their solutions to vector pair intersections from other sets of vector identity stacks. Note that the precision of the result is a function of the sensitivity of the virtual image sensors and is reflected by the vector identities, which determine the maximum size of the address definition passed to ROM1 62, ROM2 63, ROM3 64; however no limit is placed on the address definition for any of the ROM memories used, and by using parallel ROMs at each stage of the process increasing precision can be achieved in the same effective process time as defined by the cascaded ROM identity transform processor described. The memories used as dedicated output buffers 74, 75 are triple buffered, allowing continuous processing of data whereby one frame period is used for writing processor output to the dedicated output buffers, one frame for reading the scenario data base output, and one frame to subsequently clear the data bases. The triple buffering is achieved in similar fashion to the double buffering of data input, except that here three sets (write, read, clear processes) of three state gates are used for address and data definition for each of the three sets of memories comprising the triple buffered output memories, of which only one set, comprising 69,70,71,72,73, is shown in figure 10.
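One cascaded transform path of figure 10 can be mimicked with dictionaries in place of the ROMs, as below. The key layouts, the split of the intersection identity across two tables and the way the output buffer is indexed are assumptions made for this sketch; the essential point carried over from the description is that every run-time step is a table look-up on a compound identity, with all geometry pre-computed off line.

```python
# Sketch of the V1-V2 cascade: ROM1 gives a polar range identity for the pair,
# ROM2 gives a rotation identity from the V1 identity and the scanner bearing m4,
# and ROM4/ROM5 give a two-part intersection identity that addresses the ordered
# output buffer.  The rom* arguments are dicts built off line by a modelling step.
def cascade_v1_v2(id_v1, id_v2, scan_m4, rom1, rom2, rom4, rom5, buffer_12):
    range_id = rom1[(id_v1, id_v2)]        # polar range of the pair intersection
    rot_id = rom2[(id_v1, scan_m4)]        # rotation for the current scan bearing
    cell_hi = rom4[(rot_id, range_id)]     # upper part of the intersection identity
    cell_lo = rom5[(rot_id, range_id)]     # lower part of the intersection identity
    buffer_12[(cell_hi, cell_lo)] = 1      # register existence of the intercept
    return cell_hi, cell_lo
```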
With reference to figure 11, which shows two subsets 74, 75 of one set from the sets of triple buffered (triple buffering not shown) vector pair intercept buffers, holding the output from two line scan processors each having processed a set of sets of different vector identities (in this example with the attributes upper and lower frequency excursions) for the same line in frame. During data base output, the dedicated vector pair intercept buffers containing all the identified vector pair intersections written during the previous frame cycle are read sequentially in synchronous parallel fashion. The address definition for this process is that controlled by the Scanner SCNR 14 output through the action of the Address generation circuitry ADG 10. From the reading of vector pair intersections from sets of sets of such buffers for a line in frame output of the data base, in this example 74, 75 and 88, 89, their (ANDed) simultaneous existence at 106 or 107 respectively implies the existence of a unique triple vector intersection, where the current output buffer address identifies the three axis spacial position of an element of topographical detail.
For the set of sets of intersect buffers, here 74, 75 and 88, 89, being read synchronously and in parallel, their outputs are ORed, forming part (the single line scan output) of the sequential output of the data base DOP 96. The parallel reading of the vector pair intersection output buffers for all other pairs of sets, and all such sets of sets for all the other line scan processor output buffers, constitutes the output representing the whole of the observed scenario's topographical three dimensional detail, which is intended to comprise the input to a more conventional processing system in support of dependent functions such as, for example, scenario navigation. Note again the three state gates 82, 83, 84, 85, 90, 91, 92 and 93 effecting address and data triple buffering, controlled by the "R" WRRnWRW logic. This example allows, for diagnostic purposes, simple operator sensible displays of a two dimensional subset of this data base (any particular set of processors processing sets of sets of vectors with a common azimuth or elevation attribute) by generating a binary event signal associated with the existence of a multiple vector intercept found during the reading of a set of data base buffers. This output is mixed with the Scanner SCNR 14 produced synthetic frame and line sync 88 in the Video mixer 16 to produce a two dimensional display similar to that of a radar display.
With reference to figure 12, all dedicated vector pair intercept buffers in this example must be cleared in the frame cycle following data base output, before receiving new intersect data in the following cycle; this is again effected using addressing from the Address generation circuitry ADG 10. Allowing continuous processing requires triple buffering of the address and data through the action of the three state buffers 98, 99, 100, 101, 102, 103, 104 and 105, controlled here by the clear control logic which comprises the last write "W" of the WRRnWRW 18 sequencing logic.
The high level modelling techniques allow for the identification of imaginary solutions in the definition of the apriori identity transforms for vector intercepts and other functions employed in the system. Since these processors must produce a solution based on their current mix of input parameters, anomalous solutions, for example ranging outside the resolution of the system or ranging behind the system's origin, are identifiable, and for such cases solutions are produced which even in cascaded systems will be recognised as imaginary identities and given a unique and system recognisable identity to allow exclusion in downstream processes.
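A small sketch of the imaginary-identity convention described above: while a transform table is built off line, any out-of-range or behind-the-origin solution is replaced by one reserved identity that later stages and downstream consumers can recognise and drop. The sentinel value and the range limit used here are illustrative assumptions.

```python
IMAGINARY = -1            # assumed reserved, system-recognisable identity
MAX_RANGE_ID = 1023       # assumed resolution limit of the system

def tag_solution(result_id):
    """Used while building a ROM: keep valid identities, map anomalies to IMAGINARY."""
    if result_id is None or not (0 <= result_id <= MAX_RANGE_ID):
        return IMAGINARY
    return result_id
```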
Whilst the mathematics of unique multiple vector intercepts generally works well in the random real world, some situations of symmetry can result in anomalous multiple vector intercepts; these, however, can be recognised as short ranging solutions to the unique real world vector intercepts and can therefore be ignored during the ordered reading of the dedicated vector intercept output buffers.

Claims (7)

  1. A real time generic pattern derived topography processor comprising three or more image sensors, CCD or equivalent, of defined separation, orientation and synchronisation, organised as a phased image sensor array where their fields of view share a common scenario, and one or more processors capable of processing the image sensors' composite video signals to identify and associate binary elements of generic patterns between all image sensors in the phased image sensor array, thereby resolving the system relative spacial position of topographical detail implicit in such correlated generic patterns.
  2. A real time generic pattern derived topography processor as claimed in Claim 1 wherein identification and partitioned storage means are provided which allow binary event signals comprising generic image pattern information, output from an image pattern thread processor or equivalent and representative of vectors of particular and different attributes, to be generated from each of the image sensors' composite video signals, and for each binary event comprising such signals to assign an identity and pass such information as partitioned sets of data to other processes.
  3. A real time generic pattern derived topography processor as claimed in Claim 1 or Claim 2 wherein combination synthesis means are provided which allow combinations of vector identities from sets of sets of such identities to be synthesised and passed to other processes.
  4. A real time generic pattern derived topography processor as claimed in Claim 1 or Claim 2 or Claim 3 wherein identity transform means are provided which support parallel and cascaded operations between sets of sets of synthesised identity combinations, where the apriori transform knowledge allows the solutions to vector equations or other processes to be computed within an effective multiple address time.
  5. A real time generic pattern derived topography processor as claimed in Claim 1 or Claim 2 or Claim 3 or Claim 4 wherein dedicated ordered partitioned buffer means are provided to register the existence of results from parallel identity transform processes and allow, through the cyclic ordered reading in synchronous parallel fashion of sets of sets of such buffers, the output of the system's three dimensional scenario data base, comprising unique multiple vector intercepts.
  6. A real time generic pattern derived topography processor as claimed in any preceding claim wherein processor architecture means are provided to allow dynamic reconfiguration of the parallel processor resources necessary to support real time operation.
  7. A real time generic pattern derived topography processor as claimed in any preceding claim wherein co-ordinated scanning of a common scenario by the image sensors, whilst maintaining their different perspectives, allows improved range discrimination particularly at system extreme ranges.
  8. A real time generic pattern derived topography processor substantially as described herein with reference to Figures 1-12 of the accompanying drawings.
GB9323783A 1993-08-24 1993-11-18 Real time remote sensing topography processor Withdrawn GB2283383A (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
GB9323783A GB2283383A (en) 1993-08-24 1993-11-18 Real time remote sensing topography processor
GB9404654A GB2281464A (en) 1993-08-24 1994-03-10 A graphic macro diagnostic topography processor system
GB9725082A GB2319688B (en) 1993-08-24 1994-08-23 Topography processor system
US08/601,048 US6233361B1 (en) 1993-08-24 1994-08-23 Topography processor system
GB9601754A GB2295741B (en) 1993-08-24 1994-08-23 Topography processor system
PCT/GB1994/001845 WO1995006283A1 (en) 1993-08-24 1994-08-23 Topography processor system
GB9807454A GB2320392B (en) 1993-08-24 1994-08-23 Topography processor system
AU74654/94A AU7465494A (en) 1993-08-24 1994-08-23 Topography processor system

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
GB939317601A GB9317601D0 (en) 1993-08-24 1993-08-24 Real time generic pattern derived topography processor
GB939317602A GB9317602D0 (en) 1993-08-24 1993-08-24 Co-operatively slavable phased virtual image sensor array
GB939317573A GB9317573D0 (en) 1993-08-24 1993-08-24 Virtual image sensor
GB939317600A GB9317600D0 (en) 1993-08-24 1993-08-24 Image pattern thread processor
GB9323783A GB2283383A (en) 1993-08-24 1993-11-18 Real time remote sensing topography processor

Publications (2)

Publication Number Publication Date
GB9323783D0 GB9323783D0 (en) 1994-01-05
GB2283383A true GB2283383A (en) 1995-05-03

Family

ID=27517205

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9323783A Withdrawn GB2283383A (en) 1993-08-24 1993-11-18 Real time remote sensing topography processor

Country Status (1)

Country Link
GB (1) GB2283383A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4370724A (en) * 1979-09-10 1983-01-25 Siemens Aktiengesellschaft Circuit for the sensor controlled distance measurement
EP0121411A2 (en) * 1983-03-31 1984-10-10 Kabushiki Kaisha Toshiba Stereoscopic vision system
WO1988002518A2 (en) * 1986-10-02 1988-04-07 British Aerospace Public Limited Company Real time generation of stereo depth maps
US4916302A (en) * 1985-02-09 1990-04-10 Canon Kabushiki Kaisha Apparatus for and method of measuring distances to objects present in a plurality of directions


Also Published As

Publication number Publication date
GB9323783D0 (en) 1994-01-05

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)