WO2008073081A1 - Method and apparatus for acquiring and processing transducer data - Google Patents

Method and apparatus for acquiring and processing transducer data


Publication number
WO2008073081A1
WO2008073081A1 (PCT/US2006/046979)
Authority
WO
WIPO (PCT)
Prior art keywords
data
transducer
tcf
sensor
transducers
Prior art date
Application number
PCT/US2006/046979
Other languages
English (en)
Inventor
Steven W. Havens
Original Assignee
Havens Steven W
Priority date
Filing date
Publication date
Application filed by Havens Steven W filed Critical Havens Steven W
Priority to PCT/US2006/046979 priority Critical patent/WO2008073081A1/fr
Publication of WO2008073081A1 publication Critical patent/WO2008073081A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04Q SELECTING
    • H04Q 9/00 Arrangements in telecontrol or telemetry systems for selectively calling a substation from a main station, in which substation desired apparatus is selected for applying a control signal thereto or for obtaining measured values therefrom
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • F336I5-02-M-1189 USAF
  • DASG 60-030P-0280 U.S. Army Space and Missile Defense Command
  • the present invention pertains to a method for establishing a common basis for the exchange and processing of transducer (i.e. sensor and emitter) data acquired from a plurality of diverse transducers.
  • the invention pertains to a method and apparatus for providing a common basis for capturing data in real time from diverse transducers.
  • the data may be used immediately or may be archived without corruption.
  • the data produced by various transducers may be generated at different times and at different frequencies or intervals depending on the system requirements. For example, it may be necessary to record the position or temperature of a sensor less frequently than it is necessary to record the image produced by such sensor. This is because the image may change more frequently than the temperature of the sensor, and the motion of the sensor may be uniform, and hence its position may be readily computed.
  • the invention is based upon the discovery that transducer data may be expressed or modeled uniformly in terms of a Transducer Characteristic Frame (TCF). Each transducer will have a unique TCF. All models for a transducer will employ the same TCF structure.
  • TCF Transducer Characteristic Frame
  • FIG. 1 is a schematic illustration of a sensor showing the stimulus and the response characteristics.
  • FIG. 2 is a schematic representation of an in-situ sensor and its corresponding response.
  • FIG. 3 is a schematic block diagram of a remote sensor showing the instantaneous field of view and the corresponding response.
  • Fig. 4 is a schematic illustration of an active remote sensor showing IFOV and the characteristics of the source illumination and response.
  • FIG. 5 is a schematic representation of a scanning IFOV remote sensor showing object and image space.
  • FIG. 6 is a generalized illustration of sensor frames showing dimensionality and relative timing samples for diverse sensors.
  • Fig. 7 is a schematic representation of a framing sensor showing the derivation of look angle vectors.
  • Fig. 8 is a schematic representation of coordinate TCF samples for various sensors.
  • Fig. 9 is a description of spatial and timing data for a line sensor.
  • FIG. 10 is a description of the IFOV distribution for a sensor sample.
  • Fig. 11 is a description of the response spectrum of a sensor over its response range.
  • Fig. 12 is an illustration of the stimulus-response transfer function of an exemplary sensor.
  • Fig. 13 is a schematic illustration of a sample order for a sensor and a corresponding different transmission order for the same device.
  • Fig. 14 is a depiction of a basic Cartesian coordinate system.
  • Fig. 15 is a depiction of a basic polar coordinate system.
  • Fig. 16 is a depiction of an earth-centered coordinate system.
  • Fig. 17 is a depiction of a platform coordinate system.
  • Fig. 18 is a depiction of coordinate transformations in Euler angles.
  • Fig. 19 is an illustration of an exemplary coordinate transformation.
  • Fig. 20 is an illustration of an exemplary interpolation.
  • Fig. 21 is an illustration exemplifying the relationship of sensor data taken at different rates.
  • Fig. 22 is a generalized schematic block diagram of the overall system according to the invention.
  • FIG. 23 is a generalized schematic diagram of a conventional system.
  • Figs. 24a-24c are schematic block diagrams of various embodiments of a generalized system according to the invention.
  • FIG. 25 is a schematic block diagram of an airborne capture and transmit system according to the invention.
  • Fig. 26 is a schematic block diagram of a terrestrial or ground receive, processing and display system according to the invention.
  • Fig. 27 is a graphical representation of a model curve, which exemplifies a frequency response curve.
  • Fig. 28 is an example of an encoding fragment.

DESCRIPTION OF THE INVENTION
  • the invention is directed to a method and apparatus for collecting, organizing, correlating and communicating transducer data from a plurality of diverse transducers.
  • a transducer is any device which produces an output which may be monitored.
  • Transducers may include known sensors or emitters of various types which are discussed below.
  • Transducers may also include devices yet to be developed, but which, according to the invention, may be readily incorporated in accordance with the TCF of the newly developed sensor.
  • a transducer may be defined as any device that produces a response as a function of a stimulus (i.e. a sensor), or a device that produces transmitted energy (i.e. an emitter).
  • the term transducer is general and is used as such; the terms sensor and emitter are particular examples of specialized transducers and are used throughout this discussion as appropriate.
  • Fig. 1 illustrates a sensor producing an output (response) in response to an input (stimulus).
  • There are basically two general types of transducers, namely in-situ transducers and remote transducers.
  • In-situ sensors are transducers that make measurements at the origin of the stimulus. They are typically in contact with an object of which a measurement is made.
  • Some examples are pressure and temperature sensors, rotational encoders, geo-positioning satellites (GPS) and inertial measurement units (IMU).
  • Fig. 2 illustrates an in-situ sensor in contact with an object and its response.
  • a remote transducer, which can be an active or passive sensor, measures characteristics of the environment represented by the state of a remote object.
  • Remote sensors typically measure energy modulation.
  • a remote sensor can be characterized as having an Instantaneous Field Of View (IFOV).
  • Fig. 3 illustrates a passive remote sensor with an IFOV wherein the information detected by the sensor is reflected energy.
  • Such sensors measure the emittance and reflectance of an object.
  • Such sensors typically rely on the illumination from ambient sources for their reflected illumination.
  • Active remote transducers are sensors that provide the necessary illumination in order to measure the reflectance from the object.
  • the characteristics of the illumination source must be known.
  • Fig. 4 illustrates such an active sensor wherein an illumination source or emitter produces illumination and the sensor is responsive to the reflected energy from the object IFOV.
  • the illumination source and sensor are each considered to be transducers.
  • Emitters can be generally characterized as transducers which are the reciprocal of a remote sensor. Instead of an input energy producing a response signal, an emitter uses an input signal to produce an output energy.
  • Remote sensors may be further employed to scan the IFOV in order to cover a wider field of view (FOV) by taking many samples, each sampling a different space with their own IFOV.
  • FOV field of view
  • remote sensors which scan the IFOV in the entire FOV, either by sequencing multiple detectors or by moving the IFOV of a single detector and taking multiple samples from that detector during the motion, are referred to as scanning sensors.
  • the scanning of a remote sensor is typically implied in the data structure of the sensor response.
  • Some imaging sensors rely on IFOV scanning to produce an image of the object space.
  • Some sensors, known as framing sensors, do not scan, but have multiple detectors with individual IFOVs arranged in such a way so as to cover a large field of view.
  • Samples making up a frame from a framing sensor are all from the same instant of time.
  • the samples from a scanning sensor have sequential time sampling over an imaging interval.
  • the invention provides a method and apparatus for combining the data from any number of diverse transducers.
  • Transducers produce measurements.
  • a measurement is a single data point.
  • a transducer sample comprises one or more measurements. Every measurement in a sample corresponds to particular temporal and spatial coordinates.
  • a Transducer Characteristic Frame comprises a set of transducer samples.
  • the TCF is the minimum set of samples that must be considered in order to characterize the transducer.
  • Transducer data is sent in clusters.
  • a cluster is a specified whole TCF.
  • a cluster may be a set fraction of a TCF when the TCF is very large.
  • the invention provides a method and apparatus for: (a) describing (that is, modeling) the particular data structure of any transducer relative to the hierarchy of measurements, samples, TCF and clusters; (b) communicating the data from any transducer using this hierarchy; and (c) correlating the data from any arbitrary set of transducers using the model and data structure.
  • Transducers are normalized by providing a universal method of modeling transducer data based on a transducer characteristic frame (TCF).
  • TCF transducer characteristic frame
  • the Transducer Characteristic Frame is a set of samples comprising a series of measurements organized so as to compartmentalize the measurements in a way that resembles the physical layout of the transducer. If, for example, the transducer is a push broom scanner having 1 x n pixels, the TCF will be organized in a 1 x n array.
  • a model represents a feature of the transducer which may be expressed.
  • a pixel may have a selected orientation with respect to the focus of a camera. This orientation has two spatial components, sometimes referred to as alpha (α) and beta (β) angles, as described using a spherical coordinate system.
  • the model for α is a series of numbers for the α component of each 1 x n pixel; and the β model is a series of numbers for the β component of each 1 x n pixel.
  • Other features, described hereinafter, will be modeled in the same way so that all models of data look like the TCF of the transducer.
  • the models appear as layers having the same appearance or corresponding properties and characteristics of a transducer.
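As a rough sketch of this layering idea (the class and method names here are illustrative assumptions, not taken from the patent), a TCF for a small push broom scanner can be represented as a fixed shape, with each model stored as a layer of that same shape:

```python
# Sketch: every model of a transducer shares the shape of its TCF.
# The names TCF and add_model are illustrative, not from the patent.

class TCF:
    def __init__(self, rows, cols):
        self.shape = (rows, cols)      # fixed frame geometry
        self.models = {}               # named model layers, all the same shape

    def add_model(self, name, values):
        # every model layer must mirror the TCF layout exactly
        if len(values) != self.shape[0] or any(len(r) != self.shape[1] for r in values):
            raise ValueError("model layer must match the TCF shape")
        self.models[name] = values

# A 1 x 4 push broom scanner: the alpha and beta look-angle models
# share the 1 x 4 layout of the measurement TCF.
tcf = TCF(1, 4)
tcf.add_model("alpha", [[0.0, 0.0, 0.0, 0.0]])
tcf.add_model("beta",  [[-0.3, -0.1, 0.1, 0.3]])
print(sorted(tcf.models))
```

Any further model (temporal offsets, sequence positions, gain) would be added the same way, so all layers "look like" the TCF of the device.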
  • the dimensionality characteristic of the TCF is the number of object space coordinates needed to specify the spatial characteristics of each transducer sample relative to a transducer reference system. In normal three dimensional space, the dimensionality can be zero, one, two or three. It should be understood that dimensionality is not so limited, but may be easily expanded if desired.
  • Each dimension of the TCF can be assigned a spatial coordinate from one of the coordinate systems.
  • the object space can be either Cartesian coordinates, i.e. x, y, z coordinates, or spherical coordinates, i.e. α, β, r coordinates.
  • any coordinate system, such as cylindrical, may be used.
  • In-situ transducers which have no IFOV have a single sample and have a dimensionality of zero.
  • zero dimensional (that is, non-dimensional) transducers include rotational encoders, thermocouples, voltmeters, global position system (GPS), microphones and inertial navigation sensors shown in Fig. 6.
  • Non-dimensional transducers are usually in-situ sensors.
  • A single sample may have one or more measurements.
  • a thermocouple may give a temperature measurement.
  • a global positioning system (GPS), on the other hand, may produce latitude, longitude and altitude measurements in a single sample.
  • One dimensional TCFs use one coordinate from the set of object space coordinates to characterize the spatial characteristics of each sample within the TCF.
  • a radar sensor or a depth sounder are examples of a one-dimensional TCF, because each sample in the TCF represents the response or stimulus at a certain range from the transducer. See Fig. 6.
  • Two dimensional TCFs use two coordinates to characterize the spatial relationship of each sample.
  • Most imaging sensors have a two-dimensional TCF.
  • An n x m framing sensor or a pan scan sensor, depicted in Fig. 6, are examples of two dimensional sensors.
  • the dimensions of the TCF can be any pair of coordinates taken from the coordinate systems (Cartesian, spherical, etc.).
  • all TCF models of a particular transducer have the same TCF configuration. If the TCF is 11,000 samples in a 50 by 220 grid, then all models will have 11,000 values in a 50 by 220 grid. If the TCF is a 4 x 4 grid, the models of the data will be 4 x 4 as well. There is a one to one correlation between the samples of the measurement TCF and the tcf_models. Some common models of transducers include a coordinate TCF model; a temporal TCF model; and a sequence TCF model. Certain transducers have additional models which may be employed to describe changes between samples particular to the transducer samples, for example, radiometric gain.
  • a coordinate model defines the spatial ambiguity of transducer samples.
  • the coordinate model associates the transducer samples to the physical world.
  • in a framing sensor camera, for example, having a 4 x 4 array of 16 pixels, as depicted in Figs. 6 and 7, each sample has two-dimensional spatial components, namely α and β, associated with each sample dimension, x and y.
  • 2 dimensions modeling 3-dimensional space leaves one dimension ambiguous.
  • the spatial ambiguity for a camera, illustrated in Fig. 7, is based on the fact that rays pass through a vertex point located at the origin of the transducer reference system to strike the transducer lying in a focal plane.
  • each pixel has a corresponding alpha and beta measurement associated therewith.
  • the alpha and beta measurements are contained in a TCF structure called a tcf_model, so that the data is self-consistent. Examples of the alpha and beta tcf_models for the arrangement of Fig. 7 are illustrated in Table I.
  • Fig. 8 illustrates TCFs for four types of sensors including a 4 x 4 framing sensor, a 1 x 16 pushbroom sensor, a 2 x 13 line scan sensor, and a 1 x 12 conical scan sensor.
  • the TCF for the 4 x 4 framing sensor is similar to the arrangements of Figs. 6 and 7.
  • the alpha and beta values in the TCF for certain ones of the measurements are shown schematically as arrows labeled for the particular pixel.
  • the orientation of the transducer reference system is chosen such that the set of coordinates chosen model the ambiguity space (if any) of each sample.
  • the pushbroom sensor has alpha and beta measurements for the sample look vector in the TCF shown as arrows.
  • the line scanner shown in Fig. 8 has the alpha and beta values identified in accordance with the TCF of a line scanner.
  • the conical scan sensor also shown in Fig. 8 has a model of the alpha and beta values described as a 1 x 12 array of arrows.
  • Fig. 9 illustrates an example of a simple two-dimensional coordinate model for a 1 x 7 sensor.
  • Each tick represents an interval in a coordinate model. The ticks allow the range of the coordinate to change without changing the tick count for each sample.
  • sample image space coordinates and vector frame α and β have the same TCF configuration for a 1 x n sensor.
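The tick scheme above can be sketched as follows (the function and variable names are illustrative assumptions, not the patent's): tick counts stay fixed per sample while the physical range they map onto may change.

```python
# Sketch: a coordinate model stores integer tick counts per sample; the
# coordinate range can be rescaled without renumbering the ticks.
# Names are illustrative, not taken from the patent.

def ticks_to_coords(ticks, range_min, range_max, ticks_per_range):
    """Map integer tick counts onto a physical coordinate range."""
    unit = (range_max - range_min) / ticks_per_range
    return [range_min + t * unit for t in ticks]

# A 1 x 7 sensor: each sample's coordinate offset expressed in ticks.
sample_ticks = [0, 10, 20, 30, 40, 50, 60]
coords = ticks_to_coords(sample_ticks, -0.3, 0.3, 60)  # e.g. a look angle span
print(coords[0], coords[-1])
```

Changing only `range_min`/`range_max` rescales all coordinates while `sample_ticks` stays untouched, which is the point of the tick representation.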
  • Fig. 11 is a normalized response function showing frequency response over a nominal bandwidth.
  • Fig. 12 shows the normalized input/output transfer function over a range of stimuli.
  • the data representing IFOV response, frequency response and I/O response may be expressed in terms of the TCF for each sensor element (i.e. may vary as a function of sample position within the TCF).
  • the invention thus provides a means for not only modeling the sample measurements, but also a means for modeling the various characteristics associated with those transducer measurements.
  • a sample has a set number of measurements.
  • Each measurement has an arbitrary number of properties and characteristics.
  • properties are simple name-value pairs, e.g. frequency - Hz; angle - radians; volume - dB; color - yellow; and the like.
  • Characteristics are a combination of related properties. Characteristics may also include a curve, such as a simple sequence of numbers that may be interpreted as a graph or curve. These curves represent variations inherent in measurements. In a camera, for example, depending on the position of the pixel with respect to the central axis, the sensitivity of the pixel may be higher or lower than a nearby pixel. Alternatively, the frequency response of a sensor may vary over a range. The curve in Fig. 11 illustrates this characteristic.
  • Fig. 12 illustrates a response of a pixel or detector compared with the stimulus. In other words, the response is a function of the stimulus and must be taken into account.
  • the correction factor for each pixel can be characterized by using a TCF_model of the transducer.
  • the temporal model defines the relative time delays or offsets of each sample within a TCF relative to the first sample.
  • the temporal model uses the TCF.
  • the values in the temporal model are given in time intervals also referred to as ticks.
  • Table III illustrates an example of a two dimensional temporal model using time ticks. TABLE III
  • Functions or curves can be described by using a numeric function model or f_model.
  • the range and/or endpoints of the independent variable come from the calling element for the f_model.
  • the f_model contains the set of data points representing the dependent variable, spread linearly or logarithmically across the range of the independent variable. There may be one or two independent variables, such that two or three dimensional functions can be modeled.
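A minimal sketch of evaluating such an f_model by linear interpolation across its independent range (the function name, linear spacing, and sample values are assumptions for illustration, not the patent's definition):

```python
# Sketch of an f_model: dependent values spread linearly across the range
# of an independent variable supplied by the calling element.
# Names and values are illustrative, not taken from the patent.

def eval_f_model(points, x_min, x_max, x):
    """Linearly interpolate a sampled function over [x_min, x_max]."""
    if not x_min <= x <= x_max:
        raise ValueError("x outside the model's independent range")
    pos = (x - x_min) / (x_max - x_min) * (len(points) - 1)
    i = min(int(pos), len(points) - 2)   # index of the left-hand data point
    frac = pos - i
    return points[i] * (1 - frac) + points[i + 1] * frac

# A frequency-response curve sampled at 5 points over 0..100 Hz.
response = [0.0, 0.8, 1.0, 0.7, 0.2]
print(eval_f_model(response, 0.0, 100.0, 50.0))   # mid-band response
```

A logarithmic spread would only change how `pos` is computed from `x`; the stored data points stay the same.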
  • in a framing sensor such as shown in Fig. 6, all samples of a TCF are taken simultaneously; that is, there is no time offset between the first measurement in a sample and any other measurement in a sample in the frame. Accordingly, the temporal model of the framing sensor is represented by a 4 x 4 matrix of zeros in each of the boxes.
  • Fig. 6 also shows a pan scan sensor, where each sample in a line is taken at the same time.
  • the second, third and fourth lines are sampled on later ticks.
  • the first line is at tick time zero (0) and lines 2, 3 and 4 are on successive corresponding ticks (1), (2), and (3) respectively, as illustrated by the numerals inscribed in the boxes.
  • tick increments can and should be much faster than the increment between samples so that nonlinearities in timing can be characterized.
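A minimal sketch of turning a temporal model's tick offsets into absolute sample times (the helper name, frame start, and tick duration are illustrative assumptions):

```python
# Sketch: a temporal model holds per-sample tick offsets relative to the
# first sample; absolute times come from a frame start time plus the tick
# duration. Names are illustrative, not taken from the patent.

def sample_times(temporal_model, frame_start, tick_seconds):
    """Convert a grid of tick offsets into absolute sample times."""
    return [[frame_start + t * tick_seconds for t in row] for row in temporal_model]

# 4 x 4 pan scan sensor: each line is sampled one tick after the previous.
pan_scan_ticks = [[0, 0, 0, 0],
                  [1, 1, 1, 1],
                  [2, 2, 2, 2],
                  [3, 3, 3, 3]]
times = sample_times(pan_scan_ticks, frame_start=100.0, tick_seconds=0.005)
print(times[0][0], times[3][0])   # first and last line times
```

A framing sensor would simply carry an all-zero tick grid, so every sample resolves to the frame start time.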
  • the TCF is comprised of a set number of samples. Each sample is comprised of a set number of measurements. For instance, a camera may have 1000 by 1000 samples, called pixels, within its characteristic frame. In the example, the 1000 x 1000 samples make an image having 1 million pixels (or 1 megapixel).
  • each sample has one measurement that may be a gradient of black, e.g. 256 grey scales.
  • each sample has three measurements, one for the gradient of cyan, magenta and yellow, or red, green, blue.
  • the samples are contained in the TCF, and there will be a total of 1 million samples.
  • Each sample within the TCF has a corresponding coordinate, time and sequence to describe its relative or internal spatial orientation, its internal or relative timing relative to other samples within the TCF, and its sequence order in the transmission stream such that it can be sorted into its internal sample sequence.
  • the data will be sent in a string of binary data.
  • the data may look like a string of numbers ***, ***; ***. This data string represents, for example, grey scales which follow the TCF of the sensor.
  • the device is a color camera, which has three measurements, for example Red (R), Green (G), and Blue (B), for each sample.
  • the data will be sent by interleaving the binary data from each measurement.
  • the data will be in groups of three measurements which have the form: ***, &&&, %%%; ***, &&&, %%%.
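The interleaved transmission described above can be sketched as follows (the helper names are illustrative assumptions, not the patent's terminology):

```python
# Sketch: the measurements of each sample are interleaved in the serial
# stream (R, G, B, R, G, B, ...) and de-interleaved on receipt.
# Helper names are illustrative, not taken from the patent.

def interleave(samples):
    """Flatten [(r, g, b), ...] into a serial measurement stream."""
    return [m for sample in samples for m in sample]

def deinterleave(stream, measurements_per_sample):
    """Regroup a serial stream back into per-sample tuples."""
    return [tuple(stream[i:i + measurements_per_sample])
            for i in range(0, len(stream), measurements_per_sample)]

pixels = [(255, 0, 0), (0, 255, 0)]     # two RGB samples
stream = interleave(pixels)             # serial transmission order
print(stream)
assert deinterleave(stream, 3) == pixels
```

The grey-scale case is the degenerate one-measurement-per-sample form of the same scheme.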
  • the sampling order is the order in which the samples are taken.
  • the sequence of samples can be any desired order.
  • the sampling order is given by the spatial or temporal orientation of the samples within a TCF. This order may be disturbed during the serial transmission of the data.
  • the order in which the samples are transported shall be the same as the order in which the timing and coordinate TCFs are transported.
  • To retrieve the original order, the coordinate TCFs can be sorted to retrieve the spatial order.
  • the original temporal order will then result from the similar sorting of its TCF.
  • a TCF will be used which gives the intended numeric position of each sample in the transported TCF (Fig. 13).
  • the sequence TCF may vary greatly and may not always represent a left-to-right, top-to-bottom, front-to-back scan of the TCF structure.
  • One way to organize the data such that it is spatially correct is to sort the data samples according to the coordinates of the coordinate TCFs. This sorting, although possible, may be a computationally intensive task. To facilitate the sorting, a sequencing TCF is introduced (Fig. 13).
  • in situations where the intended organization of the data is not orthogonal (e.g. a random spatial distribution)
  • the TCF is a 2-dimensional structure
  • two sequencing TCFs would be used, one sequence TCF for each coordinate.
  • One sequence TCF would indicate the column position of the sample and the other sequence TCF would indicate the row position of the sample. Samples do not need to be positioned at every column-row ordered pair.
  • if the spatial structure of the data is not orthogonal, then the non-orthogonal structure shall be described using an all-inclusive orthogonal coordinate space.
  • Known approaches to the sequencing TCF implement the serial sequence number or the coordinate sets to represent the row, column, plane position of the samples.
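The sequence-TCF idea can be sketched for the one-dimensional case as follows (names and sample values are illustrative assumptions, not taken from the patent):

```python
# Sketch: a sequence TCF gives the intended numeric position of each
# transported sample, so the receiver can restore the spatial order
# directly instead of sorting the coordinate TCFs.
# Names are illustrative, not taken from the patent.

def restore_order(transported, sequence_tcf):
    """Place each transported sample at its intended index."""
    restored = [None] * len(transported)
    for sample, position in zip(transported, sequence_tcf):
        restored[position] = sample
    return restored

# Samples arrive in transmission order; the sequence TCF says where each goes.
arrived      = ["s2", "s0", "s3", "s1"]
sequence_tcf = [2, 0, 3, 1]
print(restore_order(arrived, sequence_tcf))   # ['s0', 's1', 's2', 's3']
```

For a 2-dimensional TCF the same placement would use a row-sequence TCF and a column-sequence TCF as a pair of indices.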
  • Encoding is a characteristic that must be defined for each measurement.
  • the encoding characteristic defines the bits, the data type, the units, and the range properties of a measurement.
  • the encoding characteristic provides the information required to allow applications to parse data within a cluster.
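A minimal sketch of how an encoding characteristic might drive cluster parsing (the field names and binary layout here are assumptions for illustration, not the patent's schema):

```python
# Sketch: an encoding characteristic (bits, data type, units, range) tells
# an application how to parse raw measurements out of a cluster.
# Field names are illustrative, not taken from the patent.
import struct

encoding = {"fmt": "<H", "bits": 16, "units": "counts", "range": (0, 65535)}

def parse_cluster(raw, encoding, count):
    """Extract `count` measurements from a binary cluster."""
    size = struct.calcsize(encoding["fmt"])
    return [struct.unpack_from(encoding["fmt"], raw, i * size)[0]
            for i in range(count)]

raw = struct.pack("<4H", 10, 20, 30, 40)   # four 16-bit measurements
print(parse_cluster(raw, encoding, 4))
```

Because the encoding travels with the model, a receiver can parse clusters from any transducer without device-specific code.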
  • the model can provide any number of characteristics for a particular measurement. Some characteristics include frequency response, instantaneous field of view and gain.
  • the model can specify dependency, which is defined as a condition where the value of a property is dependent upon another property, or is dependent upon a measurement value generated by another transducer. As indicated above, a measurement is specified or identified as a name-value pair.
  • in a dependency, on the other hand, a property has a name-dependency identifier, e.g. gain - temperature, and the like, rather than a name-value pair.
  • the invention uses the dependency identifiers to define the relationship between transducers to thereby define a system.
  • a system is an arbitrary set of transducers.
  • the invention characterizes a system by providing the individual models of the transducers and then specifying the interdependency of the properties of the transducers using dependency identifiers.
  • a first transducer may have variable gain dependent upon temperature.
  • a second transducer may be a gain sensor.
  • the interdependencies are specified outside of the sensor models themselves. This approach enables system specifications to incorporate sensor models without changing the sensor models. That is, systems can utilize "plug and play" sensor models.
  • the transducer orientation characterizes the space-time relationship or geometry of the transducer data.
  • the interior and exterior orientation of a transducer complement each other to give a complete space-time relationship of the data.
  • the interior orientation is an orientation that remains constant with respect to the transducer reference frame, independent of transducer position, attitude, motion or time. This orientation accounts for any of the scanning mechanics or the space and time relationships between the samples within the transducer characteristic frame.
  • the external orientation characterizes the position, attitude and timing relationship of the transducer reference system with respect to an external reference system.
  • the world reference system is an external spatial reference system that will be the common reference system for all geo-spatial data (e.g. the ECEF reference system).
  • Figs. 14 and 15 respectively show the Cartesian and polar coordinate systems used to describe coordinates.
  • Figs. 16 and 17 show two reference systems used in this discussion.
  • Fig. 16 shows an Earth Centered Earth Fixed (ECEF) coordinate system (further defined by WGS-84).
  • Fig. 17 shows a transducer reference system. If a platform reference system is required, a transducer shall be assigned to it so that it can be measured. There is no assigned orientation of the x, y, and z axes to the transducer.
  • ECEF Earth Centered Earth Fixed
  • any orientation may be used, depending on which orientation works best for characterizing the interior orientation of the transducer data.
  • the description of the interior orientation may be expressed in terms of selected system coordinates (x, y, z, α, β, r). These coordinate assignments may be used to describe the interior orientation of the coordinate system axes to the transducer data.
  • Fig. 18 shows the convention used for determining the Euler angles (ω, φ, κ) for rotation transforms.
  • Position may be measured with Cartesian or spherical coordinates.
  • the attitude is measured with (ω, φ, κ), known as Euler angles.
  • Fig. 19 schematically illustrates an exemplary sensor S in coordinate frame F1, expressed as x1, y1, z1, secured in a platform (e.g. in an aircraft).
  • the sensor is attached to an aircraft A in coordinate frame F2, expressed as x2, y2, z2, by an arm of length R1.
  • An IMU on the aircraft senses the attitude of the platform frame in coordinates ω, φ, κ; and a GPS senses the position of the sensors in the platform relative to the Earth.
  • the attitude of the sensor S with respect to the IMU is given as a quantity derived from gimbal sensors Sx, Sy, Sz in frame F2, expressed as gx2, gy2, gz2. Accordingly, all necessary coordinates are available. It is not unusual for the frames to have a selectable or time-varying attitude, which may be measured and recorded over time.
  • the attitude of the sensor frame S may be fixed with respect to the position of the IMU, and the position may be found relative to the GPS.
  • the attitude of the IMU may be translated to the GPS position assuming the IMU and GPS form a rigid body.
  • the position dependency (x1, y1, z1) of the sensor frame F1 with respect to the aircraft frame F2 may be expressed as fixed numbers, such as (12.005, -4.452, 0.216), because the arm length R1 is fixed.
  • these numbers represent the fixed positional difference, i.e. ((x1-x2), (y1-y2), (z1-z2)), between the origins of the frames F1 and F2.
  • attitude dependency (ω, φ, κ) is specified as fixed numbers, such as (0.86, -0.86, 0.13). These numbers represent the difference angle between the axes of one transducer with respect to another transducer.
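The fixed attitude and position dependencies described above can be illustrated with a minimal coordinate-transformation sketch (the rotation-axis convention and helper names are assumptions for illustration; the patent's Fig. 18 convention may differ):

```python
# Sketch: composing a fixed attitude dependency (Euler angles) and a fixed
# positional offset to move a point from one transducer frame to another.
# The z-y-x rotation order here is an assumed convention.
import math

def euler_matrix(omega, phi, kappa):
    """Rotation matrix from Euler angles in radians, Rz(kappa)Ry(phi)Rx(omega)."""
    cw, sw = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    return [
        [ck * cp, ck * sp * sw - sk * cw, ck * sp * cw + sk * sw],
        [sk * cp, sk * sp * sw + ck * cw, sk * sp * cw - ck * sw],
        [-sp,     cp * sw,                cp * cw],
    ]

def transform(point, angles, offset):
    """Rotate a point by the attitude dependency, then apply the position offset."""
    r = euler_matrix(*angles)
    rotated = [sum(r[i][j] * point[j] for j in range(3)) for i in range(3)]
    return [rotated[i] + offset[i] for i in range(3)]

# Identity attitude plus the fixed arm offset between frames F1 and F2.
print(transform([1.0, 0.0, 0.0], (0.0, 0.0, 0.0), (12.005, -4.452, 0.216)))
```

Chaining several such transforms (sensor to gimbal to IMU to GPS to ECEF) resolves the full exterior orientation of the data.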
  • positional transformations define the relationship between coordinate systems for related transducer frames. If the attitude varies, then the attitude dependency will retarget the appropriate sensor.
  • Positioning sensors are treated like any other transducer. This approach is an important concept of the invention. Position dependency may be specified based on the value of a transducer measurement. For example, the gimbal sensors (Sx, Sy, Sz) may measure the attitude of a transducer relative to the attitude of an IMU. The position of a global positioning system (GPS) sensor with respect to Earth Centered Earth Fixed is dependent upon the position measurements measured by itself.
  • GPS global positioning system
  • the attitude or position reading of a transducer is handled the same way as any other data.
  • the TCF of the gimbal is defined uniquely for the gimbal, and the TCF of the image sensor is defined uniquely for the image sensor. Timing and sequencing may be different, but again, these are handled in accordance with the TCF of the sensor. All data models and identifiers follow or are layered on the TCF of the corresponding device. Therefore, the system has a uniform and generic process for handling and communicating information.
  • because the data is accurately timed and sequenced, it is possible to relate the data of different transducers in space and time.
  • Sensor S is a scanning sensor or camera in sensor frame F1.
  • Sensor S is attached to an arm of a given length R1.
  • the arm is attached to an aircraft A in a platform frame F2, with attitude measured by an IMU and position measured by a GPS.
  • Sensor S2 (Sx, Sy, Sz) comprises roll gimbal encoders that measure the attitude of sensor S relative to the aircraft, gx2, gy2, gz2.
  • Sensor S3 is a global position system GPS that measures the position of the platform relative to an earth-centered earth-fixed (ECEF) coordinate system.
  • Sensor S4 is an inertial measurement unit IMU that measures the attitude of the aircraft relative to ECEF.
  • transducer system topology provides the fundamental descriptions of how all of the transducer data relates Not all systems are alike so the system topology is desc ⁇ bed on a system to system basis.
  • This specification defines four types of relations Attached, Dangled, Position, and Attitude.
  • An Attached sensor is typically an m-srtu sensor measuring other parameters to support its host sensor
  • the Attached relationship will be described in the attached sensor's nomenclature.
  • An example of an attached sensor would be a diagnostic sensor attached to the primary imaging sensor, measuring another variable (such as vibration or temperature).
  • An Attached element is empty and simply references another sensor. The presence of an Attached element means that the sensor referenced by the dependency element should be treated as if it had the exact same location and attitude as the sensor referenced by the Attached element
  • the Attached relation is used to attach sensors to transducer characteristics which describe changing parameters about a transducer system, such as receiver gain
  • the Attached relation implies that there is a characteristic to "hook to”.
  • the sensor measures a changing parameter for one of the transducer characteristics that TML models
  • the Dangle dependency is like the attached dependency except that there is no internal hook to a transducer characteristic.
  • the Dangle transducer simply hangs off of another transducer and provides additional measurements relating to the transducer as a whole
  • An example of a dangle relation would be a temperature measurement of a transducer's detector, or the vibration load on a particular transmitter.
  • the Position relation identifies the position of a transducer relative to the earth or another transducer
  • the Position can be a fixed location or it can be variable, where the position is measured by a sensor
  • the Attitude relation is similar to the Position relation except that the orientation of a transducer is described relative to the earth or another transducer. If the orientation is variable, the orientation may be described with a sensor.
  • the Position and Attitude relations are the principal relations for determining the exterior orientation of any transducer.
  • the invention also provides a method for communicating data.
  • the transducer models are sent first, followed by the actual data generated by the transducers.
  • the models enable applications to correlate the data of transducers by describing (1) what the data represents, (2) how to parse the clusters of data that are sent and (3) how to calculate the dependencies in the data, especially the dependencies of position and attitude.
  • Each transducer broadcasts data in clusters.
  • the transducer model defines the size of the cluster.
  • the transducer broadcasts these clusters at its own rate.
  • Each cluster has a time stamp.
  • the cluster contains either a set number of transducer characteristic frames (TCF) or a set fraction of a TCF.
  • TCF transducer characteristic frames
  • Each TCF contains a specified number of samples.
  • the temporal model of the transducer specifies the time relationship between the time stamp of the cluster and the samples within a cluster.
  • An application uses the temporal model to calculate the time of a specific sample within a cluster.
  • the time stamp on the cluster represents the time of the first TCF in the cluster. If multiple TCFs are in a cluster, the other TCF time stamps can be calculated by adding the TCF period to the time stamp. If a TCF is broken into multiple clusters, all clusters shall have the same time stamp.
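The timing rules above can be sketched in code (a minimal illustration; the function and variable names are ours, not part of the specification):

```python
def tcf_time_stamps(cluster_time_stamp, tcf_count, tcf_period):
    """Time stamps of each TCF in a cluster.

    The cluster's time stamp is that of its first TCF; subsequent
    TCF time stamps are found by adding the TCF period.
    """
    return [cluster_time_stamp + i * tcf_period for i in range(tcf_count)]

# A cluster stamped at t=1000 holding 4 TCFs with a period of 25 ticks:
print(tcf_time_stamps(1000, 4, 25))  # [1000, 1025, 1050, 1075]
```

A TCF split across several clusters would simply repeat the same cluster time stamp, consistent with the rule above.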
  • a system defines that the properties of certain transducers are dependent upon the values of the data created by other transducers. Most notably, the position of one transducer will be dependent upon the readings of a position sensor. Since each transducer or sensor broadcasts at its own rate, there will generally not be samples from two transducers with the exact same time stamp. The resultant value for a transducer is calculated by interpolating the values from the other transducer.
  • the following example is intended to illustrate how to interpolate these dependent values in a system of three transducers.
  • the system includes an attitude sensor Sa; a position sensor Sp; and an image sensor Si.
  • the system broadcasts the following clusters depicted in Fig. 20 with the specified time stamps.
  • the image sensor has 100 TCFs in each cluster. Each TCF is one tick later in the time stamp. It is possible to calculate the point on the earth to which a set of pixels (samples) in a picture is pointing. For example, if the first sensor image is initiated at time stamp 2789, as shown, then the 32nd pixel in the cluster has time stamp 2821, i.e., 2789 + 32 (one tick per pixel). The time in question, i.e., of the 32nd pixel, Ti, is therefore 2821.
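This interpolation can be sketched as follows. The pixel timing follows the example above; the attitude sensor times and values used here are invented purely for demonstration:

```python
def sample_time(cluster_stamp, index, tick=1):
    """Time of the index-th TCF/pixel in a cluster (one tick per pixel)."""
    return cluster_stamp + index * tick

def interpolate(t, t0, v0, t1, v1):
    """Linearly interpolate a dependent value (e.g. an attitude angle)
    at time t from the two broadcast samples that bracket it."""
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# 32nd pixel of a cluster stamped 2789 -> time 2821, as in the example.
ti = sample_time(2789, 32)
print(ti)  # 2821

# Suppose attitude sensor Sa broadcast 10.0 deg at t=2800 and 12.0 deg
# at t=2850 (hypothetical values); the attitude at the pixel time is:
print(interpolate(ti, 2800, 10.0, 2850, 12.0))  # 10.84
```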
  • the invention may be described as a method and apparatus for acquiring, in a universal way, transducer data from a plurality of diverse sensors or emitters.
  • the method is particularly useful for efficient and accurate real-time capture and observation of the data.
  • the invention facilitates real time capture and utilization of the data because the data is presented in such a way that pertinent information is modeled in accordance with the Transducer Characteristic Frame (TCF).
  • TCF Transducer Characteristic Frame
  • the data follows a scheme which is uniform and self consistent, and which permits the system to readily accept new forms of transducers as they become available without significant modification of the system.
  • the system accepts transducers as so-called "plug and play" devices.
  • the invention allows for accurate and precise acquisition of transducer data which may be readily processed, interpreted, archived and retrieved with known accuracy and precision and without corruption of the acquired data.
  • the invention compartmentalizes the information associated with each transducer sensor in such a way that it is possible to collect the information with reduced overhead.
  • Transducers have diverse characteristics tailored to function or performance requirements. However, any transducer may be characterized in accordance with the model described herein, which exemplifies the essential characteristics of the transducer. The TCF is only part of the characterization. Thus there is a self consistency of all models of the data for any transducer.
  • the sensor response is still fully characterized by the "what", "where", and "when" characteristics.
  • the "what" characteristics describe: what is being measured; what encoding and formatting rules are used to describe the measurement; the units of the measurement; the uncertainties (absolute and relative) of the measurement; the frequency response of the detector; the input-output transfer function; and the instantaneous field of view.
  • the "where" characteristics describe where in space (spatial position) the measurement corresponds.
  • the spatial relationship of the sensor with respect to the platform is characterized by the sensed orientation of the platform and a time tag. If one wishes to characterize the position of the platform relative to some other location, for example an earth surface station, the position and orientation of the platform relative to the earth is sensed and time tagged.
  • a time tag maintains relative timing between samples and frames, and an absolute time can be measured with a time sensor measurement which has a relative time tag associated with it.
  • Time tags give relative timing between TCFs
  • the timing TCF gives relative timing within the TCF, and the world clock sensor provides absolute time. The time tag in the start tag of the world time sensor correlates world time to the system time tag.
  • the sensor data may be fused or summed with the platform data; and the platform data may be fused with the earth station data.
  • the raw data for each sensor is collected independently of other sensors. The arrangement therefore simplifies data collection because complex calculation steps are not performed prior to collecting and characterizing the data. The arrangement thus avoids problems associated with data corruption, because data is preserved as it is taken, without modification.
  • In Fig. 19, the chain of relationships is traceable back to some desired reference point, e.g., an Earth Centered Earth Fixed (ECEF) reference system. All sensors should be traceable to ECEF.
  • the position of the aircraft A relative to the Earth is defined by the earth platform vector R2, which can be characterized as an absolute radial distance with an azimuth and elevation.
  • the vector R2 may be characterized by Cartesian or spherical coordinates.
  • any consistent coordinate system may be employed to characterize these data. Accordingly, it is possible to know in real time the position/attitude of the sensor relative to the ECEF reference system.
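As a sketch of such a characterization, the vector can be converted between Cartesian form and the radial-distance/azimuth/elevation form mentioned above. This is a minimal illustration under one common spherical convention (azimuth in the x-y plane from +x, elevation toward +z); it is not a prescribed part of the invention:

```python
import math

def cartesian_to_spherical(x, y, z):
    """Radial distance, azimuth and elevation from Cartesian coordinates."""
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    elevation = math.asin(z / r) if r else 0.0
    return r, azimuth, elevation

def spherical_to_cartesian(r, azimuth, elevation):
    """Inverse conversion back to Cartesian coordinates."""
    x = r * math.cos(elevation) * math.cos(azimuth)
    y = r * math.cos(elevation) * math.sin(azimuth)
    z = r * math.sin(elevation)
    return x, y, z

# Round trip for an example platform vector:
r, az, el = cartesian_to_spherical(3.0, 4.0, 0.0)
print(r)  # 5.0
```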
  • the above described characterization of sensor data transforms one reference system, for example the reference system of the sensor, to an ECEF reference system.
  • In order to model the sensor data and represent it as a temporal model, each sample must have associated with it the time when it was acquired and what was being sampled at that instant. For example, in the sensor data frame, there will be associated data in similar arrays to describe the timing and spatial data for each sample. The values in each corresponding location of the timing tables relate to the relative time that the sample was acquired in relation to the other samples in the frame. The sampling rate within a TCF, as well as the rate at which TCFs are acquired, may be quite different for different sensors.
  • Fig. 21 illustrates this concept.
  • Sensor 1 data occurs at a higher frequency and at different times than sensors 2-4. This is because it may be necessary to receive data which changes frequently, e.g., image data, more often than condition data, e.g., temperature.
  • all data is time tagged so that the data from any sensor may be related temporally to the data from any other sensor.
  • the data for any sensor may be interpreted to relate it to a time tagged sample of any other sensor.
  • the sampling order and transmission order may be very different. Data may be acquired in a certain sequence and transmitted in yet a different sequence, and the received data may be unscrambled at the receiving station in order to reconstruct the image or data.
  • the spatial data, i.e., spatial vectors
  • the receiving station may unscramble the data by comparing the vector information transmitted with the vector map of the sensor frame.
  • each spatial vector defines or characterizes the corresponding sensor sample, including the location of the data in the sensor frame, and thereby orders the data accordingly.
  • the transmission of sensor response samples may be random, but as long as the corresponding spatial coordinates (i.e., vectors) are scrambled in the same order as the sensor response samples, the sampling order can be recovered by sorting the vectors in the spatial frame, then sorting the response frame in a similar manner.
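A minimal sketch of this recovery follows. The response values and the vector layout are hypothetical; the point is only that sorting on the co-transmitted vectors restores the sampling order:

```python
# Each transmitted item pairs a spatial vector with its response sample.
# The transmission order is scrambled, but the pairing is preserved.
scrambled = [((2, 0), "C"), ((0, 0), "A"), ((1, 0), "B"), ((3, 0), "D")]

# Sorting on the spatial vectors reorders the responses into frame order.
ordered = sorted(scrambled, key=lambda pair: pair[0])
responses = [resp for _, resp in ordered]
print(responses)  # ['A', 'B', 'C', 'D']
```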
  • Fig. 22 generally illustrates the overall system 100 according to the invention, employing a collection system 102 and a processing system 104 coupled over a link 106.
  • the collection system 102 includes one or more sensors producing data 108. Each sensor has a corresponding model 110. Data generated by the various sensors 108 is transmitted over the link 106 in a common data and sensor model format
  • the processing system 104 includes an application module 112 which receives and reads the data
  • the application module 112 is responsive to a library 114 of common data format processing functions. Accordingly, all of the sensors may be modeled in the same way and their outputs may be processed and interpreted in a common and uniform way.
  • the uniform modeling of all data of a transducer, in effect, constitutes preprocessing of data in such a way that it is self consistent and uncorruptible.
  • Fig. 23 shows a conventional arrangement.
  • sensor data is collected in a common format.
  • this is a proprietary format which does not include a model of the sensor.
  • the sensor data is sent to the processing system, where the application employs a unique sensor model to process the data
  • the disadvantage is that each time a new sensor is developed, a new model must be incorporated into the system.
  • nor are the models transmitted with the data, so the data and the sensor model are not kept consistent
  • processing occurs concurrently with or before modeling. Therefore, the data is not self consistent and may be corrupted before it is archived.
  • Fig. 24 illustrates an airborne collection system 120 in which the data from sensors 122-1...122-n is formatted in data formatter 124 and transmitted over the link 126.
  • the airborne system also includes ancillary data means 128-1...128-n for each corresponding sensor 122-1 ..122-n.
  • the ancillary data means may be tailored for the corresponding sensor model.
  • the ancillary data is sent along with the sensor response in a data stream 128, as illustrated.
  • the transducer data description may likewise be transmitted at the commencement of the transmission.
  • Fig. 25 illustrates a ground or terrestrial receive, process and display station 130 in which the data carried over link 126 is coupled to input parser 132, which separates or demodulates the data for each sensor into separate streams 134-1...134-n, respectively.
  • the streams include the sensor data and sensor data description for each sensor.
  • the data is coupled to a processor module 136 including processors 138-1...138-n for processing the sensor data for each sensor, and corresponding configuration modules 140-1...140-n for processing the sensor data description in order to properly configure the processor handling the sensor.
  • Each processor 138-1...138-n may be coupled to an appropriate display 142-1...142-n. It should be understood that the various processing, display and configuration modules may be combined in an appropriate workstation as desired.
  • the sensor data description is appropriately matched in the processor 136 for the sensor data to be processed.
  • the software libraries 142 are adapted to facilitate the universal interpretation of sensor data in the processor.
  • the model of the sensor system topology describes the relationship of the various sensors used in the multi sensor system. This modeling provides a cohesive picture to fuse all of the data together for the various sensors on board a platform. This model describes the chain of sensors and what parameters, if any, are modified by previous sensor measurements in the chain. For example, a detector look direction relative to a transducer is modified by the gimbal angles relative to the inertial measurement sensor of the platform and the attitude of the platform relative to earth.
  • This sensor environment data enables vectors to be manipulated and common reference frames to be converted into other common reference frames.
  • each sample in the sensor response frame is related to each sample in the timing and spatial frames. Accordingly, each sample can be mapped to any surface with relative ease.
  • the arrangement provides for rapid targeting based solely on data collected from the sensor system
  • the sensor data and metadata to describe the sensor are packaged in a form for transport to a remote location or to an archive.
  • the shell is generic and uses a compatible markup language as a carrier for the data elements of the model, e.g., transducer markup language (TML).
  • TML transducer markup language
  • transducer markup language employed in the invention.
  • the description has the material subdivided into a series of sections with section headings followed by TML text and, where appropriate, followed by explanatory text discussing the feature of interest.
  • the TML document represents a stream.
  • the opening tag initiates the stream
  • a closing tag terminates the stream.
  • the first element in the stream should be a system element.
  • the remainder of the stream is any sequence of system_update elements and data elements
  • the element transducerML is the default root element. Specific protocol implementations of TML may need to replace the root element.
  • System contains a sequence of models, sensors and dependencies, in that order.
  • a system has a unique identifier
  • the models element contains zero or more model elements
  • the model element contains a datapoints element and may contain a description element
  • the description element is generic throughout the schema
  • a model defines a curve
  • the data points are evenly distributed across the x-axis
  • the values are the positions relative to the y-axis
  • the sample model M0001 defines the curve shown in Fig. 27.
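Evaluating such a model curve can be sketched as follows, assuming linear interpolation between the evenly spaced data points (the helper name and sample values are ours, not from the schema):

```python
def model_value(datapoints, x_min, x_max, x):
    """Evaluate a model curve given as y-values evenly spaced on [x_min, x_max].

    Linear interpolation between the stored data points.
    """
    n = len(datapoints)
    if n == 1:
        return datapoints[0]
    # Fractional index of x along the evenly spaced grid of data points.
    f = (x - x_min) / (x_max - x_min) * (n - 1)
    i = max(0, min(n - 2, int(f)))
    t = f - i
    return datapoints[i] * (1 - t) + datapoints[i + 1] * t

# Five evenly spaced y-values over x in [0, 4]:
curve = [0.0, 1.0, 4.0, 9.0, 16.0]
print(model_value(curve, 0.0, 4.0, 2.5))  # 6.5 (halfway between 4 and 9)
```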
  • a sensor has an identifier unique within the system definition
  • sensors would use a uniform resource name (URN) as their identifier
  • URN uniform resource name
  • a stream could begin with a simple empty sensor element as follows
  • the subscriber could check if it has this sensor definition already locally stored. If not, then the subscriber could look up the sensor in some well-known repository. If that should fail, then the subscriber could ask the publisher to send the complete sensor definition.
  • a sensor contains a description and a single frame
  • a frame contains a space-time model and a single sample definition
  • a cluster may contain one frame, multiple frames, or a fraction of a frame
  • the number of frames within a cluster remains consistent for a particular sensor
  • the count attribute indicates the number of frames per cluster.
  • Some sensors, such as sound, have very small frames. It is useful to bundle several small frames into a single data element (cluster) to reduce overhead.
  • a space-time model has a time model, zero or more axis models and a sequence model
  • the scf_dimension attribute indicates how many axis models there should be
  • a frame consists of samples
  • the space-time model defines the relationship of the samples to space and time
  • a frame will have a set number of samples. For example, if the sensor provides an image that is 1000 x 1000 pixels, then the sample size is 1 million.
  • Each sample consists of one or more measurements. If the sensor provides a monochromatic image, then each sample is one measurement of a gray scale. If the image is color, then each sample is three measurements for red, green and blue. Multi-spectral analysis can actually create thousands of measurements for each sample.
  • a measurement contains a description and an encoding followed by zero or more properties or characteristics in any order.
  • the previous fragment defines a sample of three measurements. For simplicity of explanation, the example above does not include some mandatory elements which are not relevant to the discussion.
  • the first measurement is 6 bits, the second is 8 bits, and the third is 6 bits.
  • the total sample is 20 bits, which can be expressed with the 5 hexadecimal characters shown in Fig. 28.
  • the hex string "558B1" would represent a first measurement value of 13, a second measurement value of 57 and a third measurement value of 9.
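Unpacking such a bit-packed sample can be sketched as below. Note that the hex string and decoded values in this sketch are computed under a straightforward most-significant-bit-first packing chosen for illustration; the exact bit ordering of the TML encoding (and hence its hex strings) may differ:

```python
def unpack_sample(hex_string, bit_widths):
    """Split a packed sample (hex string) into measurement values given
    the bit width of each measurement, most significant measurement first.
    """
    total_bits = sum(bit_widths)
    value = int(hex_string, 16)
    measurements = []
    shift = total_bits
    for width in bit_widths:
        shift -= width
        measurements.append((value >> shift) & ((1 << width) - 1))
    return measurements

# With widths 6, 8 and 6 bits, the values 13, 57 and 9 pack into 20 bits;
# under this MSB-first convention that is hex 34E49:
print(unpack_sample("34E49", [6, 8, 6]))  # [13, 57, 9]
```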
  • Characteristics provide more fidelity than properties.
  • a characteristic element can contain a model element and zero or more property elements
  • Characteristics enable us to communicate complex properties such as frequency response. For instance, the following characteristic tells us that the frequency response is a typical bell curve extending from 300 THz to 700 THz.
  • the attribute dependency_ID is set for reference later in the dependencies section. For instance, this characteristic flags the gain property of the stim_resp_fcn (stimulus response function) characteristic as dependent upon some other sensor's measurement value. A dependency will reference this dependency identifier in the dependencies section of the system definition.
  • the dependencies element contains zero or more dependency elements. Each dependency element references a particular sensor by its unique identifier. All the dependencies for a particular sensor should be defined within a single dependency element.
  • a dependency element contains either an attached element or a position and attitude element, followed by zero or more dependent_value elements.
  • An Attached element is empty and simply references another sensor
  • the presence of an Attached element means that the sensor referenced by the dependency element should be treated as if it had the exact same location and attitude as the sensor referenced by the Attached element.
  • the Position element defines the x, y and z dimensional position of a sensor relative to another sensor. The difference is simple arithmetic. The value added can be either a number or a measurement value reading. If it is a number, then it must be accompanied by an accuracy value. The following fragment states that the position of sensor S001 is dependent upon the location of sensor S002.
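The simple arithmetic of the Position dependency can be sketched as follows (the sensor coordinates and offsets are illustrative, not from the specification):

```python
def dependent_position(base_position, offset):
    """Position of a dependent sensor: the base sensor's position plus an
    offset per axis. Each offset component may be a fixed number or a live
    measurement value reading."""
    return tuple(b + o for b, o in zip(base_position, offset))

# Hypothetically, S001 sits 1.5 m forward and 0.2 m above S002:
s002 = (100.0, 50.0, 10.0)  # x, y, z of the base sensor
print(dependent_position(s002, (1.5, 0.0, 0.2)))  # (101.5, 50.0, 10.2)
```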
  • the Attitude element defines the omega, phi and gamma (ω, φ, γ) angle positions of a sensor relative to another sensor.
  • the Position and attitude of a particular sensor can be dependent upon different sensors
  • the measurement_value element defines the dependency.
  • the measurement_value element references the unique identifier of a sensor sample measurement defined in the sensors section.
  • Any sensor property can be dependent upon another sensor measurement. The following completes the dependency of the gain property for a sensor upon the measurement of another sensor.
  • a system of sensors may change after the stream of data has begun. These changes come as system_update elements.
  • the system_update can contain new models, sensor updates or dependency updates. For sensor updates and dependency updates, only the information that has changed is sent. Updates are sent within the proper nested elements.
  • the sequence model is read left-to-right, top-to-bottom, front-to-back across the dimensions of the space model
  • the x-dimension and y-dimension are both 4, as indicated in their sample property
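The left-to-right, top-to-bottom, front-to-back reading order can be sketched as a flat index calculation (a hypothetical helper; the names are ours):

```python
def sequence_index(x, y, z, x_dim, y_dim):
    """Flat sample index under a left-to-right (x), top-to-bottom (y),
    front-to-back (z) reading order across the space model dimensions."""
    return x + y * x_dim + z * x_dim * y_dim

# For a 4 x 4 frame, the sample at column 2, row 1 is the 7th sample
# (index 6, counting from zero):
print(sequence_index(2, 1, 0, 4, 4))  # 6
```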
  • IRIS Ideas and Services Corporation
  • a transducer includes both sensors and transmitters.
  • TransducerML TransducerML
  • the TML data stream describes events as they happen at the source in a running time sequence.
  • the data stream may be played back at the destination at the same time or at some later time in order to replicate events exactly as they happened at the source.
  • This document describes a transducer acquisition format to enable the acquisition of transducer data and describe any transducer in terms of a common transducer model.
  • Transducer parameters described using the model configure the processor to efficiently process the transducer data. This would theoretically enable a single processor to process any transducer's data, as long as the processor could process the full capabilities of the transducer characterization model.
  • the document begins by defining a new data exchange concept specifically tailored for transducer data (i.e. not display based).
  • the concepts were initially developed for sensors but eventually expanded to handle transmitters as well. This was necessary because active sensors provide their own illumination of the object space. The response of these sensors is also a function of the illumination energy. So for these types of sensors it is necessary to characterize the transmitter as well.
  • the format captures data created from multiple simultaneous events (transducer measurements) at the source and can replicate those events at a destination in the same time relationships as they occurred, at the same or different location, and at a later time ranging from nanoseconds to years. To make the format useable to a wide range of transducer systems it incorporates a methodology for characterizing transducer data which is common for all transducers.
  • the TransducerML data stream represents events as they happen at the source in a running time sequence.
  • the data stream may be played back at the destination at the same time or at some later time in order to replicate events exactly as they happened at the source.
  • TML Transducer Mark-Up Language
  • the transducer model "describes the data” and relates the data to real world parameters.
  • the detailed transducer mechanics are transparent.
  • Transducer data is produced (response) by any receiver (sensor) or sent to (stimulus) any transmitter.
  • a feature of the transducer model is the definition of the transducer characteristic frame, this is a unique frame structure for a transducer which contains the minimum set of samples required to characterize the transducer.
  • the frames from different types of transducers are different. Each frame has a dimensionality indicating how many dimensional coordinates the data structure of the transducer requires.
  • the characteristic frames can be used to acquire the transducer data and to associate modeling data to the transducer measurement data because there is a one-to-one relation between data and model.
  • the transducer model has corresponding frames used to describe spatial and timing relationships within the transducer data.
  • the dimensionality of an optical camera is 2; this means that the sensor generates data in a two-dimensional array.
  • the dimensionality of a thermocouple or accelerometer is zero.
  • one sample for example, temperature or acceleration, repeats on a periodic basis.
  • Non-dimensional or zero dimensional transducers have only one sample in the Transducer Characteristic Frame. In such an arrangement, there is no dimensionality defined for the thermocouple or accelerometer because there is no implied spatial relationship between the samples in the frame.
  • sensors real or virtual
  • dynamic parameters metadata
  • metadata metadata
  • the transducer's operating environment. The same modeling technique can be used to describe all of the sensors.
  • the system topology concept was developed based on transducers as nodes and relationships as links between the nodes. Similar methodologies are used to model data structures in relational databases.
  • the transducer data format described herein was demonstrated as a result of a scientific research contract sponsored in part by the United States Air Force and in part by private R&D funds.
  • This format was developed for the purpose of providing a common means for capturing real-time multi-sensor and transmitter (i.e. transducer) data for processing in real time over a communication channel, or recorded and played back at a later time.
  • This document pertains to concepts and methods for establishing a common basis for the capture, transmission, storage, and processing of transducer data acquired from a plurality of diverse transducers.
  • the motivation for this development was to achieve: 1) Higher accuracies in the derivation of target geo-spatial coordinates derived from remotely sensed data.
  • FIG. 1 shows the typical reconnaissance cycle.
  • IMINT IMage INTelligence
  • NITF National Imagery Transmission Format
  • the raw sensor data between sensor and processing is referred to as primary imagery. After acquisition this data is transported to a processing/exploitation function. The transport may be via various means including data link, physical exchange of recorded media, or network connection.
  • the sensor data will be exploited to extract intelligence information to answer a particular intelligence request.
  • the output of the exploitation function will be a report that answers the intelligence request. This report will include written interpretation of the sensor data as well as annotated imagery.
  • the processed annotated imagery data produced as a product of the exploitation process is referred to as secondary imagery.
  • Figure 1 Simplified Reconnaissance Cycle
  • Other system unique formats are used for the capture of primary data as well.
  • Cross interoperability among the systems is not possible because of the differences in formats.
  • NITF is the closest thing that exists for a common sensor data standard.
  • NITF is designed as a display base format standard for Secondary Imagery.
  • SDE Support Data Extensions
  • the SDE incorporates additional metadata, which is used to further modify or describe the data contained in the NITF file.
  • For example, to properly focus Synthetic Aperture Radar (SAR) imagery, the SAR phase data must be corrected by the Doppler created from the motion of the aircraft. This requires very precise and accurate correlation between the SAR phase history data and the motion data from the Inertial Measurement Unit.
  • NITF will record the SAR Phase History Data in the NITF segment and put the motion data in an SDE in the segment sub-header.
  • the Pulse Repetition Frequency of the SAR and the update rate of the Inertial Measurement Unit (IMU) are not the same.
  • Metadata is data about the image. Metadata can be divided into two categories, one set of metadata for the processing of the sensor data and another set of metadata for the exploitation of the sensor information. Sometimes elements may overlap these boundaries. For example, processing metadata may give details about the position and attitude of the camera system when a particular image is taken, as well as describing specific characteristics about the image such as resolution and dynamic range. The exploitation metadata gives more administrative information about the sensor data.
  • Time sensitive metadata such as position and attitude are required to be precisely correlated to the image data in order to position the image data in space and time. By grouping this time sensitive data into the header of the image data all relative timing is lost. It would be preferable to time tag multi-source data with a common clock to maintain relative timing relationships.
  • FIG. 4 illustrates a generic functional flow for an airborne reconnaissance system.
  • a key concept is the way a universal transducer model is used in describing the transducer in a common transducer data format.
  • the model and the data format are complementary.
  • the second is to be able to display or represent the data to a human (or information processor) in an understandable and/or desirable form.
  • a model should first be developed for the capture, transport, and archiving of sensor data, which also describes to a processor how to unravel the information contained in the sensor data. Then another model needs to be developed to describe how the originator intended the data to be represented. This document will focus on the exchange of transducer data.
  • XML extensible Mark-up Language
  • W3C World Wide Web Consortium
  • XML is a transport mechanism only; it gives no instruction on how to represent the data.
  • Cascading Style Sheet or XSLT (extensible Stylesheet Language Transformation) descriptions are needed to represent the data carried by an XML file.
  • TAU Transducer Acquisition Unit
  • TPU Transducer Processing Unit
  • the TAU and TPU are only generic names given to the formatter, which interfaces to the transducers, and the processor, which receives the specially formatted transducer data.
  • the bold lines in Figure 4 illustrate the focus of this document.
  • this document describes a common method for characterizing transducers and employs this method in the exchange of transducer data from one system to another.
  • Transducer data exchanged in this fashion shall promote the data fusion of transducer information and promote cross-system interoperability.
  • Particular attention is paid to the space and time relationships among and within measurements as well as the precision and accuracy (relative and absolute) of measurements.
  • the common model characterizes space and time aspects of both the internal and external orientations of the sensor (transducer) as well as measurement characteristics.
  • a key characteristic is that all transducer measurements should be accompanied with an uncertainty figure of merit such that when a resultant measurement is taken the errors can be propagated through the system.
  • Figure 5 is in contrast to Figure 2 in that some of the time critical metadata in Figure 5 is tracked by using a sensor. By handling sensors independently and time tagging data from all the sensors with a common system clock the data from all sensors can be correlated in time.
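The common-clock idea above can be sketched in code. This is an illustrative sketch only, not part of the specification; the `Sample` class and `nearest_in_time` helper are invented names.

```python
# Sketch (not from the specification): tagging samples from independent
# sensors with one common system clock so they can be correlated in time.
from dataclasses import dataclass

@dataclass
class Sample:
    sensor_id: str
    timestamp: float  # seconds on the common system clock
    value: float

def nearest_in_time(target: Sample, candidates: list) -> Sample:
    """Find the sample from another sensor closest in time to `target`."""
    return min(candidates, key=lambda s: abs(s.timestamp - target.timestamp))

# Two independently sampled sensors share one clock:
imu = [Sample("imu", 0.00, 1.0), Sample("imu", 0.01, 1.1), Sample("imu", 0.02, 1.2)]
cam = Sample("camera", 0.011, 42.0)
match = nearest_in_time(cam, imu)
assert match.timestamp == 0.01
```

Because every sensor is stamped against the same clock, correlation reduces to a simple nearest-timestamp (or interpolation) lookup.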
  • a transducer may be defined as a device that produces a response as a function of the stimulus which may change as a function of time.
  • the measurement is a digital sampling of the response of the sensor.
  • the measurement is a digital sampling of the stimulus.
  • Figure 6 illustrates a Venn diagram of the classes of transducers and subclasses within receivers and transmitters.
  • the stimulus or output can be inferred from the response by knowing the input/output transfer function of the detector.
  • the output or response can be inferred from the stimulus by knowing the input/output transfer function of the transmitter.
  • the data captured from receivers (sensors) is the response (output) of the receiver (sensor).
  • the data captured from a transmitter is the stimulus (input) to the transmitter.
  • Table 1 shows examples of where transducers reside in the various classifications.
  • a remote transmitter may be defined as a device that produces transmitted energy which is a function of an input signal, which may be a function of time.
  • a remote receiver may be defined as a device that produces a digital response which is a function of a received signal characteristic.
  • because the flow of data (or information) through a receiver is opposite to that of a transmitter, characteristics used to characterize receivers can also be used to characterize transmitters, as long as the processor handling either of the two devices recognizes that the relationships are reciprocal.
  • Figure 7 illustrates the stimulus and response for both a remote receiver and transmitter.
  • Both remote devices are characterized by having an Instantaneous Field of Measurement (IFOM).
  • IFOM is the volume of space which is either illuminated by a remote transmitter or sensed by a remote receiver. For imaging sensors this is typically referred to as the instantaneous field of view (IFOV).
  • the measurement from a receiver is a measurement of the response or the output from the receiver.
  • the measurement from a transmitter is of the input or stimulus to the transmitter. The characteristics of the receiver input or the transmitter output can be extrapolated from the measured data by using input-output transfer function.
  • the object space is the 3-dimensional environment in which we all live. Many characteristics can be measured in the object space and we capture some of these characteristics with receivers.
  • Remote transducers may be further employed to cover a larger spatial extent by either scanning a single detector or emitter, or staring with multiple detectors or emitters.
  • a single detector or emitter can cover a larger space.
  • the detector or emitter usually methodically scans the entire measurement area (i.e. FOM) by taking many samples, each sample covering a different region of space, one after the other.
  • remote transducers may be employed to cover a larger region by using several detectors or emitters each measuring a different area of space at the same instant.
  • with staring transducers, samples covering the entire FOM are all acquired at the same time, whereas scanning transducers must sample the entire FOM sequentially. To properly characterize the interior geometric properties of the transducer, both the spatial and temporal characteristics must be described.
  • Figure 8 illustrates the use of scanning and staring process to enable transducers to cover a larger spatial area. If the samples are organized properly the data from the scanning and staring transducers can be used to generate an image.
  • the scanning mechanics or staring element orientation of a remote transducer is typically implied in the data structure of the transducer data. This information must be known prior to processing the data, but it is very infrequently sent along with the data.
  • Remote transducers which sample the FOM over a finite time duration are susceptible to motion disturbances during the sampling period. If data acquired during this period is not adequately correlated with the relative motions between the transducer and the environment then the spatial placement of the measurement data will be in error. This is an important fact to remember when acquiring data from scanning transducers.
  • each sample within a specific frame (later to be defined as a Transducer Characteristic Frame) of a transducer will have spatial coordinates which are constant relative to the transducer reference system.
  • the number of coordinates assigned to each sample depends on the shape of the ambiguity space.
  • the coordinates are chosen from the set of coordinates which comprise either the Cartesian or spherical coordinate systems (x, y, z, alpha, beta, r).
  • the data from remote scanning and staring transducers are bundled into structures called frames.
  • Each sample within a frame has a space and time attribute associated with it.
  • with scanning transducers, samples are acquired in sequence, so there is a time difference between the acquisition of the first sample within the frame and the last sample in the frame. With staring transducers, all samples in the frame are acquired at the same time.
  • the previous paragraphs discuss an example using an imaging camera to illustrate some of the issues required to be characterized for the internal geometric orientation of a transducer. To minimally characterize a transducer one must answer the questions of "what" is the measurement, "where" in space does the measurement relate to, and "when" in time did the measurement occur. The where (space) and when (time) characteristics are answered by a combination of the interior and exterior orientation of a transducer. We have talked briefly about the spatial aspects of the interior orientation.
  • interior orientation is only applicable for remote transducers. There is no geometric interior orientation applicable to in situ transducers.
  • the transducer orientation characterizes the space-time relationship or geometry of the transducer data.
  • the interior and exterior orientation of a transducer complements each other to give a complete space-time relationship of the data.
  • the interior orientation may be thought of as an orientation that remains constant with respect to the transducer reference frame independent of transducer position, attitude, motion or time. This orientation accounts for any of the scanning mechanics or the space and time relationships between the samples within the transducer's data frame.
  • the external orientation characterizes the position and attitude and timing relationship of the transducer reference system with respect to a world reference system.
  • the world reference system is a spatial reference system that will be the common reference system for all geo-spatial data.
  • Figure 9 shows the coordinate systems used to describe coordinates.
  • Figure 10 shows the two reference systems allowed in this specification.
  • a platform reference system is not required; if a platform reference system is used, then a transducer may be assigned to it so that it can be measured.
  • the description of the interior orientation will be in terms of a coordinate system coordinates (x, y, z, alpha, beta, r). These coordinate assignments used to describe the interior orientation set the stage for the orientation of the coordinate system axis to the physical transducer.
  • the earth reference system, on the other hand, has a fixed orientation of its coordinate system described by WGS-84. Points in each reference system can be described using either coordinate system (Cartesian or spherical).
  • Figure 11 shows the convention used for determining the Euler angles (ω, φ, κ) for rotation transforms. Particular attention should be paid to the order in which the rotations (κ then φ then ω, or ω then φ then κ) are applied, depending on whether coordinates are being defined in the rotated reference system or the un-rotated reference system.
  • the first rotation ω is about the z axis, shown in red. Clockwise rotations as viewed looking at the origin from +z are positive.
  • the second rotation φ is about the new y axis (y′), shown in green.
  • the third rotation κ is about the new x axis (x″), shown in blue. Clockwise rotations as viewed looking at the origin from +x″ are positive.
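The ω-φ-κ sequence above can be sketched as elementary rotation matrices. This is an illustrative sketch: it uses standard right-handed rotation matrices, and the sign convention ("clockwise viewed from the +axis") described in the text may differ from these by a sign.

```python
# Illustrative sketch of the omega-phi-kappa sequence: omega about z,
# then phi about the new y axis, then kappa about the new x axis.
# Standard right-handed matrices are assumed; the patent's clockwise
# convention may flip the angle signs.
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def omega_phi_kappa(omega, phi, kappa):
    # Applied right-to-left on a column vector: omega first, kappa last.
    return rot_x(kappa) @ rot_y(phi) @ rot_z(omega)

# Order matters: reversing the sequence gives a different rotation.
a = omega_phi_kappa(0.1, 0.2, 0.3)
b = rot_z(0.1) @ rot_y(0.2) @ rot_x(0.3)
assert not np.allclose(a, b)
```

This is why the text stresses the application order: the same three angles produce different orientations when composed in a different sequence.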
  • the ambiguity space is defined relative to the transducer reference system and thus becomes part of the transducer's interior orientation.
  • when a 2-dimensional sensor images a 3-dimensional environment, there is an ambiguity in one dimension.
  • An object from a 2-dimensional image is known to exist somewhere along the particular ambiguity space for that particular sample for the transducer.
  • Different transducers have different ambiguity shapes.
  • Figure 12 illustrates the ambiguity space for the camera.
  • the shape of the ambiguity space is not always a straight line or ray. Different types of sensors have different ambiguity shapes.
  • the set of coordinates used to characterize the sample coordinates with respect to its transducer reference system also characterizes the shape of the ambiguity space.
  • an alpha and beta angle define a ray
  • an alpha angle and a range or a beta and a range define two different circles or arcs.
  • a single alpha defines a plane
  • a single beta defines a cone
  • single r defines a sphere.
  • Cartesian coordinates may be used to define ambiguity spaces as well.
  • Table 2 describes some different shapes for various sensors.
  • the optical framing camera and the optical line scan sensor both have an ambiguity shape of a ray originating at the frontal optic node of the focused lens.
  • the ray for a particular sample is defined by the alpha and beta angle spherical coordinate angle for the camera reference system.
  • Radar sensors have an ambiguity shape of a sphere; if correctly processed, it can be compressed to a constant-range arc for each sample. These are only a couple of examples; more shapes such as cones and planes exist as well.
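The mapping from coordinate subsets to ambiguity shapes listed above can be sketched as a lookup. The dictionary and function names are illustrative, not part of the specification; the shape names follow the text.

```python
# Hedged sketch: which spherical-coordinate subset a sample carries
# implies the shape of its ambiguity space, per the list above.
AMBIGUITY_SHAPES = {
    frozenset({"alpha", "beta"}): "ray",     # e.g. framing camera
    frozenset({"alpha", "r"}): "arc",        # circle or arc
    frozenset({"beta", "r"}): "arc",         # circle or arc
    frozenset({"alpha"}): "plane",
    frozenset({"beta"}): "cone",
    frozenset({"r"}): "sphere",              # e.g. unprocessed radar
}

def ambiguity_shape(coords):
    return AMBIGUITY_SHAPES.get(frozenset(coords), "unknown")

assert ambiguity_shape(["alpha", "beta"]) == "ray"
assert ambiguity_shape(["r"]) == "sphere"
```

A processor could use such a table to decide how to intersect ambiguity spaces from different transducers.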
  • the ambiguity space is the volume of space of possible locations of an object.
  • when a two-dimensional image is made of 3-dimensional space, there is an ambiguity in one dimension.
  • An object that appears in the image is known to be positioned somewhere within the ambiguity space.
  • the shape of the ambiguity space is a ray.
  • the ray originates at the frontal node of the optics. There is a ray for every pixel in the camera frame. When a pixel is centered on a small object there is a ray from the camera through the center of the object. This will be the shape and orientation of the ambiguity space.
  • the 3-dimensional coordinates of the object's center can be calculated by using another source of data.
  • the other data can be another image of the object taken from a different perspective, or a known surface on which the object is known to exist.
  • To determine the 3-dimensional coordinate for the center of the object the ambiguity spaces from two or more receivers are transformed to a common reference system. The ambiguity spaces from the samples corresponding to the objects center will intersect at the 3-dimensional location of the object's center.
  • the ambiguity space for the pixel of the center of the object can be intersected with the surface to find the three dimensional location for the center of the object.
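The two-ray intersection described above can be sketched numerically. The midpoint-of-closest-approach method below is a standard triangulation technique, not taken from the specification; all names are illustrative.

```python
# Sketch: each camera sample defines a ray (its ambiguity space).
# Transformed into a common reference system, the object's 3-D position
# is where the two rays (nearly) intersect.
import numpy as np

def nearest_point_between_rays(o1, d1, o2, d2):
    """Midpoint of the shortest segment between rays o + t*d (d unit)."""
    w = o1 - o2
    b = d1 @ d2
    d_ = d1 @ w
    e = d2 @ w
    denom = 1.0 - b * b          # rays must not be parallel
    t1 = (b * e - d_) / denom
    t2 = (e - b * d_) / denom
    p1 = o1 + t1 * d1
    p2 = o2 + t2 * d2
    return (p1 + p2) / 2.0

# Two cameras 10 m apart, both imaging a point at (5, 0, 20):
target = np.array([5.0, 0.0, 20.0])
o1, o2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
d1 = (target - o1) / np.linalg.norm(target - o1)
d2 = (target - o2) / np.linalg.norm(target - o2)
assert np.allclose(nearest_point_between_rays(o1, d1, o2, d2), target)
```

With real (noisy) data the rays do not meet exactly, which is why the midpoint of the shortest connecting segment is used rather than an exact intersection.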
  • Transducer type - a transducer is a super class including either transmitters or receivers. Transmitters and receivers can each be subdivided into the remote and in situ subclasses. This description is useful for determining what type of characterization is required for a particular transducer. For example, if a transducer is classified as an in situ receiver, then we would not expect the transducer to include characterization of the IFOM.
  • Source of Reflective Illumination - Remote receivers measure reflected and/or emitted energy. If the measurement is a result of reflected energy, then a description of the source of the energy is important.
  • the source may be natural illumination such as ambient illumination from the Sun or it may be artificial illumination from another remote transmitter.
  • Frequency response/Power spectral density - The characteristic of how fast or slow a detector can respond to a changing input, or the range of sensitive frequencies, is referred to as the frequency response of a receiver.
  • Transmitters have a reciprocal characteristic known as the power spectral density.
  • the power spectral density describes the frequency distribution of the output energy
  • Measurement nomenclature - a description of the measurement is a required characteristic for all transducers.
  • Data type - This characterizes how the measurement value is digitally represented.
  • the data type should be selected from a set of pre-defined values. Examples: unsigned integer, signed integer, real, complex, logical, character
  • An imaging sensor may output 12-bit pixels which are captured digitally in a 16-bit field. The 12-bit pixel occupies the least significant 12 bits of the 16-bit field; the remaining 4 bits are set to zero.
  • Input/Output Transfer Function - This characteristic describes the relationship between the inputs to a transducer and the output. In the case of a receiver the output, or response, is the measurement value; for a transmitter the input, or stimulus, is the measurement value. This is usually shown by a plot of input vs. output. The measurement value is the independent variable in the transfer function and the observable is the dependent variable. Characteristics which affect the transfer function include sensitivity, noise level, saturation, input dynamic range, output dynamic range, output gain, output bias, and hysteresis.
  • Measurement Reference - if measurement is a difference measurement (e.g. Phase, altitude), then the reference must be identified.
  • a transmitter or receiver may incorporate a calibrated input.
  • the value of the measurement can then be compensated for any gain and bias errors by analyzing the difference between what the measurement of the calibrated source was and what it should have been.
  • transducer data from a plurality of transducers can be aligned in the proper spatial and temporal orientation by using a combination of the interior and exterior geometric orientations. Knowing the external relationships of one transducer to another is fundamental to transducer data fusion.
  • position and attitude receivers provide data which provides the external orientation for a principal transducer.
  • the principal transducer is positioned and/or attitudinally oriented relative to a position and attitude receiver.
  • Position and attitude receivers measure their own position and attitude relative to a common datum (e.g. Earth) or relative to another position and/or attitude receiver.
  • the position and motion data is only used to get the image close enough to register it to a reference image.
  • the reference image is usually an ortho-image in which geographical coordinates can be calculated for every point in the image.
  • once the data is registered to the ortho-image, target coordinates can be extracted by cross-reference. This process is acceptable, assuming reference images are available and there is time to register the two images. Also, in some cases the accuracy of the reference images may not be sufficient to derive coordinates of the required accuracy.
  • as sensor technology improves, deriving accurate target coordinates directly from sensor data becomes more and more feasible.
  • the metadata, once required only to position images in the general vicinity of a reference image, may now be accurate enough to position data better than the previously used references. With data this precise, it becomes more important to be able to characterize the errors of the measurements, specifically the geometric errors.
  • geo-position error is due to only a few contributors: atmospheric errors, installation alignment errors, transducer measurement and characterization errors, A/D quantization, timing and latency errors, and data timing association error. It is often just as important to characterize the error as it is to characterize the data. When a target coordinate is calculated from remotely sensed data, it is imperative that the accuracy of the coordinate is known. Understanding these errors allows them to be characterized, and error predictions to be given on resultant measurement accuracies calculated from the raw transducer data. A responsible transducer data exchange format should provide the means to characterize these individual error contributors or the total resultant system error.
  • Figure 13 shows a simplified diagram of the resultant positional and attitudinal errors which result in large target positional errors. Note however that this is a simplified diagram and does not show all contributors of random and systematic errors.
  • the resultant positional and attitudinal errors may be the result of many sensor measurements. The errors must be propagated through the system to derive the resultant geometrical errors.
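A simplified numeric illustration of how attitude error magnifies target position error (in the spirit of Figure 13): for small angles, an attitude uncertainty of σ_att radians at slant range R displaces the target by roughly R·σ_att, and independent error sources combine in root-sum-square. This first-order formula is a standard approximation, not taken from the specification.

```python
# Hedged sketch: first-order target position error from platform
# position error plus attitude (pointing) error scaled by slant range.
import math

def target_position_error(sigma_pos, sigma_att_rad, slant_range):
    cross_range_err = slant_range * sigma_att_rad  # small-angle approx.
    # Independent contributors combine in root-sum-square:
    return math.hypot(sigma_pos, cross_range_err)

# 2 m platform position error, 1 mrad attitude error, 5 km slant range:
err = target_position_error(2.0, 1e-3, 5000.0)
assert abs(err - math.hypot(2.0, 5.0)) < 1e-9   # attitude term dominates
```

The example shows why attitude accuracy matters so much at long slant ranges: a 1 mrad pointing error contributes 5 m at 5 km, dwarfing a 2 m position error.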
  • Atmospheric distortion is an error induced in remote transducers by the refraction of electromagnetic energy by the atmosphere. This error is not normally characterized by data contained in the data format; it is normally calculated at the processor by knowing parameters such as altitude and incident angle to the surface. This, however, does not prohibit the use of sensors which measure the atmospheric distortion.
  • the ordering of data is sometimes rearranged during the serialization of data required to transport or archive the data.
  • a rectangular frame CCD camera may sample all of the detectors in a rectangular array at the same instant.
  • the array needs to be scanned serially for transmission.
  • the array can be scanned in several orders: left to right then top to bottom, top to bottom then left to right, or with interleaving of fields within the frame.
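The serialization-order issue can be shown with a tiny array. This sketch is illustrative only: the same 2×3 staring frame yields different serial streams depending on scan order, so the receiving processor must know which ordering was used to rebuild the frame.

```python
# Sketch: two possible serializations of the same 2x3 frame.
frame = [[1, 2, 3],
         [4, 5, 6]]

# Left to right, then top to bottom (row-major):
row_major = [v for row in frame for v in row]
# Top to bottom, then left to right (column-major):
col_major = [frame[r][c] for c in range(3) for r in range(2)]

assert row_major == [1, 2, 3, 4, 5, 6]
assert col_major == [1, 4, 2, 5, 3, 6]
```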
  • This specification may be described as a method for acquiring, exchanging, archiving and processing, in a universal way, transducer data from a plurality of diverse transducers.
  • the method describes a process which is particularly useful for capture and description of raw or original transducer data.
  • This transducer format specification employs a "rigorous sensor model" to characterize transducers as a whole.
  • a rigorous sensor model tracks and accounts for all of the motions and physics of the transducer system and its interaction with the environment. If it moves or changes it is tracked with a sensor. Every sensor is positioned relative to something.
  • a common rigorous model applicable to any sensor will enable the model to be used to describe all sensors and their relations in the system of sensors.
  • One unique characteristic of this model is that it treats everything as a sensor.
  • This specification will facilitate a way to track and organize all the measurements required to facilitate a truly robust rigorous sensor model.
  • Figure 14 shows the relation of several sensors in a system of sensors. All of the sensors in the chain must be characterized.
  • the specification describes a method for characterizing data from transducers such that a universal transducer processor can process the data, using the methods described herein, from a plurality of transducers without prior knowledge of transducer characteristics from any other source.
  • the universality of the data format allows for the exchange and archive of the data by standard open systems protocols. Transducer systems utilizing this method may be able to interoperate among each other utilizing a wide variety of transducer data sources.
  • TML TransducerML
  • the data format, which may be packaged in many forms (file, stream, etc.), contains two basic entities: 1) data and 2) data description.
  • the data description describes the data, such that a universal processor can process the data.
  • the processor may receive the data description with the data or it may receive it by other means. The processor only needs to receive the data description once. Then data can be continually received after that.
  • Transducers produce measurements. Sometimes a transducer will capture many measurements characterizing different attributes of the environment. For example, a multi-spectral sensor measures the reflectivity at different wavelengths; a GPS will sample latitude, longitude, altitude, velocity, etc. The set of measurements constitutes one sample set, or sample, of the transducer. Figure 15 illustrates the hierarchy of the format data structure.
  • a Sample is composed of one or more measurements, all corresponding to the same spatial ambiguity. Every measurement in a sample corresponds to the temporal and spatial coordinates associated with that sample. A set of samples comprises the Transducer Characteristic Frame.
  • a Transducer Characteristic Frame is a key concept required to facilitate the association of transducer data to transducer modeling data.
  • Transducers which scan with a small number of detectors or stare with an array of detectors require their data to be organized spatially such that the detector sample represents the spatial orientation intended.
  • the samples within this group have space and time relationships which must be understood to characterize the data, particularly if there is any motion of the transducer.
  • the minimum set of samples which must be considered to characterize a transducer comprise the Transducer Characteristic Frame. We will use these frames to group the data from the transducer as well as use the same size and shape of a frame to contain model data, such that there is a one-to-one relation between the model data and the transducer data.
  • Dimensionality is a characteristic of the TCF.
  • the number of object space coordinates (x, y, z, alpha, beta, or r) used to specify relative spatial characteristics of samples is referred to as the dimensionality of the TCF.
  • Sensors which have a single sample have no implied spatial relationships among other samples within the TCF, so single-sample TCF sensors have a dimensionality of zero (0).
  • a TCF is a logical organization of transducer Samples for the purpose of associating corresponding numeric transducer models.
  • Figure 16 illustrates array configurations for the different dimensions of TCFs.
  • Each type of transducer will have its own characteristic frame. For example: transducers such as rotational encoders, thermocouples, volt meters, GPS, inertial navigation sensors, all fall into the non- dimensional TCF category.
  • These non-dimensional TCFs normally describe in situ transducers where there are no spatial assignments of coordinates to each of the transducer samples.
  • Non dimensional TCFs only have one sample per TCF.
  • One dimensional TCF transducers on the other hand have one coordinate, from the set of object space coordinates (x, y, z , alpha, beta, r), which is used to characterize the spatial relationship of each sample in the TCF.
  • An example of this type of transducer is a radar.
  • the sample that is captured by a radar receiver or transmitter represents the response or stimulus at a certain range from the transducer. There is no information that provides any other spatial information other than knowing the IFOM of the receiver and transmitter.
  • Two dimensional TCF transducers are fairly common. All of the imaging sensors fall into this category.
  • a camera for instance has a two dimensional TCF in which the object space dimension assigned to the horizontal dimension of the TCF is the Alpha angle and the vertical dimension is assigned the Beta angle as illustrated in Figure 18.
  • a synthetic aperture radar processed image, for example, has a two dimensional TCF where the dimensions of the TCF are assigned to the alpha and range coordinates, as illustrated in Figure 17.
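The TCF dimensionality idea above can be sketched as a small data structure. The class and field names below are assumptions for illustration, not taken from the specification; only the dimensionality rule (number of object-space coordinates assigned to the frame's axes) comes from the text.

```python
# Illustrative sketch: a TCF's dimensionality is the number of
# object-space coordinates (x, y, z, alpha, beta, r) assigned to its axes.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TCF:
    name: str
    axis_coordinates: Tuple[str, ...]  # one object-space coordinate per axis

    @property
    def dimensionality(self) -> int:
        return len(self.axis_coordinates)

gps = TCF("GPS", ())                       # in situ: no spatial axes
radar = TCF("radar", ("r",))               # range-only samples
camera = TCF("camera", ("alpha", "beta"))  # imaging sensor
sar = TCF("SAR image", ("alpha", "r"))     # processed SAR

assert gps.dimensionality == 0
assert camera.dimensionality == 2
```

The four instances mirror the examples in the text: non-dimensional in situ sensors, the one-dimensional radar, and the two-dimensional camera and SAR frames.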
  • TCFs will be used to associate data from transducers with models which are used to characterize the data from the transducer.
  • the principal types of TCFs which we will discuss are:
  • timing TCF - describes the interior timing characteristics of the transducers.
  • the measurement TCF contains a set of transducer samples.
  • Other types of TCFs can be described on an as-needed basis. For example, the detectors which comprise an array may all have different bias and gain values; a TCF could be created which describes this variability across the TCF.
  • Transducer data is captured in mTCFs and transported in clusters.
  • the size of the measurement TCF is the same as the size of the modeling TCFs, so there is a one-to-one relationship between measurement and relative space and time. This provides the synchronization required between the data and the model.
  • the measurement TCFs are all time stamped with the time of the first sample. This provides the time synchronization required to align the responses from multiple sensors.
  • the TCF structure will be used to associate measurements (transducer data), spatial, timing and other information needed to characterize the transducer.
  • we begin transducer modeling by introducing the TCF model. Later we will use function models to describe other features of transducers.
  • Some of the different TCFs used for interior orientation modeling are: Coordinate TCF, Temporal TCF, and Sequence TCF.
  • there may be other TCFs to describe transducer characteristics which vary over the TCF, such as radiometric gain; however, they will follow the same sort of reasoning.
  • TCF model elements are included in the TransducerML stream as part of the data_desc entity.
  • the TCF model for a particular sensor is the same size, shape, and order as the measurement TCF (transducer data) so there is a one-to-one relationship between a transducer sample and the space time parameters for that sample. Every sample in a TCF has an associated spatial ambiguity and time relative to the other samples in the TCF.
  • the coordinate TCF (cTCF), which may be further divided (cTCFx, cTCFy, cTCFz, cTCFα, cTCFβ, cTCFr), contains the set of interior coordinates for describing characteristics of the transducer data contained in the mTCF.
  • the specific combinations of coordinates chosen for the cTCF's dimensions also define what the shape of the spatial ambiguity is.
  • a two dimensional TCF will have two cTCFs. Which two cTCFs are used will depend on which coordinates are used to describe the internal spatial relations. For example, if the TCF were of a camera, then the two choices would be a cTCFα for the alpha coordinates and a cTCFβ for the beta coordinates.
  • a three dimensional TCF will require three cTCFs to describe it.
  • Figure 19 illustrates some example coordinate TCF of some sample transducers.
  • the numbers used in the cTCF to describe the spatial characteristics of the TCF are in "ticks".
  • the range (FOM) of one dimension is divided into n equal intervals called ticks, and the position of each sample is measured in ticks. When the FOM changes, the coordinates should remain approximately the same. There may be situations where a cTCF is referenced but the TCF model is empty; this may be done to save space when the geometric characteristics are not important. In this case the angular or linear distribution of the ticks over the integer range can be considered linear.
  • Figure 22 illustrates the use of ticks for measuring spatial coordinates of a 1x7 array 2-dimensional TCF. The first dimension is the alpha angle and the second dimension is the beta angle.
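The tick scheme above can be sketched as a conversion function. This is an illustrative sketch with invented names; only the rule (divide one dimension's FOM into n equal intervals and record sample positions in ticks) comes from the text.

```python
# Hedged sketch: converting a cTCF tick value back to an angle given
# the FOM extent. Tick values stay valid if the FOM is rescaled, since
# only the conversion changes.
def tick_to_angle(tick, n_ticks, fom_min_deg, fom_max_deg):
    span = fom_max_deg - fom_min_deg
    return fom_min_deg + tick * span / n_ticks

# A 7-sample scan line across a 70-degree FOM, described with 70 ticks:
sample_ticks = [5, 15, 25, 35, 45, 55, 65]
angles = [tick_to_angle(t, 70, -35.0, 35.0) for t in sample_ticks]
assert angles[0] == -30.0 and angles[-1] == 30.0
```

Non-uniform optics could be captured simply by recording non-evenly-spaced tick values per sample, with no change to the conversion.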
  • the tTCF frame gives the relative sample time of each sample within the TCF. These times are created by a sensor calibration process. The times within the TCF are measured in time ticks; depending on the timing resolution desired, the number of time ticks can be increased or decreased during the calibration process. Time ticks divide the TCF frame duration (time of last sample minus time of first sample) into a number of evenly spaced intervals. Each sample time occurs on a specific time tick relative to the first sample, and this time for a particular sample is captured in the tTCF cell. Time ticks in the cell do not need to be consecutive; in fact, ideally there should be at least an order of magnitude more ticks than samples in a particular dimension. This will enable the ticks to describe any aberrations or non-linearities in the timing.
  • the relative time of any measurement can be calculated by using the tTCF (interior orientation time) and the timestamp in the start tag.
  • tTCF internal orientation time
  • Figure 24 shows this one to one relationship between the timing TCF and the measurement TCF.
  • the corresponding tTCF time tick measurement gives the offset time in time ticks from the time stamp of a particular measurement sample. The period of a tick is equal to the TCF duration divided by the number of time ticks. So the offset from the time stamp is equal to the tick measurement value times the time period of one tick.
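The time-offset rule above can be sketched directly. This illustrative function implements the stated arithmetic: absolute sample time = TCF timestamp + tick value × (TCF duration / number of time ticks).

```python
# Sketch of the relative-time calculation described above.
def sample_time(timestamp, tcf_duration, n_time_ticks, tick_value):
    tick_period = tcf_duration / n_time_ticks
    return timestamp + tick_value * tick_period

# A 10 ms frame described with 1000 time ticks; a sample on tick 250:
t = sample_time(timestamp=100.000, tcf_duration=0.010,
                n_time_ticks=1000, tick_value=250)
assert abs(t - 100.0025) < 1e-9
```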
  • the IFOM may be described by a single number (angular range) representing the angular range of the IFOM. If a more detailed profile of the IFOM is required then the function model may be used.
  • Figure 25 shows an example IFOM profile.
  • the IFOM can be described using a function model as illustrated below. This would be described as a 2-dimensional model with data defined in an array of 1 row and 15 columns.
  • the IFOM angular range is divided into a number of equal-angular intervals called ticks. Each column would contain 1 sample or data point characterizing the function.
  • the data points describe the normalized magnitude of each tick increment. In this example 15 data points describe a normalized tear-shaped IFOM profile. There can be more than one IFOM profile for any transducer.
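The function-model idea for the IFOM can be sketched as a lookup over a normalized profile. The 15-point profile and the function below are invented for illustration; they are not the example in the figure.

```python
# Hedged sketch: the IFOM angular range is divided into equal ticks and
# each data point holds the normalized magnitude at that tick.
ifom_profile = [0.05, 0.1, 0.2, 0.4, 0.7, 0.9, 1.0, 1.0,
                1.0, 0.9, 0.7, 0.4, 0.2, 0.1, 0.05]

def ifom_magnitude(angle_deg, angular_range_deg, profile):
    """Look up the normalized magnitude at an angle within the IFOM."""
    half = angular_range_deg / 2.0
    frac = (angle_deg + half) / angular_range_deg      # 0..1 across range
    idx = min(int(frac * len(profile)), len(profile) - 1)
    return profile[idx]

assert ifom_magnitude(0.0, 3.0, ifom_profile) == 1.0   # peak at center
```

When only a single angular-range number is supplied, a processor could fall back to a uniform profile; the function model adds detail only where it is needed.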
  • the frequency response for receivers or power spectral density for transmitters is another characteristic which can be modeled with the function model. They can simply be modeled by two numbers 1) the center frequency and 2) bandwidth. If a more detailed profile is required then a function model may be used to describe the profile.
  • the frequency range is again divided into a number of ticks. The data points represent the normalized response for the frequency corresponding to each tick over the frequency range. The frequency range is then centered on the center frequency.
  • Figure 26 illustrates an example frequency response function.
  • the Input-Output transfer function is another characteristic which may use the function model to characterize.
  • the input/output transfer function describes the transfer function between the input stimulus and the output response in the case of a receiver, and the input signal and the output energy in the case of the transmitter.
  • the input output transfer function can be modeled with a function model as described below.
  • the response or signal range is divided up into a number of different measurements. In this function the measurement is always plotted on the x-axis as the independent variable.
  • the stimulus (for receivers) or response (for transmitters) is plotted on the y-axis as the dependent variable.
  • Figure 27 illustrates the input-output function for both receivers and transmitters. The range is again divided up into a number of equally spaced data points.
  • the function can be profiled by the set of data points, each corresponding to the set of equally spaced range values. Note however that the scale of the range can be linear or logarithmic.
  • the input-output function is described as a normalized function. Gain (multiplicative) and bias (additive) factors are used to describe the actual input-output transfer function. The gain and bias values may be changing, in which case they may be monitored by a sensor. In the following example 16 data points are used to characterize the input-output transfer function (i.e., if you had an 8-bit integer response with a range of 256, you could list the stimulus that corresponds to each of the responses).
  • the range for the transfer function is defined by x_min and either x_max or x_range. The range corresponds to the allowed values of the measurement.
  • the units of the domain are the same as the units for the measurement.
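To make the gain/bias arithmetic concrete, here is a minimal sketch (assuming, purely for illustration, a 16-point normalized table, an 8-bit response range, and a nearest-entry lookup, none of which are mandated by the text):

```python
# Illustrative 16-point normalized input-output transfer function (linear
# here for simplicity); real tables would come from calibration.
normalized_tf = [i / 15 for i in range(16)]

def response_to_stimulus(response, gain, bias):
    """Map a raw 8-bit response to a stimulus estimate via the normalized
    table, then apply the multiplicative gain and additive bias factors."""
    index = round(response / 255 * 15)  # nearest table entry
    return gain * normalized_tf[index] + bias

print(response_to_stimulus(255, 100.0, 10.0))  # -> 110.0
```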
  • One of the features of TML is its flexibility to adapt to transducer modeling configurations.
  • the gain measurement which is a characteristic of the input-output transfer function can be a fixed value in a field or the field can refer to another transducer to describe its changing state (refer to the dependency_id_ref attribute).
  • Another way in which TML offers flexibility in the characterization of transducers is by the assignment of specialized TCFs.
  • TCF gain equalization factors
  • a transducer characteristic such as frequency response can be measured with a single value, or it may be assigned a model to describe the frequency profile.
  • the characteristics of the frequency response profile can be defined by constant values or the characteristics can be assigned to either a TCF model or a transducer.
  • the TCF model would describe any variations across the TCF while the transducer would capture any changes as a function of time of any of the parameters.
  • the fcn_modify attribute describes how these methods interact.
  • TML is a mark-up language using the commercial standard XML (extensible Mark-up Language).
  • XML uses a robust description technique (Document Type Definition (DTD) or XML Schema) to describe the data model and organization of data elements being exchanged in an XML file or stream.
  • the TML specification describes a unique DTD and XML Schema which explains the relationship of the data elements used to describe a transducer to a common processor.
  • sample XML implementations are described further in this document. It should be understood that only one DTD is required to characterize TML.
  • the DTD is not modified to accommodate different transducers or different transducer characteristics. This enables a common processor which knows how to handle the DTD to process any file which is validated using it. Validation is the term used to describe that an XML file is in compliance with the rules set forth in the DTD or Schema. The actual transducer characterization takes place in the XML file or stream.
  • TML is the root element in a file or stream.
  • <tml> is the first and last thing sent in a TML data exchange. Within the <tml> element there are two other elements: <data_desc> and <data>. These two elements do exactly what their names imply.
  • the data description ⁇ data_desc> element describes the data ⁇ data> element.
  • the data element contains the transducer data and the data description element contains the description of the transducer data.
  • the DTD or XML Schema provides the rules by which an XML file is created. Readers and writers of XML files use the DTD or Schema to validate that the file or stream was created properly. This check is completed on every file or stream, so the chances of improperly packed data are very small.
  • Synchronizing metadata is paramount to precision geo-location of targets imaged by imaging transducers, or more fundamentally transducer fusion.
  • the concepts described here will enable metadata which is changing (e.g. aircraft position and attitude, receiver gain, transducer mode, diagnostic data, etc. ) to be described by a sensor either real or virtual.
  • the metadata will be related by a dependency mechanism to be described later.
  • Figure 28 illustrates time-sensitive metadata being described by data from a meta-sensor. The meta-sensor is in turn described using the common modeling techniques, referred to in Figure 28 as Fundamental Metadata.
  • FIG. 30 illustrates a topology diagram for a system of transducers. The bubbles represent transducers and the lines in between represent the relation of one transducer to another. This is similar to entity-relationship data modeling.
  • transducer system topology provides the fundamental descriptions of how all of the transducer data relates. Not all systems are alike so the system topology is described on a system to system basis. This specification defines four types of relations: attached, dangled, position, and attitude.
  • An attached sensor is typically an in-situ sensor measuring other parameters to support its host sensor. The attached relationship is described in the attached sensor's nomenclature. An example of an attached sensor would be a diagnostic sensor attached to the primary imaging sensor measuring another variable (such as vibration or temperature). An attached element is empty and simply references another sensor. The presence of an attached element means that the sensor referenced by the dependency element should be treated as if it had the exact same location and attitude as the sensor referenced by the attached element.
  • the attached relation is used to attach sensors to transducer characteristics which describe changing parameters about a transducer system, such as receiver gain.
  • the attached relation implies that there is a characteristic to "hook to".
  • the sensor is measuring a changing parameter for one of the transducer characteristics that TML models.
  • the dangle dependency is like the attached dependency except that there is no internal hook to a transducer characteristic.
  • the dangle transducer simply hangs off of another transducer and provides additional measurements relating to the transducer as a whole.
  • Examples of a dangle relation would be a temperature measurement of a transducer's detector, or the vibration load on a particular transmitter.
  • the position relation identifies the position of a transducer relative to the earth or another transducer.
  • the position can be a fixed location or it can be variable, where the position is measured by a sensor.
  • the position and attitude relations are the principal relations for determining the exterior orientation of any transducer.
  • the attitude relation is similar to the position relation, except that the orientation of a transducer is described relative to the earth or another transducer. If the orientation is variable it may be described with a sensor.
  • Figure 30 illustrates an example of a transducer system topology.
  • This example uses two primary transducers (CCD camera and IRLS) with six supporting transducers (roll encoders, vibration, gain, position, and attitude).
  • the primary sensors are positioned and oriented relative to the roll encoders.
  • the roll encoder position is fixed relative to the GPS, and the roll encoders measure their attitude relative to the IMU.
  • the gain is an attached dependency of the IRLS, measuring the setting for the gain measurement which is used in characterizing the input-output transfer function of the IRLS.
  • the vibration is a dangled dependency, only measuring the vibration of the CCD camera.
  • the model of the transducer system topology describes the relationship of the various sensors used in the multi-sensor system. This modeling provides a cohesive picture to fuse all of the data together for the various sensors on board a platform. This model describes the chain of sensors and what parameters, if any, are modified by previous sensor measurements in the chain. For example, a detector look direction relative to local earth is modified by the gimbal angles relative to the inertial measurement sensor of the platform and the attitude of the platform relative to local earth.
  • This sensor environment data enables vectors to be manipulated and common reference frames to be converted into other common reference frames. In-situ sensors may also be attached to another sensor to provide other measurements such as sensor state or diagnostic information. These can be described as well in the system topology.
  • the dependency element in the system element of the TransducerML stream provides the sensor system topology.
  • Imaging sensors must rely on other sensors to provide them with position and attitude information, such as IMUs, GPSs, rotational and translational encoders, and many other sensors which are available to assist in the positioning and orientation measurement of the primary imaging sensor.
  • a sensor may be attached to a gimbal system which can steer the sensor.
  • the gimbal system may be mounted relative to an Inertial Navigation System (INS) that measures a platform's position and attitude. To calculate the sensor's attitude relative to the earth, the transducer's relationship to the INS must be known.
  • the sensor's relationship to the INS can be calculated by knowing the gimbal's position and attitude relations to the INS.
  • the gimbal roll encoder measures its attitude.
  • the sensor's relationship to the earth can then be calculated by knowing the INS's position and attitude relative to the earth.
  • the INS measures its own position and attitude.
  • Figure 31 shows this chain of measurements from various sensors to provide position and attitude data for the primary imaging sensor.
  • the common reference frame to serve as a datum for the combining of all sensors is the Earth Centered Earth Fixed reference system.
  • the spatial and temporal Transducer Characteristic Frames describe the interior orientation of a transducer.
  • the exterior orientation is described through the transducer's position and attitude dependencies. Several iterations of the position and attitude may be necessary depending on how many coordinate transformations are required, due to intermediary sensors.
  • the dependencies describe the relationships such as attitude and position of sensors to other sensors and to earth. Every sensor should be traceable back to earth.
  • To determine the exterior orientation of the last transducer in the chain the position and attitude relations must be summarized through a process of coordinate transformations and vector additions.
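The summarization of position and attitude relations can be sketched as a chain of rotation-plus-translation links (a simplified illustration only; a real system would use calibrated rotation parameterizations and propagate errors as described below):

```python
# Sketch (not from the specification): each link in the sensor chain is an
# (R, t) pair -- rotation matrix R and translation t -- taking vectors
# expressed in the child frame into its parent frame.
def mat_vec(R, v):
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def compose(links, v):
    """Apply (R, t) links in order, from the sensor frame out to earth."""
    for R, t in links:
        v = [a + b for a, b in zip(mat_vec(R, v), t)]
    return v

IDENTITY = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# Two pure translations: sensor offset on gimbal, gimbal offset on platform.
print(compose([(IDENTITY, [1.0, 0.0, 0.0]),
               (IDENTITY, [0.0, 2.0, 0.0])], [0.0, 0.0, 0.0]))
# -> [1.0, 2.0, 0.0]
```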
  • Figure 32 shows the resultant exterior orientation of the last transducer in the chain. Since not all transducers are attitude-dependent, such transducers are not required to have attitude dependencies.
  • target coordinates can be calculated.
  • the pixel on the target represents a specific ambiguity space.
  • the ambiguity space is very precise relative to the transducer reference system. If the transducer reference system can be positioned accurately enough relative to earth through the exterior orientation, then the position and orientation of the ambiguity space relative to earth can be determined. Now it is known that the location of the target is somewhere in the ambiguity space, and it is left to the sensor processors to intersect the ambiguity space with a terrain model or another ambiguity space to determine the target's three-dimensional earth coordinates. To determine the accuracy of the ambiguity space position, one must propagate the errors from every measurement along the chain which resulted in the exterior orientation. The error of the target position is then the result of the exterior orientation error, interior orientation error and the terrain model error.
  • the analog-to-digital sampling of transducer measurements should be at least at the Nyquist rate of the changing observation the transducer is measuring.
  • the digital samples are compiled into the characteristic frame of the transducer and given a timestamp for the time when the first sample of the TCF was acquired. This timestamp should account for any processing latencies.
  • the goal is to have the time stamp coincide with the time in which the observation state occurred.
  • the timestamp comes from a clock which is common to all of the transducers in a system. This is referred to as the system clock, and it provides relative time for the relative temporal alignment of transducer data.
  • Figure 33 illustrates that transducers in a system all have their own rates at which they sample and output a TCF's worth of data.
  • This concept enables a processor to maintain the relative timing relationship between TCFs of different transducers by the use of the system clock time tag.
  • a stable clock should be utilized. Temporal skew due to transport and archive data buffering and sequencing and sensor data latencies can be accounted for.
  • a master or system clock shall be used which can time tag all transducer data and provide a basis for the temporal alignment of data.
  • Figure 33 shows how various sampling times of different transducers may look as they are plotted against time.
  • a transducer may be described as either real or virtual.
  • a virtual transducer can be created to characterize data which is changing as a function of time. This will enable the state of any data or metadata to be known at any instant in time, because all will be time tagged with a relative clock.
  • This metadata element can now become a "meta-sensor" which can describe time-varying metadata.
  • all transducers are treated equally or captured and described in the same manner (using a common model). To determine the state of the system of transducers at any instant it will become necessary to interpolate sensor data from one update to the next. By maintaining the proper temporal relationships of the sensor updates this can be accomplished very precisely.
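The interpolation between time-tagged updates can be sketched as follows (linear interpolation is shown purely for illustration; the text does not mandate a particular interpolation scheme):

```python
def interpolate_state(t, t0, v0, t1, v1):
    """Linearly interpolate a sensor value between two time-tagged updates."""
    alpha = (t - t0) / (t1 - t0)
    return v0 + alpha * (v1 - v0)

# A sensor reported 10.0 at t=1.0 s and 20.0 at t=2.0 s; estimate t=1.5 s.
print(interpolate_state(1.5, 1.0, 10.0, 2.0, 20.0))  # -> 15.0
```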
  • This document will define a common model in which to characterize all transducers. This model contains a set of fundamental metadata required to describe any transducer to a common transducer or sensor processor. This fundamental transducer model will be described later. We will use this model to describe the sensors which are in turn used to describe a principal transducer.
  • This specification will describe a method of capturing transducer data which is unlike other methods.
  • the data from a transducer will be handled in units of the TCF.
  • Each transducer will have its own TCF configuration.
  • Each TCF of transducer data will be encapsulated in its own shell (measurementTCF) and time stamped with the acquisition time from a system clock and the transducer id number for where it came from.
  • Data from all transducers within a system will be captured this way.
  • Each transducer's data will be captured as though it were the only transducer in the system.
  • the transducer data will not be accumulated to form rectangular images as other standards do, nor will sensor data be inserted into the header of another sensor.
  • the configuration (size and shape) of the TCFs for a single transducer are all identical, such that there is a one-to-one correlation between samples of a TCF.
  • the 5th sample of the measurementTCF (mTCF) corresponds to the 5th sample of the coordinateTCF (cTCF), which corresponds to the 5th sample of the timingTCF (tTCF).
  • spatial characterization of the sensor frame is accomplished by a sensor manufacturer, or at a central calibration facility, or the calibration may be approximated by the sensor system integrator, depending on the degree of accuracy required.
  • the transducer characterization data needs to be sent to the processing location once. If the processing location already has the transducer characterization data it is not necessary to resend it. As hereinafter discussed, as the orientation and position of the transducer changes, appropriate data is communicated via other sensors to the processing location which allows for the rapid interpretation of the changing spatial information.
  • the timestamp is the precise time in which the sample measurement was taken. This time is used to time correlate all of the sensors in the system. This time will allow temporal corrections to be made by the processor to maintain relative time relationships between sensors. Temporal skew due to transport and archive data buffering and sequencing and sensor data latencies can be accounted for.
  • a cluster is a data structure mechanism to reduce the overhead required to send very small TCFs. For example, to send audio information where a TCF is composed of one Sample, a Sample is composed of one measurement of 8 bits, and the TCF frame rate (sample rate) is 22 kHz, the small headers of TransducerML would soon overwhelm the data and transmission overhead would be very high. To compensate for this, the cluster structure was introduced to group multiple TCFs into one "transmission packet" or cluster.
  • Clusters can also be useful when sending very large TCFs, on the order of many MB. It may be desirable for a number of reasons to break up the large TCF into a number of clusters.
  • a cluster may contain one or more TCFs, or more than one Cluster may be required to encapsulate a single TCF.
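The overhead argument for clustering small TCFs can be sketched numerically (the 32-byte header size is an assumption for illustration only, not a figure from the specification):

```python
def overhead_ratio(header_bytes, tcf_payload_bytes, tcfs_per_cluster):
    """Fraction of a cluster spent on header rather than transducer data."""
    total = header_bytes + tcf_payload_bytes * tcfs_per_cluster
    return header_bytes / total

# One 8-bit audio sample per TCF: a header per TCF is almost all overhead,
# while grouping 1000 TCFs per cluster makes the header negligible.
print(overhead_ratio(32, 1, 1))     # ~0.97
print(overhead_ratio(32, 1, 1000))  # ~0.03
```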
  • FIG 34 shows the structure of the Cluster.
  • a Cluster may be composed of one or more Transducer Characteristic Frames (in the case where TCF are small) or the TCF may be broken up into several Clusters (in the case where TCFs are large).
  • the TCF is composed of one or more Samples, and each Sample is composed of one or more measurements.
  • TCFs may be grouped into Clusters for transmission or archive efficiency. This may be required when the TCFs are relatively small. When TCFs are relatively large, each TCF may be split among a set of Clusters. This would enable less latency in the sensor data and would provide synchronization points more often.
  • Each TCF is composed of one or more Samples. The Samples may be arranged in an n-dimensional array as defined by the system initialization data. Each Sample is composed of one or more Measurements.
  • the TransducerML stream will be applicable to several external system configurations. There may be a real time full rate connection between the sensor collector and the processor, in which case TransducerML is used in a live transport mode. There will be “data on the wire” when the sensors are on and “blank space” when sensors are off. Sensor data may be recorded on digital media for replay at a later time or place.
  • the data description element is a sub element within the TransducerML stream.
  • the data description element is composed of the models (TCF models and function models), transducer descriptions, and the transducer dependencies (system topology).
  • the data description element is inserted into the TransducerML stream at any point where the data description element configuration changes.
  • Figure 35 illustrates a data description element inserted in the data stream to identify that the data description has been updated at this point.
  • the update may be a change or an addition.
  • the data description has a system clock time tag in the start tag of the element. This enables the Processor to know exactly when the change occurred.
  • the data element contains the real-time multiplexed Stream transducer data.
  • the data_desc element data may need to be updated during a transducer acquisition period. The updates are inserted when the change occurs.
  • the Data stream is composed of sensor clusters. If more than one sensor is in the data stream the sensor clusters will be multiplexed together. Figure 36 and Figure 37 show how this process takes place:
  • Figure 36 Time sequence and duration of sensor events in data stream
  • Figure 37 shows the real-time sensor data multiplexed onto a serial channel.
  • the Cluster is written to the channel at the completion of the Cluster.
  • the timestamp (TS) represents the time of the beginning of the Cluster.
  • TS represents the time of the beginning of the Cluster.
  • the Clusters could be multiplexed across the channels using any number of multiplexing schemes.
  • the sensor data and support data to describe the sensor are packaged for transport to a remote location or to an archive.
  • the shell is generic and uses a markup language as a carrier for the data elements of the model.
  • the extensible Mark-up Language (XML) was chosen as the shell for the exchange and archive of the transducer data and the transducer data description. The shell does not add significant overhead to the basic data elements.
  • Figure 37 illustrates an exemplary concept for transporting the data elements with the data elements put into tables and transported using a simple header.
  • the header or mark-up for sensor data contains a high resolution time tag and an identifier for the originating sensor type of table being transported.
  • the time tag represents the value of time for the start of each frame.
  • Fig. 8 illustrates varying sample periods for different sensors. Sensors with different update rates are tagged accordingly to maintain frame-to-frame timing relationships.
  • Figure 38 gives an idea of how a TML data stream file might look. This is a simplified example of an imaging sensor, an IMU, and a GPS.
  • the sequence TCF has array coordinates associated with each sample location in the TCF.
  • the sequence TCF is sent in the same sequence.
  • when the sequence TCF is received, it is re-sequenced to put the data back into its intended configuration.
  • when the sequence TCF is sorted, the associated TCF models and data are sorted in the same sequence.
  • Transport Order Characterization: (1,1)(1,2)(1,3)(1,4)(2,1)(2,2)(2,3)(2,4)
  • sequence order is the order given to position each individual sample in its proper array position. For example, every sample of a two-dimensional data structure will have two coordinates indicating its position in the Transducer Characteristic Frame. The coordinates will represent row and column numbers.
  • the next order is the sampling order.
  • the sampling order is given by the timing order in which each sample of the TCF was acquired.
  • the transport order describes the order in which the samples occur in the cluster. The order of the samples may have to be sorted in order to be put back into the proper sequence order.
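The re-sequencing step can be sketched as follows (all sample values and coordinates are illustrative, not taken from the specification):

```python
# Sketch: samples arrive in transport order; the sequence TCF supplies each
# sample's (row, col) coordinate, letting the processor rebuild the intended
# two-dimensional configuration.
transport_samples = [17, 42, 9, 23]
sequence_coords = [(2, 1), (1, 1), (2, 2), (1, 2)]  # (row, col) per sample

grid = dict(zip(sequence_coords, transport_samples))
rows = max(r for r, _ in grid)
cols = max(c for _, c in grid)
resequenced = [[grid[(r, c)] for c in range(1, cols + 1)]
               for r in range(1, rows + 1)]
print(resequenced)  # -> [[42, 23], [17, 9]]
```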
  • the XML document represents a stream.
  • the opening tag initiates the stream.
  • a closing tag terminates the stream.
  • the first element in the stream should be a system element.
  • the remainder of the stream is any sequence of data_desc elements and data elements.
  • the element TML is the default root element. Specific protocol implementations of TransducerML may replace the root element.
  • the default root element designates the version of the schema (document type). When implemented in protocols, the namespace designation for the elements will indicate the version.
  • TML implements a time-tagged implementation of XML. What this means is that a system time clock count is inserted into the start tag of tml elements to signify the relative time (from the sys_clk) at which the data contained in each TML element was acquired.
  • the sys_clk should be of sufficient resolution to adequately relate time differences at a transducer sub-sampling interval (approximately an order of magnitude faster than the fastest sample clock in the system) and have enough digits to minimize the possibility of a rollover.
  • the tml element is the root element. Every TML file or stream will have a beginning <tml> and ending </tml> tag.
  • TML Transducer Mark-up Language
  • XML extensible Mark-up Language
  • DTD Document Type Definition
  • TML is specifically designed for the exchange of simultaneous sensors and emitters (transducers) data. TML does not define how to represent the transducer data.
  • the version attribute identifies the version of TML to which the file or stream complies. Currently this value is fixed at "0.92beta".
  • the data_desc element contains all of the information required of a TML processor to process data from any set of transducers.
  • the data_desc element may or may not become part of the TML data exchange, depending on whether the subscriber already has the data_desc information. If the TML subscriber has never seen or processed the particular set of transducers or the particular configuration of transducers, then the subscriber should request that the publisher send the data_desc element prior to sending the transducer data. The subscriber also has the option to download the particular system element from a secure URN prior to receipt of the TML data.
  • the data_desc element was designed with plug and play transducers in mind. The transducer element and the applicable model elements can be carried internally to each transducer.
  • When a transducer is connected to a system it automatically supplies the system with the transducer element information, including all of the model and calibration data. This data is automatically integrated into the TML data stream.
  • the systems integrator need only configure the relationship of the transducer to the other transducers.
  • a data description of a system of transducers may change after the stream of data has begun. These changes come as data_desc elements interleaved between data elements.
  • the data description update can contain new sys_clk, models, transducer models, or relations updates. For transducer updates and dependency updates, only the information that has changed is sent. Updates are sent within the proper nested elements.
  • the <data> element contains all of the <cluster> elements which carry the transducer data. This element carries "pure", "raw" transducer data.
  • the <data_desc> element must be read to know the format, structure, and relationships of the data clusters.
  • the data_desc element contains the sequence of elements: sys_clk, models, transducers, and relations, in that order.
  • a data_desc element has a unique identifier.
  • the data_desc element provides the metadata the ability to describe the various transducers which make up a transducer system. It provides a description of each of the transducers, how the transducer data is structured, and the relationships between the transducers.
  • the data_desc element should be resident at the destination system prior to receiving any transducer data.
  • the data_desc element may be omitted from the stream if the receiving system already has the particular system element. Likewise any elements within the system element may be omitted if they are already resident at the destination (e.g. transducer model).
  • the system clock is used to temporally align the start of transducer characteristic frames among frames from the same transducer and frames from other transducers.
  • the sys_clk is a stable, high resolution counter whose count value is latched and recorded in the start tag of each transducer cluster at the instant the first sample of the first TCF within the cluster is measured. Ideally the sys_clk should run at least an order of magnitude higher than the highest sampling rate of any of the captured transducers.
  • the sys_clk maintains relative alignment of transducer data.
  • a chronograph can be utilized as one of the sensors in the transducer suite.
  • the output TCF of the chronograph will have a sys_clk value in the start tag of the TCF or cluster.
  • the sys_clk value and the world time captured by the chronograph sensor are then relatable.
  • the period element describes the time period in seconds of one count of the sys_clk.
  • the rel_accy element describes the average drift in temporal accuracy over time. This value is unsigned. A value such as 1E-9 would indicate a one count error in 1E9 counts of the clock. The relative time accuracy needs to be taken into account when comparing data from different times. The temporal error may not be significant unless the time difference is large or the rel_accy is large. Temporal errors are a contributing error for the derivation of resultant positional and temporal error estimates.
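The rel_accy arithmetic can be sketched directly (a hypothetical helper; the function name is illustrative):

```python
def clock_drift_error(elapsed_seconds, rel_accy):
    """Worst-case accumulated sys_clk error after a given elapsed time,
    given an average relative drift of rel_accy (e.g. 1E-9)."""
    return elapsed_seconds * rel_accy

# With rel_accy = 1E-9, an hour of elapsed time accrues at most ~3.6 us of
# temporal error -- usually negligible unless the time difference is large.
print(clock_drift_error(3600.0, 1e-9))
```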
  • Models form the basis for describing the individual characteristics of each transducer.
  • One objective of TML is to facilitate plug-n-play transducers into a system.
  • the transducer models may be derived by the manufacturer or a certification facility and installed into the transducer or transducer interface. Preferably the models are carried along with the transducer and integrated into whatever system the transducer is plugged into.
  • Each transducer will have a set of models to describe its characteristics to varying degrees of fidelity. The higher the fidelity of description required, the more robust the model descriptions are. In many cases the models can be implied (i.e. derived without actually sending or receiving them) as a first-order fidelity. If higher fidelity is required then the detailed models must be sent.
  • function models are for characterizing transducer properties such as frequency response, input-output transfer functions and IFOM beam patterns.
  • TCF models are for characterizing transducer properties which have parameters that vary as a function of sample position within the TCF, such as detector look angles, sample time within TCF, and radiometric correction.
  • the coordinate TCFs (cTCFα and cTCFβ) used to describe the α and β angles of the look vector of each particular sample within the TCF of an optical camera can be estimated by knowing the α and β range (FOV) and the number of rows and columns in that area.
  • the TCF models used to describe a transducer have the same number of samples, the same number of dimensions and the same size of each dimension as the TCF used to capture the data from the transducer, so there is a one-to-one correlation between the measurement and the TCF model characteristic.
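The first-order estimate of per-sample look angles from the FOV and array size can be sketched as follows (assuming evenly spaced samples centered on the boresight; all names are illustrative):

```python
def look_angle(index, count, fov_deg):
    """First-order center look angle of sample `index` (0-based) among
    `count` evenly spaced samples spanning `fov_deg`, measured from the
    boresight. Apply per row (alpha) and per column (beta)."""
    return -fov_deg / 2 + (index + 0.5) * fov_deg / count

# Four columns across a 40-degree field of view.
print([look_angle(i, 4, 40.0) for i in range(4)])  # -> [-15.0, -5.0, 5.0, 15.0]
```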
  • Function models are for modeling functions. The function is described using a set of data points. The set of data points represents the range (dependent variable or y-axis). The range data points comprise the f_model. To use the f_model the domain (independent variable or x-axis) must be known. The place where the f_model is used in the TML structure will describe the domain by giving its start value, end value or range, and scale (log or linear).
  • TCF models are for characterizing transducer attributes which have parameters that vary as a function of sample position within the TCF. This element contains the elements to describe a particular TCF model. TCF models relating to a single transducer all have the same structure, i.e. dimensionality, size, and sequence (or order). There is a one-to-one relationship between corresponding elements within a TCF from a single transducer. For example: the fourth sample of a coordinate TCF corresponds to the fourth sample of the timing TCF, which corresponds to the fourth sample of the measurement TCF, and so on.
  • the model will either be a function model or a TCF model type.
  • a unique id or identifier is given to the function model.
  • a unique id or identifier is given to each tcf_model as well.
  • TCF models other than coordinate, timing, and sequence modify a transducer single value characteristic that may vary over a TCF such as gain.
  • the TCF can either replace the single value characteristic, add to it, or multiply by it (replace | + | ×).
  • the following rules describe how the TCF model and the attached sensor modify the transducer characteristic. If no TCF or attached sensor is available for a characteristic then the characteristic is the single value found in the characteristic element. If a TCF model is available then the TCF model will modify the single value according to the "fcn_modify" attribute.
  • the attached sensor value will modify the single value according to the "fcn_modify" attribute in the attached sensor. If both a TCF model and an attached sensor are present for a single characteristic then the TCF model will modify the single value according to the "fcn_modify" attribute, and the attached sensor will modify the resulting value of the TCF model and the fixed value according to the "fcn_modify" attribute of the attached sensor.
  • Each TCF sample (single element value) (replace | + | ×) (fixed characteristic value), or
  • each TCF sample (single element value) (replace | + | ×) (variable sensor value), or
  • each TCF sample (single element value) (replace | + | ×) (fixed characteristic value) (replace | + | ×) (variable sensor value).
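The modification rules above might be sketched as follows. The helper names are hypothetical, and "replace", "add", and "multiply" stand in for whatever literal values the "fcn_modify" attribute actually takes:

```python
def apply_fcn_modify(value, modifier, fcn_modify):
    """Apply one modifier to a characteristic value according to its
    fcn_modify attribute: 'replace', 'add', or 'multiply'."""
    if fcn_modify == "replace":
        return modifier
    if fcn_modify == "add":
        return value + modifier
    if fcn_modify == "multiply":
        return value * modifier
    raise ValueError(f"unknown fcn_modify: {fcn_modify}")

def effective_characteristic(fixed, tcf_sample=None, tcf_mode=None,
                             sensor_value=None, sensor_mode=None):
    """Per the rules above: the TCF model sample (if present) modifies
    the fixed single value, then the attached sensor value (if present)
    modifies that result."""
    result = fixed
    if tcf_sample is not None:
        result = apply_fcn_modify(result, tcf_sample, tcf_mode)
    if sensor_value is not None:
        result = apply_fcn_modify(result, sensor_value, sensor_mode)
    return result
```

With neither a TCF model nor a sensor, the characteristic is simply the fixed single value, as the rules state.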
  • the transducers element contains the set of all transducer elements in the system (or suite) of transducers.
  • the transducer element is independent of the relation element. Each transducer stands alone until a systems integrator defines the relationships between the transducers.
  • the transducer element contains the set of all data required to characterize a single transducer.
  • Transducers may use models to characterize parameters or characteristics of a transducer. The models may be shared among transducers.
  • transducer id A unique id or identifier given to each transducer.
  • the convention for transducer id is to begin each id with the letter "t" followed by a sequential number. Note: when a transducer is removed from one system and inserted into another the sequential id may change.
  • the universal resource name identifies a location where the characteristics of a particular transducer can be found. This enables a transducer processor to read the transducer characteristics prior to receiving the transducer data, negating the requirement to transmit the transducer description data along with the transducer data.
  • This element contains the set of <dependency> elements.
  • the <Transducers> and <Models> elements stand alone and do not provide any connections (relations) between them.
  • the <relations> element provides the exterior orientation and relationships between transducers and transducers, and transducers and models.
  • the <relations> element data is completed by the transducer system integrator, whereas the <transducers> and <models> element data contain the interior orientation of the transducer and can be carried with each individual transducer for a plug and play capability.
  • Relations are defined outside of the transducer definition in order to enable plug-and-play of transducers into different systems. The relations section enables transducers to be plugged together into a data_desc definition.
  • the position and attitude are the key properties necessary for most sensor fusion efforts. However, any property or characteristic of a transducer can be dependent upon another sensor's measurement.
  • the relations element contains zero or more dependency elements. Each dependency element references a particular transducer by its unique identifier. All the dependencies for a particular transducer should be defined within a single Relations element.
  • This element describes a single relationship between a transducer and a transducer or a transducer and a model.
  • Each dependency will be assigned a dependency id number.
  • the dependencies do not identify where the dependency comes "from"; the dependency id needs to be traced back to find where the dependency originates.
  • the dependency only points "to" a model, a measurement or a transducer.
  • the model dependencies identify the TCF or function model assigned to a particular transducer.
  • the attached dependency comes from a dependency reference from within the ⁇ transducers> element.
  • the dependency will assign a measurement from an external (real or virtual) transducer to track a changing parameter from within a subscribing transducer.
  • the dangle dependency is to associate other types of measurement to a particular transducer. This can be used to communicate mode settings or to associate diagnostic or transducer system health data.
  • the dangle dependency does not originate from within the transducer element (i.e. does not have a "from" point).
  • the dangle dependency provides data about the parent transducer as a whole.
  • the description of the dangled transducer provides specific description as to its relationship with the parent transducer.
  • the position and attitude dependencies provide the exterior orientation of a transducer relative to the earth or another transducer. All of the transducers should be referenced back to the earth (ECEF reference system) as the common datum.
  • transducer id number: the id number of the parent transducer (e.g. t004). Relationships always start from the top or principal transducers as the parent. This is the transducer that:
  • the models element contains one or more model elements.
  • the model element contains a datapoints element and may contain a description element.
  • the function model can describe any 2 or 3 dimensional function such as IFOM, frequency response, or input-output transfer functions.
  • the description element is generic throughout the schema.
  • a model has an identifier unique within the data_desc definition.
  • a 2-dimensional function model defines a curve, i.e. the normalized range of values (y) of a function where the domain (x) is defined elsewhere. The domain is defined by parameters such as x_range and x_center. The data points are evenly distributed across the domain (x-axis).
  • the three dimensional functions only add another dimension to the independent variable.
  • the mod_desc element is common to the model elements within TML
  • the mod_desc contains as a minimum a nomenclature for the model. If the TCF has a raster structure then the tcf_model descriptions will contain descriptions of the number of rows, columns and planes. The tcf_model datapoints will be read in column (left-to-right) then row (top-to-bottom) then plane (front-to-back) order respectively.
  • TCF models represent how transducer characteristics vary over the TCF such as relative locations of sample ambiguity spaces, sample timing relationships, and detector radiometric gain adjustments.
  • a remote transducer data structure relates to the spatial distribution of the samples which make up an image. If the model has an orthogonal data structure (the spatial distribution of samples may not be orthogonal and still have an orthogonal data structure, e.g. conical scan) then the samples within the TCF can be described as having rows, columns, and planes. If the description of a model has a row, column, or plane descriptor the model can be easily organized into an n-dimensional matrix.
  • the column increment is the most rapidly incrementing dimension of the array. The column is normally thought of as having a constant x (row) and variable y (column) within a single plane of computer memory space.
  • the row number is incremented after the column number has been incremented through the entire range. The row is normally thought of as having a constant y with variable x dimension within a plane of computer memory space.
  • the rows, columns and planes correspond to the transducer data structure and not the spatial or scanning structure of the transducer. A non-dimensional transducer will not have any rows, columns, or planes in its description.
  • a one-dimensional transducer may have rows, columns, or planes in its description.
  • a two-dimensional transducer may have rows and columns, rows and planes, or columns and planes in its description.
  • a three dimensional transducer may have all three, rows, columns, and planes in its description.
  • the samples within a TCF may or may not be sorted, so even though a TCF can be organized into a neat matrix, it does not mean that samples are in their proper position relative to their spatial orientation. If the samples are not in their proper order, then a sequence TCF will be made available to re-sequence the transducer samples within the TCF space. If a TCF does not have an orthogonal structure then no rows, columns or planes description is given even though the transducer may have a dimensionality greater than 1. When this is the case the coordinate TCFs must be utilized to position the samples in space.
  • the plane number is the last number to increment, after the row has incremented through its entire range. The plane is normally thought of as giving depth (z value) to the xy plane as it is organized in computer memory space.
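The column-fastest, then row, then plane read order described above amounts to a simple index mapping, sketched below (function names are illustrative only):

```python
def raster_index(row, col, plane, rows, cols):
    """Linear transport-order index of a sample in an orthogonal TCF,
    reading datapoints column-fastest, then row, then plane."""
    return plane * rows * cols + row * cols + col

def raster_coords(i, rows, cols):
    """Inverse mapping: recover (row, col, plane) from a linear index."""
    plane, rem = divmod(i, rows * cols)
    row, col = divmod(rem, cols)
    return row, col, plane
```

Because the two functions are inverses, an orthogonal TCF can be moved losslessly between its transport order and an n-dimensional matrix.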
  • the data points represent the ordinate, i.e. the value of the function corresponding to each value of the independent variable.
  • the independent variable is given in the location which references the function model.
  • the independent variable is a set of equidistant points on the x axis.
  • the scale of the x-axis can be linear or logarithmic.
  • This value represents the number of space-separated values in the corresponding datapoints element.
  • a transducer has an identifier unique within the data_desc definition.
  • transducers would use a uniform resource name (URN) as their identifier.
  • URN: uniform resource name
  • a stream could begin with a simple empty sensor element as follows:
  • the subscriber could check if it has this transducer definition already locally stored. If not, then the subscriber could look up the sensor in some well-known repository. If that should fail, then the subscriber could ask the publisher to send the complete sensor definition. Within the stream, elements would reference the sensor by its shorter ID attribute rather than its longer URN attribute.
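The fallback chain the subscriber follows might be sketched like this; the cache, repository, and publisher interfaces are hypothetical stand-ins for whatever mechanisms a real system provides:

```python
def resolve_transducer_definition(urn, local_cache, repository_lookup,
                                  request_from_publisher):
    """Resolve a transducer definition from its URN in the order the
    text describes: local store, well-known repository, then ask the
    publisher. repository_lookup and request_from_publisher are
    hypothetical callables returning a definition or None."""
    definition = local_cache.get(urn)
    if definition is None:
        definition = repository_lookup(urn)
    if definition is None:
        definition = request_from_publisher(urn)
        if definition is None:
            raise LookupError(f"no definition found for {urn}")
    local_cache[urn] = definition  # cache for subsequent streams
    return definition
```

Once resolved, stream elements can refer to the definition by its short ID attribute instead of repeating the URN.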
  • transducer This is a description of the particular transducer and the measurements it takes, and possibly its relationship within the system. If the transducer is a virtual transducer it shall be noted in this description.
  • the value for this property identifies the model number for the subject transducer. If the transducer is a virtual transducer this element is not required.
  • the <clust_desc> element contains the set of elements which describe the number or fractional number of TCFs per cluster (<tcf_per_clust>) and the set of elements (<tcf>) which describe the incorporated TCFs.
  • the cluster is a data structure for transporting TCFs. Sometimes it is more efficient (i.e. less overhead) to group small TCFs into a larger cluster to transport them. Similarly, in other cases it may be beneficial to split up a large TCF into smaller clusters to acquire a sync more frequently.
  • the sys_clk in the start tag is the time for the first sample of the first TCF.
  • the sys_clk for the following TCFs are calculated offsetting each TCF by the tcf_period. If the TCFs do not have periodic update rates then it is not possible to cluster TCFs.
  • This element contains the set of elements which describe the characteristics of the TCF for a particular transducer. There will be one TCF described for every transducer.
  • This element contains the set of elements which characterizes the transducer sample. Every transducer will have at least one ⁇ sample> element.
  • the transducer sample is composed of one or more measurements.
  • This element contains the set of elements which characterizes each measurement of a transducer. There will be at least one measurement for every sample. More than one IFOM element can be used so that multiple power levels can be plotted. Likewise, more than one IO transfer function can be characterized so that the hysteresis can be characterized by plotting the forward and reverse direction of the function.
  • This element contains the elements which describe the TCF.
  • the enclosed elements include the <dim> element, the <coord_sys> element, and the <no_samples> element.
  • the dimensionality of the TCF is equal to the number of spatial coordinates assigned by the coordinate TCFs. For example a CCD optical camera would have a two dimensional TCF. Each sample of the measurement TCF correlates to two coordinate TCFs which describe the alpha and beta coordinate for each sample of the TCF. Alpha and beta are the spherical coordinates relative to the transducer reference system.
  • This property will be utilized for remote sensors and emitters only. This property identifies the coordinate system used to characterize the transducer's spatial coordinates. In situ sensors have non-dimensional TCFs, so no spatial coordinates are assigned to the TCF.
  • This element contains the set of elements which describes the timing aspects of the transducer's sampling characteristics.
  • the timing characteristics include TCF time duration, TCF period, number of time ticks per TCF duration, and relative accuracy of the timing offsets.
  • the tcfmod_dep_id_ref attribute references, if appropriate, a TCF model which gives the temporal offset of each sample (in time ticks) in the TCF relative to the first sample. It should be noted that the time for the time ticks may be different than the sys_time.
  • the sys_time clock count for each sample can be calculated by taking the clock value in the start tag and adding the corresponding offset calculated as follows: (t_n × (<tcf_time_duration>/<ticks>))/<period>, where t_n is the corresponding time tick value from the tTCF for a particular sample within the TCF.
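The offset formula can be illustrated numerically with the sketch below. The example clock values are invented, and <period> is interpreted here as the sys clock period in seconds (an assumption from context):

```python
def sample_sys_clk(start_clk, tick_values, tcf_time_duration, ticks,
                   sys_clk_period):
    """Compute the sys_time clock count of each TCF sample: the clock
    value in the start tag plus the per-sample offset
    (t_n * (tcf_time_duration / ticks)) / period."""
    seconds_per_tick = tcf_time_duration / ticks
    return [start_clk + (t_n * seconds_per_tick) / sys_clk_period
            for t_n in tick_values]
```

For a 1 s TCF with 100 ticks read against a 1 kHz system clock (period 0.001 s), a sample at tick 50 lands 500 clock counts after the start tag.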
  • This element contains the set of elements which describe the spatial characteristics of the transducer. There are n <tcf_coord> elements for each transducer, where n is the dimensionality of the transducer. Each sample from a transducer will have 0 or more coordinates assigned to it. The dimensionality of the TCF describes the number of coordinates assigned to each sample. Coordinates for transducer measurement samples are contained in coordinate TCF models.
  • <coord_assoc>: there is a <coord_assoc> element for each <coord> for each transducer.
  • the number of coordinates for a transducer is defined by the dimensionality <dim> of the transducer TCF.
  • the <coord_assoc> identifies the spatial coordinate assigned to a particular coordinate of the TCF for each sample. Coordinates are contained in the coordinate TCFs.
  • a single coordinate TCF (cTCF) contains the value of a single coordinate for every sample in the TCF. If more than one coordinate is required then additional coordinate TCFs shall be used.
  • This element identifies which coordinate is assigned to the cTCF which the <tcf_coord> references using the "tcfmod_dep_id_ref" (indirect pointer) attribute. Allowed values for the <coord_assoc> element are: alpha, beta, r, x, y, and z.
  • the range describes the extent of the spatial coordinate described in <coord_assoc>.
  • If the <coord_assoc> were alpha or beta then the range would be an angular measurement representing the Field of View.
  • If the <coord_assoc> were r, x, y, or z then this element would contain the linear measurement in meters describing the extent of the TCF relative to the transducer reference system.
  • If the <coord_range> is variable then it will be measured with a sensor which is referenced using the "dependency_id_ref" attribute.
  • the number of ticks is typically an order of magnitude greater than the number of samples such that any perturbations or non-linearities in time or coordinates can be characterized.
  • This element indicates the starting value of the transport order of the sequence for either the column, row or plane coordinate. If the TCF is a three dimensional structure then three TCFs will be required: one for column, one for row, and one for plane.
  • the start and end numbers enable a simplified sequencing if only the scan direction changes. For example when row start and end numbers are 1 and 714 respectively then the scan is properly sequenced and requires no column sorting in the row, however if the start and end numbers were 714 and 1 respectively then the columns in the row need to be reversed. The same applies to the row and plane coordinates.
  • This element indicates the ending value of the transport order of the sequence for either the column, row or plane coordinate. If the TCF is a three dimensional structure then three TCFs will be required: one for column, one for row, and one for plane.
  • the start and end numbers enable a simplified sequencing if only the scan direction changes. For example when row start and end numbers are 1 and 714 respectively then the scan is properly sequenced and requires no column sorting in the row, however if the start and end numbers were 714 and 1 respectively then the columns in the row need to be reversed. The same applies to the row and plane coordinates.
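The simplified re-sequencing for a reversed column scan direction might look like this sketch; the function name and row-major sample layout are assumptions:

```python
def resequence_rows(samples, rows, cols, col_start, col_end):
    """Re-sequence a row-major TCF when only the column scan direction
    differs: if col_start > col_end (e.g. start=714, end=1) the columns
    of each row are reversed; otherwise the samples are left unchanged."""
    if col_start <= col_end:
        return list(samples)
    out = []
    for r in range(rows):
        row = samples[r * cols:(r + 1) * cols]
        out.extend(reversed(row))
    return out
```

The same pattern applies to the row and plane coordinates, with the reversal done over row blocks or plane blocks instead of columns.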
  • the elapsed time in seconds between the time of the first sample and the time of the last sample is the duration of the TCF.
  • the duration is divided up into ticks or a number of equal-time intervals.
  • the sample times recorded in the tTCF are given in tick offsets within this duration. If the TCF duration varies then it can be measured with a sensor.
  • the "dependency_id_ref" attribute points (indirect pointer) to the dependency id that points to the sensor which measures the duration.
  • This element contains the time value in seconds for the <tcf_duration> and the <tcf_period> elements. The accuracy of this time is characterized in the parent element.
  • This value contains the 2 sigma value of the deviation of the absolute measurement accuracy based on NIST standards. If the abs_accy varies as a function of TCF sample position then a TCF model will enumerate the accuracy for each sample.
  • the "tcfmod_dep_id_ref" attribute will reference the dependency which points to the TCF model that describes the abs_accy as a function of sample position within the TCF. If the accuracy varies and can be measured then a sensor will capture the absolute accuracy value. The sensor will be referenced by the "dependency_id_ref" attribute. If the measurement accuracy values vary as a function of position within the TCF then a TCF model will characterize the 2 sigma accuracy as a function of TCF position.
  • the <dependency> element will contain the function modifier which identifies how the sensor value modifies either the tcf_model values or the <abs_accy> element value.
  • this attribute points to the dependency id which points to the sensor measuring the absolute accuracy of the measurement and modifies either the <abs_accy> value or the values of the TCF model according to the <fcn_modify> element.
  • This element identifies the relative accuracy of the measurement when the two measurements reside in different TCFs.
  • the accuracy of the difference between two measurements is better the closer together they are.
  • the relative accuracy accumulates at the rate indicated by the value of this element.
  • the inter-TCF (between different TCFs) accuracy is in terms of how many TCFs will accumulate one unit of error. For example a 1E-6 indicates 1 unit, 2 sigma, of error in 1E6 TCFs when measurements are taken between two corresponding samples from two different TCFs of the same transducer (i.e. error/TCF).
  • the error between corresponding samples of different TCFs may be different than continually accumulating the intra-TCF error rate between them.
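At the stated rate, the error accumulated between corresponding samples of two TCFs is a simple product of the rate and the frame separation; the sketch below is illustrative only, and the function name is invented:

```python
def accumulated_inter_tcf_error(rel_accy_per_tcf, tcf_separation):
    """2-sigma relative error accumulated between corresponding samples
    of two TCFs separated by tcf_separation frames, at the stated
    inter-TCF rate (error units per TCF)."""
    return rel_accy_per_tcf * tcf_separation
```

With the 1E-6 example above, two TCFs separated by 1E6 frames accumulate one full unit of 2-sigma error.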
  • the value of this element represents the number of transducer ⁇ tcf_duration> ticks for one cycle of a TCF (TCF start - to - TCF start).
  • a sensor may be used to measure the rate at which the TCF samples.
  • the "dependency_id_ref" attribute points to the dependency id that points to the sensor which measures the period of the TCF frame rate. If the TCF sampling is a one shot sample or random sampling then this property shall be omitted.
  • This element contains the time value in seconds for the <tcf_duration> and the <tcf_period> elements.
  • the accuracy of this time is characterized in the parent element.
  • This value contains the 2 sigma value of the deviation of the absolute measurement accuracy based on NIST standards. If the abs_accy varies as a function of TCF sample position then a TCF model will enumerate the accuracy for each sample.
  • the "tcfmod_dep_id_ref" attribute will reference the dependency which points to the TCF model that describes the abs_accy as a function of sample position within the TCF. If the accuracy varies and can be measured then a sensor will capture the absolute accuracy value. The sensor will be referenced by the "dependency_id_ref" attribute. If the measurement accuracy values vary as a function of position within the TCF then a TCF model will characterize the 2 sigma accuracy as a function of TCF position.
  • the <dependency> element will contain the function modifier which identifies how the sensor value modifies either the tcf_model values or the <abs_accy> element value.
  • this attribute points (indirect pointer) to the dependency id which points to the TCF model of the absolute accuracy values for each of the corresponding TCF sample locations.
  • this attribute points to the dependency id which points to the sensor measuring the absolute accuracy of the measurement and modifies either the <abs_accy> value or the values of the TCF model according to the <fcn_modify> element.
  • This element identifies the relative accuracy of the measurement when the two measurements reside in different TCFs.
  • the accuracy of the difference between two measurements is better the closer together they are.
  • the relative accuracy accumulates at the rate indicated by the value of this element.
  • the inter-TCF (between different TCFs) accuracy is in terms of how many TCFs will accumulate one unit of error. For example a 1E-6 indicates 1 unit, 2 sigma, of error in 1E6 TCFs when measurements are taken between two corresponding samples from two different TCFs of the same transducer (i.e. error/TCF).
  • the error between corresponding samples of different TCFs may be different than continually accumulating the intra-TCF error rate between them.
  • the number of ticks is typically an order of magnitude greater than the number of samples such that any perturbations or non-linearities in time or coordinates can be characterized.
  • This value represents the relative accuracy of the frame period. This value shall be represented in the same fashion as the rel_accy of the sys_clk element. Knowing this is useful in determining the temporal accuracy of the start time of a TCF which is embedded within a large cluster.
  • dependency_ID_ref - (Optional) This attribute is used if the relative accuracy of the parent element value varies with time and is measured by another sensor.
  • the "dependency_id_ref" attribute is a reference to the dependency number which identifies the sensor that measures the <rel_accy> value.
  • <!-- allows designating a variable cal response ref (i.e. sensor) which can be used to adjust the gain on the sensor from which the cal sensor was pointed; the #PCDATA value allows fixed calibrated response data; a referenced dependency_id_ref points to a sensor which has realtime updates of illumination values --> <!ELEMENT meas_ref (#PCDATA)> <!-- measurement datum e.g. phase reference --> <!ATTLIST meas_ref dependency_id_ref IDREF #IMPLIED>
  • This element contains the set of elements which describe the encoding of each measurement.
  • the encoding element contains several child elements including: bits per measurement, significant bits per measurement, data type, units, minimum value, maximum value, and allowed values.
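Checking a decoded measurement against the minimum, maximum, and allowed-values constraints from the encoding element could be sketched as follows; the helper name and element-to-argument mapping are assumptions:

```python
def validate_measurement(value, min_value, max_value, allowed_values=None):
    """Check a decoded measurement against the encoding constraints the
    element describes: minimum value, maximum value, and an optional
    enumerated set of allowed values (which, when given, takes
    precedence over the range check)."""
    if allowed_values is not None:
        return value in allowed_values
    return min_value <= value <= max_value
```

A processor could apply this check to every sample of a measurement TCF before further correlation or fusion.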

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Arrangements For Transmission Of Measured Signals (AREA)

Abstract

The invention concerns a method and apparatus for correlating raw transducer data in a transducer system that communicates transducer data in a common format. Transducer data and the relations between transducers are characterized in a common format; transducer interdependencies are defined to model a system. Data from the various transducers is temporally correlated, then archived, updated, and exchanged without corruption.
PCT/US2006/046979 2006-12-11 2006-12-11 Procédé et appareil pour acquérir et traiter des données de transducteurs WO2008073081A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2006/046979 WO2008073081A1 (fr) 2006-12-11 2006-12-11 Procédé et appareil pour acquérir et traiter des données de transducteurs

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2006/046979 WO2008073081A1 (fr) 2006-12-11 2006-12-11 Procédé et appareil pour acquérir et traiter des données de transducteurs

Publications (1)

Publication Number Publication Date
WO2008073081A1 true WO2008073081A1 (fr) 2008-06-19

Family

ID=39511992

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2006/046979 WO2008073081A1 (fr) 2006-12-11 2006-12-11 Procédé et appareil pour acquérir et traiter des données de transducteurs

Country Status (1)

Country Link
WO (1) WO2008073081A1 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2053529A3 (fr) * 2007-10-19 2009-07-22 Panasonic Corporation Appareil de collecte d'informations de santé, appareil de gestion, système de collecte d'informations de santé et procédé pour collecter des informations de santé
US20170106530A1 (en) * 2015-10-16 2017-04-20 Hitachi, Ltd. Administration server, administration system, and administration method
JP2021524598A (ja) * 2018-05-18 2021-09-13 ゼンダー・インコーポレイテッド オブジェクトを検出するためのシステムおよび方法
CN113465643A (zh) * 2021-07-02 2021-10-01 济南轲盛自动化科技有限公司 拉线位移编码器的误差分析方法及系统
US20230123736A1 (en) * 2021-10-14 2023-04-20 Redzone Robotics, Inc. Data translation and interoperability

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4770184A (en) * 1985-12-17 1988-09-13 Washington Research Foundation Ultrasonic doppler diagnostic system using pattern recognition
US20030024975A1 (en) * 2001-07-18 2003-02-06 Rajasekharan Ajit V. System and method for authoring and providing information relevant to the physical world
US20030073461A1 (en) * 1999-10-12 2003-04-17 John Sinclair Wireless communication and control system
US20060064030A1 (en) * 1999-04-16 2006-03-23 Cosentino Daniel L System, method, and apparatus for combining information from an implanted device with information from a patient monitoring apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4770184A (en) * 1985-12-17 1988-09-13 Washington Research Foundation Ultrasonic doppler diagnostic system using pattern recognition
US20060064030A1 (en) * 1999-04-16 2006-03-23 Cosentino Daniel L System, method, and apparatus for combining information from an implanted device with information from a patient monitoring apparatus
US20030073461A1 (en) * 1999-10-12 2003-04-17 John Sinclair Wireless communication and control system
US20030024975A1 (en) * 2001-07-18 2003-02-06 Rajasekharan Ajit V. System and method for authoring and providing information relevant to the physical world

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2053529A3 (fr) * 2007-10-19 2009-07-22 Panasonic Corporation Appareil de collecte d'informations de santé, appareil de gestion, système de collecte d'informations de santé et procédé pour collecter des informations de santé
US20170106530A1 (en) * 2015-10-16 2017-04-20 Hitachi, Ltd. Administration server, administration system, and administration method
JP2021524598A (ja) * 2018-05-18 2021-09-13 ゼンダー・インコーポレイテッド オブジェクトを検出するためのシステムおよび方法
CN113465643A (zh) * 2021-07-02 2021-10-01 济南轲盛自动化科技有限公司 拉线位移编码器的误差分析方法及系统
CN113465643B (zh) * 2021-07-02 2024-01-30 济南轲盛自动化科技有限公司 拉线位移编码器的误差分析方法及系统
US20230123736A1 (en) * 2021-10-14 2023-04-20 Redzone Robotics, Inc. Data translation and interoperability

Similar Documents

Publication Publication Date Title
US20100235132A1 (en) Method and apparatus for acquiring and processing transducer data
US7061510B2 (en) Geo-referencing of aerial imagery using embedded image identifiers and cross-referenced data sets
US20100204974A1 (en) 2010-08-12 Lidar-Assisted Stereo Imager
US7042470B2 (en) Using embedded steganographic identifiers in segmented areas of geographic images and characteristics corresponding to imagery data derived from aerial platforms
CN1764828B (zh) 摄录物体声学图像的方法和装置
WO2008073081A1 (fr) Procédé et appareil pour acquérir et traiter des données de transducteurs
US20090089078A1 (en) Bundling of automated work flow
CN104284233A (zh) 视频和遥测数据的数据搜索、解析和同步
US9436708B2 (en) Method and system for providing a federated wide area motion imagery collection service
US20120063668A1 (en) Spatial accuracy assessment of digital mapping imagery
CN111556226A (zh) 一种相机系统
Conover et al. Using sensor web protocols for environmental data acquisition and management
US20220244072A1 (en) Sensor synchronization
CN114760077B (zh) 基于区块链的异常数据检测方法、装置、存储介质及网关
Jornet-Monteverde et al. Design and implementation of a wireless recorder system for seismic noise array measurements
Chen et al. An automatic SWILC classification and extraction for the AntSDI under a Sensor Web environment
Hare Standards-based collation tools for geospatial metadata in support of the planetary domain
Louys et al. Observation Data Model Core Components and its Implementation in the Table Access Protocol
Zhu et al. Long-periodic analysis of boresight misalignment of Ziyuan3-01 three-line camera
Groben et al. Neural virtual sensors for adaptive magnetic localization of autonomous dataloggers
Baumann On the analysis-readiness of spatio-temporal Earth data and suggestions for its enhancement
Baumann et al. Taming twisted cubes
Woolf et al. Standards-based data interoperability in the climate sciences
Dost et al. Seismic data formats, archival and exchange
Uppenkamp et al. Open-source-based architecture for layered sensing applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06845081

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06845081

Country of ref document: EP

Kind code of ref document: A1