CA2177611C - Gradient reflector location sensing system - Google Patents
Gradient reflector location sensing system
- Publication number
- CA2177611C CA2177611C CA002177611A CA2177611A CA2177611C CA 2177611 C CA2177611 C CA 2177611C CA 002177611 A CA002177611 A CA 002177611A CA 2177611 A CA2177611 A CA 2177611A CA 2177611 C CA2177611 C CA 2177611C
- Authority
- CA
- Canada
- Prior art keywords
- location
- helmet
- image
- gradient
- camera
- Prior art date
- Legal status
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/66—Tracking systems using electromagnetic waves other than radio waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/87—Combinations of systems using electromagnetic waves other than radio waves
- G01S17/875—Combinations of systems using electromagnetic waves other than radio waves for determining attitude
Landscapes
- Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Processing (AREA)
- Position Input By Displaying (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Image Analysis (AREA)
- Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)
Abstract
A multi-band high-speed videometric head tracking system (10) having a gradient reflector array (12) attached to a helmet (13) on a person's head, an ultraviolet light source (14) emitting light which is reflected by the reflector array to a video camera (11), which provides an image to a spot location estimator (29) providing accurate locations of the spots in the image representing the reflectors' reflected light, to image location accuracies within a pixel of the image. The location information from the spot location estimator goes to a track point 3-D location and helmet LOS estimator (33) that provides location and orientation information to a Kalman filter (15) that accurately estimates and predicts the helmet position at faster rates than would be possible for a process that uses just the image-based measurements of the helmet.
Description
GRADIENT REFLECTOR LOCATION SENSING SYSTEM
BACKGROUND OF THE INVENTION
The invention relates to helmet mounted display systems and particularly to helmet or head tracking systems.
Helmet-mounted display systems frequently have both gimbaled sensors and weapons that must follow the line of sight of the pilot and thus require accurate head tracking. There are various kinds of optical and magnetic helmet tracking systems. The magnetic tracking systems are presently the more accurate and robust systems.
Several disadvantages of magnetic head trackers are the need to map the metal in the cockpit of the craft and the limited update rates. Magnetic head tracking systems work in areas where the amount of metal structure is limited. Application of such systems to combat vehicles or tanks is impractical because the metal structure of the tank or combat vehicle makes magnetic head trackers untenable. Further, the confined, high-vibration environment makes existing optical systems equally unlikely to solve the combat vehicle or tank head tracking problem.
Relevant art includes European patent document 0 388 618 A3, published 26 September 1990. This document discloses a system for locating an object having reflectors, with radiation emitted toward the object. The reflectors on the object reflect the radiation, which is received by a video-optic sensor, and the resultant signals are processed for location information about the object.
SUMMARY OF THE INVENTION
The present invention is a high-speed videometric head-tracking system that incorporates a gradient reflector array, a rigid-body Kalman filter motion predictor algorithm processor, and a low-cost camera which senses source light reflected by the gradient reflector array. The reflectors provide intensity gradient information as well as position information. The subpixel spot location estimator algorithm uses the gradient information from the reflectors to increase the accuracy of the reflector locator algorithm. The Kalman filter estimates the location of the helmet and the helmet line of sight to a greater accuracy than is attainable by image measurements alone by accounting for the natural coupling between the translational and rotational motions of the head; it also reduces the noise in the system. The motion predictor algorithm extrapolates the helmet location and line of sight at a higher rate than the measurement (update) rate.
The present invention is both accurate and cost effective. It is not affected by heavy magnetic environments and does not use elaborate optics. The system estimates boresight to a marking with a digital signal-processing algorithm. Helmet rotation and translation are calculated using the traditional principles of perspective images.
In accordance with this invention there is provided a videometric tracking system comprising: a light source for emitting light; a constellation of at least three reflectors situated on a helmet for reflecting the light from said light source; and a video camera for detecting the light reflected from said constellation; characterized by:
a subpixel spot location estimator algorithmic processor implemented in a processor which utilizes video signals from said camera to establish image plane coordinates of said constellation to within a fraction of a pixel; a helmet three-dimensional (3-D) location and line of sight (LOS) calculation algorithmic processor which utilizes the image plane coordinates calculated by said subpixel spot location estimator algorithmic processor; and a filter implemented in a processor, connected to said 3-D location and LOS
calculation algorithmic processor for determining location of said constellation to a certain degree of precision and estimating the helmet 3-D location and body coordinate LOS
at a rate faster than an actual measurement rate, via a state propagation.
Figure 1 is a layout of the basic components of the tracker system.
Figures 2a and 2b illustrate a gradient reflector.
Figure 3 reveals a view of a constellation of gradient reflectors.
Figure 4 is a diagram of a rigid-body geometric configuration of the head, neck and torso.
Figure 5 shows the geometry of a head tracker box.
Figure 6 is a diagram showing the relative locations of the camera and vehicle coordinate systems.
Figure 7 is a block diagram of the subpixel spot location estimator algorithm and the track point 3-D location and helmet line of sight calculation.
Figure 8 shows hardware and the corresponding functions of the tracker.
Figure 9 shows another version of hardware and the corresponding functions of the tracker.
Figure 10 is a flow diagram of the Kalman filter for the tracker.
Videometric head tracker system 10 in Figure 1 has an ultraviolet (UV) sensitive camera 11 that senses reflections from gradient reflectors 12 which are situated on helmet 13. Light source 14 illuminates gradient reflectors 12 so that they become sources or spots 12 of light. The output of camera 11 goes to a subpixel spot location algorithmic estimator 29. Estimator 29 averages out the errors of the feature or location estimates to provide better than single pixel resolution, that is, "subpixel" spot location, of spots 12.
The output of estimator 29 is the estimated image plane 28 (of Figure 6) coordinates of the centroid of each gradient reflector 12 of the configuration of five gradient reflectors (which may be another number of reflectors such as three or four). The estimated image plane 28 coordinates of each gradient reflector 12 are transferred to a track point three-dimensional (3-D) location and helmet line-of-sight (LOS) calculation algorithmic processor 33. LOS is the orientation of helmet 13. The track point 3-D
location and helmet line-of-sight data, calculated using the equations of perspective, go to a rigid-body kinematics Kalman filter/predictor 15. Kalman filter/predictor 15 is a processor that provides representations of the general six-degree-of-freedom kinematics of helmet 13 and the pilot's head. The estimated states include location, velocity, angle, and angular rates of rigid-body (helmet) 13 as a function of external forces, specifically gravity and the forces exerted by the muscles in the pilot's head. System 10 relates the image measurement error for spot location and angle of orientation to the Kalman filter residuals using a unique spot design in which the intensity of the reflective signal varies across one axis of gradient reflector 12, as shown by graph 16 of Figure 2a.
Spots 12 have contrast gradients that peak in the center of the reflector structure in order to assure the most accurate derivation of spot location.
Subpixel spot location estimator 29 operates on analog serial image 28 data, in RS-170 format, from camera 11, via an analog-to-digital converter in a DATACUBE MAXVIDEO 20 processor. The digital signal from the converter goes to a DATACUBE card 56, model MAXVIDEO 20, for processing to determine threshold image 28 intensities at the 99.5 percent level in section 46 (see Figure 7) and to section 47 to establish the location of the strongest orthogonal edges 77 relative to roof edges 76 of the detected gradient patterns 16 of reflectors 12, using a Canny edge operator for image feature detection that detects edges of gradient patterns 16 or spots 12. Only one spot 12 is operated on at a time. The output from section 46 goes to a card made by an Australian company, Vision Systems International, Pty., Ltd.
The APA512 card 57 is utilized to calculate the minimum orthogonal distance squared error line for roof edges 76 of gradient patterns 16 of reflectors 12, in section 48. The outputs of sections 47 and 48 go to section 49 of INTEL i860 processor card 58, to identify a parallel pair of lines 78 that has the largest average gradient magnitude. An approximation line 80 is superimposed. The output of section 49 goes to a decision section 50, which passes the output on to section 51 if such a pair of parallel lines 78 is identified, or to section 52 if such a pair of parallel lines is not identified.
A pair of lines 79 (Figure 2b) may be found, but these would be a false detect because they would not be the pair of lines having the largest average gradient. If the pair of parallel lines is found, then section 51 establishes a gradient pattern center location as the image plane location midway between the intersection points of the minimum orthogonal distance line, as calculated in section 48 of card 57, and the orthogonal edges 77, as established in section 47 of card 56. If the pair of parallel lines is not found, then section 52 establishes a conventional gradient pattern center location as the centroid of a binarized gradient pattern image. An output from section 51 or 52 goes to section 53 to find the left-most gradient pattern and then to identify corner points via a leftward bearing search. In addition to section 49, sections 50, 51, 52 and 53 are incorporated in INTEL
i860 card 58. The output of section 53 goes to section 31 to calculate helmet coordinates and line of sight. Section 31 is in INTEL i860 card 58 as part of track point 3-D location and helmet line of sight algorithmic processor 33. Application-dependent parameters for the track point 3-D location and helmet line of sight algorithmic processor 33, including field of view, image plane dimensions, gradient pattern constellation dimensions and focal length, are output from section 54 and entered into section 31.
These application-dependent parameters are stored in the memory of card 58.
The six reference coordinates (three location coordinates plus three line of sight orientation coordinates) of the unique reference 38 of constellation 45, and the orientation of helmet 13, are calculated. The output of section 31 of device 33 goes to Kalman filter/predictor 15.
Filter/predictor 15 is mapped to a portion of INTEL i860 card 58.
3M 198 high-reflectance or retroreflectance tape may be used for retroreflectors 12. The retroreflectance tape absorbs in the ultraviolet range due to a polymer overcoat.
Applying varying thicknesses of polymer overcoat to the high reflectance materials results in a symmetrical gradient reflector whose center has no polymer overcoat and thus has the highest reflectance. The pattern is printed on a cellophane-like material and then is bonded to a resin circuit board. The reflectance is supposed to decrease linearly from the center by applying appropriately thicker absorbing polymer overcoat across reflector 12. Actually, the curve showing the change of reflectance along one dimension of reflector 12 appears to have a quadratic shape 16 as indicated in Figure 2a.
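Although the description does not give a closed-form profile, the measured quadratic falloff can be sketched, for illustration only, as

$$\rho(u) \approx \rho_0\left[1 - \left(\frac{u}{w}\right)^2\right], \qquad |u| \le w,$$

where u is the distance from the reflector center along the graded axis, rho_0 is the peak (uncoated) reflectance at the center, and w is the reflector half-width. The symbols rho_0 and w are illustrative assumptions, not values from this description; they merely restate the shape 16 of Figure 2a.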
The location of the gradient patterns of constellation 45 of five spots or reflectors 12, shown in Figure 3, is used for calculation of the helmet 13 three-dimensional (3-D) location and orientation. The estimates of the 3-D location and coordinate frame orientation are transferred to a rigid-body Kalman filter/predictor 15 to obtain smooth, minimum mean-square-error estimates of parameter data or, in other words, temporal- or time-averaged estimates of parameter data. The gradient pattern of center reflector 12 is orthogonal to the gradient patterns of the other four reflectors 12.
Camera 11 may be one of various kinds of cameras. System 10 is based on a 60-Hz non-interlaced CCD camera that is sensitive to the ultraviolet (UV) (250 nanometers (nm)) and has a standard RS-170 video output. The silicon lens of camera 11 is replaced with a quartz lens for UV detection. A TEXAS INSTRUMENTS INC.
(TI) MC780 PHU camera is a preferred camera 11 in system 10. The MC-780 PHU
has 755 x 488 active photosites with a resolution of 500 x 350 TV lines. The manufacturer, TI, substituted the standard CCD glass window with a quartz (fused SiO2) window to provide the necessary ultraviolet sensitivity. TI camera 11 is a solid-state monochrome television camera that employs a TI TC245 frame-transfer charge-coupled image sensor.
TI MC780 PHU camera 11 has a 25% quantum efficiency (QE) which extends down to 220 nm. The TI CCD is also rated for a 25% QE at 365 nm, which makes it useful at the latter wave length as well. Another preferred and similar camera is the remote head, MC-780 PHR, having the same specifications as the MC-780 PHU, but with smaller dimensions.
Light source 14 provides ultraviolet illumination for system 10. Both 254 nm and 365 nm sources have been examined. The 365 nm source has several advantages over the shorter wave length source. First, the 365 nm source has a potentially safer wave length for human interaction (i.e., eye and skin exposure), and second, it provides less stringent requirements for the optics in that the lens designs are easier to achieve and are less expensive than the lens systems for the 254 nm source. Even though the 254 nm wave length system has the advantage of a lower noise background, experiments show the signal-to-noise ratio benefit to be minimal in view of the additional features provided by the 365 nm wave length system. The 365 nm lamp of choice for source 14 is an ultraviolet quartz pencil lamp having a length of 2-1/8 inches with a 3/8 inch outside diameter and a weight of 2 ounces. The lamp irradiates microwatts per square centimeter at 6 inches and provides enough light to distinguish the targets on a monochrome monitor. A typical lamp is rated for 5000 hours of operation. The fused quartz envelopes have excellent ultraviolet transmission properties, with the isolated spectral lines being strong and well separated.
Such a lamp is also cool burning.
The diffused and specular reflectance of a standard combat vehicle crew (CVC) helmet was measured with a CARP spectrophotometer with an integrating sphere attachment. The average reflectance is 4.4% in the ultraviolet (200-400 nm) waveband.
A highly diffuse reflector or retroreflector with a 60% reflectivity will give a signal-to-noise ratio of 60/4.4, or 13.6. A signal-to-noise ratio of 10 is sufficient for the spot positioning algorithms of subpixel spot location estimator 29.
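Stated as arithmetic on the reflectance figures above (no new measurement data), the contrast available to estimator 29 is

$$\mathrm{SNR} \approx \frac{\rho_{reflector}}{\rho_{helmet}} = \frac{60\%}{4.4\%} \approx 13.6,$$

comfortably above the stated requirement of 10.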
The following table shows f numbers (f# vs. aperture opening in mm) calculated for TI MC780 PHU (755 x 488) camera 11, which has a diagonal 8-mm active area, as a function of range (R) in cm, focal length (fL) in mm, and field of view angle (theta) in degrees.
R (cm)   Theta (deg)   fL (mm)   f#(10)   f#(15)   f#(20)   f#(25)   f#(30)
15.2     90.19          3.98      0.40     0.27     0.20     0.16     0.13
20.3     73.84          5.32      0.53     0.35     0.27     0.21     0.18
25.4     61.97          6.65      0.67     0.44     0.33     0.27     0.22
30.5     53.13          7.99      0.80     0.53     0.40     0.32     0.27
35.6     46.38          9.33      0.93     0.62     0.47     0.37     0.31
40.6     41.18         10.64      1.06     0.71     0.53     0.43     0.35
45.7     36.91         11.97      1.20     0.80     0.60     0.48     0.40

The body-axis or coordinate system establishing the helmet 13 location and orientation is located at a point 17 between the pilot's shoulders at the base of the neck, as shown in Figure 4. The head and neck, to a first-order approximation, are characterized as a pendulum 20 that is attached to the torso at the base of the neck by a pivot joint 17. Figure 4 is the rigid-body geometric configuration of the head, neck and torso used for trajectory file calculation. Since the head and neck are taken to be a rigid body, if the location and line-of-sight (LOS) orientation of the body-axis coordinate system are known, the coordinates and orientation of the thin-film spots 12 applied to helmet 13 are established uniquely. The first-order approximation for the dynamics of torso 18 is that torso 18 is a rigid body attached to the hips by a pivot joint 19, and that the hips are stationary relative to the vehicle coordinate frame because of the seat safety restraints. The radial dimensions of the pendulums 18 and 20 used by Kalman filter/predictor 15 to establish trajectories/orientations from image feature location measurements are obtained from tables of average human frame dimensions. For males, the average height of the shoulders above the hips is 59 centimeters or 1.94 feet, and the average height of eye level above the hips is 79 centimeters or 2.59 feet. Helmet 13 is approximated as a sphere of a radius of 15 centimeters (5.9 inches) with its center 55 located at the extreme end of the second pendulum 20, which represents the head and the neck, respectively.
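The focal lengths and view angles in the table above are consistent with simple pinhole-camera geometry. As a sketch only, assuming the field of view is sized to span the 12-inch (30.5 cm) head tracker box dimension (half-width 15.24 cm) on the camera's 8-mm sensor diagonal:

$$\tan\!\left(\frac{\theta}{2}\right) = \frac{15.24\ \mathrm{cm}}{R}, \qquad f_L = \frac{8\ \mathrm{mm}/2}{\tan(\theta/2)}, \qquad f/\# = \frac{f_L}{\text{aperture diameter}}.$$

For example, R = 30.5 cm gives theta of about 53.1 degrees, fL of about 8.0 mm, and f/# of about 0.80 for a 10-mm aperture, matching that row of the table. The 12-inch field assumption is an inference from the head tracker box dimensions, not an explicit statement of the design.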
The geometry of a head tracker box 26 is shown in Figure 5. Dimensions 34 are 12 inches and dimensions 36 are 6 inches. Camera 11 is located so that the optical axis 40 intersects the point [0, 0, 0]T at location 32, and point [0, 0.23, -0.5]T
feet at location 38. The camera 11 depression angle 44 is -25°.
The relative location of the camera 11 coordinate system (x_c, y_c, z_c) at origin 43 with respect to the coordinate system (x_r, y_r, z_r) at origin 32 of the vehicle is shown diagrammatically in Figure 6. Figure 6 depicts the nominal imaging configuration of camera 11. Ultraviolet camera 11 is oriented so that it views center 38 of head tracker box 26 of Figure 5. Origin 43 of the camera 11 coordinate system is located at the center of image plane 28, the x_c and z_c axes are square to the orthogonal edges of image plane 28, and the y_c axis is parallel to the optical axis 40. Distance 41 is the focal length. Origin 43 of the camera 11 coordinate reference frame is located at a distance 42 from origin 32 of the reference coordinate system (x_r, y_r, z_r), where distance 42 is a user-specified parameter. Distance 60 is the length between origin 32 and constellation 45 reference point 38. The image coordinates for each vertex of gradient reflectors 12 are calculated using the equations of perspective.
The image plane 28 coordinates $\mathbf{r}_i' = [x_i\;\; z_i]^T$ of a three-dimensional location $\mathbf{r}' = [x_c\;\; y_c\;\; z_c]^T$ in the camera 11 coordinate system are obtained with the following equation:

$$\mathbf{r}_i' = \begin{bmatrix} x_i \\ z_i \end{bmatrix} = \begin{bmatrix} \dfrac{x_c F}{y_c - F} \\[1ex] \dfrac{-z_c F}{y_c - F} \end{bmatrix}$$

where $x_i$ is the x coordinate, in the camera 11 coordinate system, of the image of the three-dimensional coordinate $\mathbf{r}'$, $z_i$ is the z coordinate, in the camera 11 coordinate system, of the image of the three-dimensional coordinate $\mathbf{r}'$, $x_c$ is the x coordinate of the camera 11 coordinate system of the three-dimensional coordinate, $y_c$ is the y coordinate of the camera 11 coordinate system of the three-dimensional coordinate, $z_c$ is the z coordinate of the camera 11 coordinate system of the three-dimensional coordinate, $F$ is the focal length of camera 11, and the transpose of a vector is denoted by the superscript T.
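As a purely illustrative numerical check of the relation above (the focal length and coordinates here are assumed values, not taken from this description): with F = 8 mm and a reflector vertex at (x_c, y_c, z_c) = (5 cm, 40 cm, -3 cm),

$$x_i = \frac{(50\ \mathrm{mm})(8\ \mathrm{mm})}{400\ \mathrm{mm} - 8\ \mathrm{mm}} \approx 1.02\ \mathrm{mm}, \qquad z_i = \frac{-(-30\ \mathrm{mm})(8\ \mathrm{mm})}{400\ \mathrm{mm} - 8\ \mathrm{mm}} \approx 0.61\ \mathrm{mm}$$

on image plane 28.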
Center 43 of detector array 45 is located with the midpoint of the array in the x and z directions coincident with the location where the optical axis 40 intersects image plane 28. The edges of image plane 28 detector array are square with camera 11 coordinates (xc, yc, zc).
Figure 7 is a block diagram of subpixel spot location algorithmic estimator 29 and track point 3-D location and helmet line of sight calculation 33. The function of estimator 29 is applied separately to each spot 12 of constellation 45. The base line approach uses the uniquely identifiable attributes of gradient patterns 16 to estimate their locations with greater accuracy and reliability than is possible with a traditional centroid estimation approach.
As shown in the feature data calculation stage 30 of Figure 7, the images are transformed into feature (iconic) data by two generic, that is, customary or traditional, feature detection stages: thresholding 46 and edge detection 47. The purpose of the thresholding operation 46 is to isolate the gradient pattern from background noise and to obtain a first-order estimate for the orientation of the ridge of gradient pattern 16.
Operation 46 identifies the 0.5 percent of the scene's light intensity that is from each reflector 12, at a detection level that excludes the remaining 99.5 percent of the light over the gray scale of the image. The Canny edge operator 47 (an estimator of image gradient) is used to detect the abrupt orthogonal edges 77 to roof edge 76 of each gradient pattern 16. The aggregation of features obtained from these stages can be used to calculate the centroid of gradient pattern 16 with greater accuracy than might be possible by mapping the image of gradient pattern 16 into a binary image and conventionally calculating the centroid of the thus obtained region for the following reasons. Because the imaging system is required to have a wide field of view to view the entire head tracker box 26, the effects of perspective distort the images of gradient patterns 16. When an image of a square region is obtained with a perspective imaging system, the region is distorted into the shape of a parallelogram. Centroid estimates for distorted gradient patterns are not guaranteed to estimate the true centroid due to pixel quantization and noise. Thus, an alternate approach that identifies unique and high signal-to-noise ratio attributes of gradient patterns 16 is required.
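One way to state the thresholding rule (a formulation sketch, not text quoted from this description): with gray-level histogram h(g) over the N pixels of image 28, operation 46 selects the smallest threshold T such that

$$\sum_{g > T} h(g) \le 0.005\,N,$$

so that only the brightest 0.5 percent of the scene, the reflector returns, remains above threshold.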
The next stage 48 of transforming the image into features is to calculate the image plane 28 orientation of the minimum squared orthogonal distance metric line for the binarized images of gradient patterns 16. The orthogonal distance metric is defined as the perpendicular distance from each quantized image location of the digitized or binarized image to the minimum error line, or minimal orthogonal distance error line.
This approximation line 80 on the image provides the location and slope (i.e., orientation) of a detected reflector 12. The slope and reference coordinates for this line are determined by calculating the scatter matrix (a squared-error matrix describing the spatial dispersion of the binary region of the roof or orthogonal edge of gradient pattern 16) of the binarized region and establishing the orientation of the line as that eigenvector (a mathematical characterization of the statistical behavior of the spatial or binarized region of the matrix) of the scatter matrix that corresponds to the largest eigenvalue (the blending percentage of the eigenvector for the principal component decomposition of the matrix). The classic relation is (M - λI)v = 0, where M
is the matrix, I is the identity matrix, λ is the eigenvalue and v is the eigenvector. The reference coordinate of the line is always the centroid of the binarized region. This approach to estimation of gradient reflector ridge orientation is used because the derived image plane orientation is invariant to location (the x and z coordinates of the quantized image region), whereas approximations calculated by linear regression are not.
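In two dimensions this ridge-line fit can be written explicitly (a standard formulation consistent with the description above, not additional disclosure). With binarized pixel locations p_k = [x_k, z_k]^T and centroid p-bar, the scatter matrix and its principal eigenvalue are

$$S = \sum_k (p_k - \bar{p})(p_k - \bar{p})^T = \begin{bmatrix} s_{xx} & s_{xz} \\ s_{xz} & s_{zz} \end{bmatrix}, \qquad \lambda_{max} = \tfrac{1}{2}\Big[(s_{xx} + s_{zz}) + \sqrt{(s_{xx} - s_{zz})^2 + 4\,s_{xz}^2}\Big],$$

and the fitted line passes through the centroid along the eigenvector satisfying $(S - \lambda_{max} I)v = 0$, for example $v \propto [s_{xz},\; \lambda_{max} - s_{xx}]^T$ when $s_{xz} \ne 0$.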
The next step 49 of feature detection is to establish which lines of the array of lines, for instance, pairs 78 and 79 of lines, cued by the Canny edge operator 47 are the parallel pair of abrupt edges of gradient patterns 16 that are orthogonal to roof edge 76.
The criteria used to distinguish these lines are restraints on angular offset from true parallel, restraints on angular offset from square with roof edge 76, and restraints on length differences between the pairs of lines. Border edges 77 have the largest edge strength so as to eliminate false detects.
The next step 50 is to determine whether a pair of parallel lines can be found. If a pair of parallel lines, such as pair 78 (since pair 79 is a false detect), is found which is compatible with the specific restraints applied, then, in step 51, the location of the intersection of the roof edge approximation line 80 with the pair of lines 78 is calculated. The gradient 16 reflector 12 centroid coordinate is obtained as the image plane location midway between the intersection points, along roof edge 76 approximation line 80. If a pair of parallel lines is not found, then, in step 52, the centroid of the binarized image component for gradient pattern 16 is used, using conventional calculation techniques which use none of the gradient 16 information developed above.
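A minimal sketch of the step-51 geometry, written in the same Fortran style as the program listing later in this description (this routine and its variable names are illustrative assumptions, not part of that listing): the roof-edge approximation line 80 is parameterized as (x, z) = (x0, z0) + t*(dx, dz), and each orthogonal border edge 77 as (x, z) = (ex, ez) + s*(ux, uz).

c     roofpt returns the roof-line parameter t at the intersection
c     with one border edge; the caller guarantees that the two lines
c     are not parallel, so the denominator is nonzero.
      real function roofpt (x0, z0, dx, dz, ex, ez, ux, uz)
      real x0, z0, dx, dz, ex, ez, ux, uz, den
      den = dx*uz - dz*ux
      roofpt = ((ex - x0)*uz - (ez - z0)*ux)/den
      return
      end

c     spotcen places the spot center midway between the two
c     intersection parameters t1 and t2 along the roof-edge line,
c     as step 51 describes.
      subroutine spotcen (x0, z0, dx, dz, t1, t2, xcen, zcen)
      real x0, z0, dx, dz, t1, t2, xcen, zcen, tmid
      tmid = 0.5*(t1 + t2)
      xcen = x0 + tmid*dx
      zcen = z0 + tmid*dz
      return
      end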
In step 53, the mapping from image coordinates of the five gradient patterns to the pyramid configuration that the algorithms use is obtained by scanning the coordinates of the detected gradient patterns for that gradient pattern whose x coordinate is the minimum of the five coordinates. Because the five-gradient-pattern configuration is restrained to be oriented at an angle of 0 to 45 degrees with respect to image plane 28, the gradient pattern 81 cued as being furthest left must be a corner point of the base plane of constellation 45 (in Figure 3). The corner point is used to reference the current frame to a previous frame to match reflector 12 (spot) locations of the respective frames of images 28. The remaining three corner points are discriminated by executing a leftward bearing search from the first corner point. The gradient pattern not identified as a corner point is the gradient pattern 82 at the pyramid's peak; the pyramid corner points are defined by gradient patterns 16. The particular gradient pattern 16 designated as the furthest left pattern 81 may be a different reflector 12 in another frame or time.
Step 31 incorporates the calculation of the helmet 13 three-dimensional coordinates and line of sight, with information from step 53 and parameters from source 54. The estimated helmet 13 coordinate system translation and LOS are output as a serial list of floating-point numbers [x_r, y_r, z_r, psi, theta, phi] at each frame time, where r = [x_r, y_r, z_r], and psi, theta and phi represent the reference coordinate frame coordinates and the yaw, pitch, and roll, respectively, of the body coordinate system relative to the reference coordinate system.
The single-frame-derived positions and angles are filtered to reduce the effects of random measurement error and to allow extrapolation of the positions and angles between video frames by the Kalman filter/predictor 15. These extrapolated quantities are used to increase the apparent output data rate from the 60 Hz video rate to approximately 400 Hz. The extrapolated quantities also allow the image plane position of a gradient reflector 12 to be accurately predicted on the basis of past video frames, reducing the image processing throughput requirements.
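The rate increase follows directly from the propagation step used by the filter (the 1/420 second step appears later in the filter description and listing):

$$\frac{1/60\ \mathrm{s}}{1/420\ \mathrm{s}} = 7 \quad\Longrightarrow\quad 60\ \mathrm{Hz} \times 7 = 420\ \mathrm{Hz},$$

i.e., seven extrapolated outputs per video frame, consistent with the approximately 400 Hz apparent output rate stated here.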
Figure 8 shows the invention and its functions in conjunction with corresponding hardware. Item 56 is a DATACUBE MAXVIDEO 20. Function 47 contains Gaussian window smoothing 61, Sobel operator 62, histogram thresholding 63 and feature extraction 64. Gaussian window smoothing blurs the image to eliminate the noise; the Sobel operator is an edge detector; histogram thresholding determines the 99.5 percentile of the total distribution of gray levels of the image; and the Sobel operator, histogram thresholding and feature extraction constitute a Canny edge operator 82. Function 46 contains histogram thresholding. Item 57, denoted APA512₀ in Figure 8, where subscripts are used to distinguish among multiple boards of the same type, is a Vision Systems International Pty., Ltd. APA 512 which contains feature extraction 48, which picks out the roof edge of spot 12. Feature extraction 64 is carried out by item 65, APA512₁. The outputs of items 57 and 65 go into device 58, which encompasses block 66, which involves the calculation of minimal orthogonal distance line coefficients from information out of device 57, and function 67 for calculation of minimal orthogonal distance line coefficients from information out of device 65.
Feature extraction 64 provides the orthogonal edges of reflectors 12.
Functions 66 and 67 transform image coordinates of binarized regions of features to an edge equation or line representation, or map features to a linear edge representation. The outputs of function 66 and 67 go into calculate bore sight portion 51 and portion 52, respectively.
Functions 51 and 52 provide the angular coordinates of the center of each spot 12.
There are five blocks 58, each of which is mapped to a separate i860, where individual hardware components are distinguished by subscripts which designate processing for each spot in Figure 8, for the five spots of constellation 45 of reflectors 12. The outputs go into function 33 for calculating helmet 13 location and orientation and line of sight of the helmet. The resulting output of item 33 goes to Kalman filter 15. Block 33 and Kalman filter 15 are executed by a dedicated i860. The output of the system 10 is from Kalman filter 15 to provide inputs to a fire control computer or to imaging system servos.
Figure 9 is similar to Figure 8 except that calculation of minimal orthogonal distance line coefficients 66, 67 and calculation of boresight 51, 52 are performed by the same hardware for each spot 12. Also, function 33 for calculating helmet location and line of sight, and Kalman filter 15, are on portion 68 of device MVME 147, which is a MOTOROLA single-board 68030 computer.
Figure 10 is a functional flow diagram of the Kalman filter 15 process.
Triggering signal 69 from track point 3-D location and helmet LOS 33 initializes Kalman filter 15 at block 70. The output goes to state propagation block 71 for determining the time state that the system is in, or for receiving a time increment signal from block 72. The output of block 71 goes to decision symbol 73, which determines whether a new video frame is present. If not, t of block 72 is incremented to the next time state with an output to block 71, the states are propagated forward, and $\hat{x}_{k|k-1}$ is output from block 71. If the answer is yes, then a signal goes to the covariance propagation block 74, which has an output onto measurement processing block 75.
Note that state propagation block 71 outputs position estimates $\hat{x}_k$, which are the output of Kalman filter 15; however, $\hat{x}_k = \hat{x}_{k|k}$ when the estimate is based on a measurement, and $\hat{x}_k = \hat{x}_{k|k-1}$ when the estimate is based on an estimate. The other input to block 75 includes measurement signals $y_{mk}$ from track point 3-D location and helmet LOS 33.
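In standard Kalman filter notation, and consistent with the propagation and measurement equations given later in this description, the three processing blocks just described can be summarized as follows (a restatement for clarity, not additional disclosure):

$$\text{Block 71 (state propagation, each 1/420 second):}\quad \hat{x}_{k|k-1} = \Phi\,\hat{x}_{k-1|k-1}$$

$$\text{Block 74 (covariance propagation, each video frame):}\quad P_{k|k-1} = \Phi\,P_{k-1|k-1}\,\Phi^T + Q$$

$$\text{Block 75 (measurement update):}\quad K_k = P_{k|k-1}H^T\,(H\,P_{k|k-1}\,H^T + R)^{-1},$$

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\,(y_{mk} - H\,\hat{x}_{k|k-1}), \qquad P_{k|k} = P_{k|k-1} - K_k\,H\,P_{k|k-1}.$$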
In Figure 10, subscript "k" is the indication of a time state for the video frame.
Subscript "k~k" indicates a revised estimate as a result of current measurements. The capital letters are matrices, the small letters are vectors except for the t's of block 72, and the superscripts and the subscripts. Superscript "-1" indicates matrix inverse and superscript "T" indicates transposed matrix. Capital K is a Kalman filter gain factor.
The carat "~" over the "x" or "y" symbol means that the x or y is an estimate.
"H" is an observation matrix. Underlined "~" or "y" means the term is a multidimensional vector such as a 3-D spatial coordinate. Subscript "HI" means high; "LO" means low.
As in any filtering application, selection of the filter bandwidth represents a tradeoff between too little filtering, which does not adequately attenuate random measurement errors, and too much filtering, which causes excessive lag in the estimated quantities. For system 10 of camera 11, the filter design parameters are chosen so as to introduce no more than 0.1 second of lag into the filtered outputs. Given that six video frames are collected every 0.1 second, the effect of the filtering on measurement errors that vary randomly from frame to frame is to reduce them by a factor of about the square root of six, or about 2.4. Errors that are correlated from frame to frame will see less attenuation.
The following table shows head centroid position and Euler angle errors (rms) for the unfiltered and filtered approaches.
Two approaches to signal filtering were evaluated, which are denoted the "decoupled" and "coupled" approaches. In the decoupled approach, a two-state Kalman estimator of the random velocity type was used for each of the six inputs (x-y-z positions and the three Euler angles, roll-pitch-yaw). In the coupled-filter approach, the algorithm accounts for the natural coupling between translational and rotational motions of the head. In other words, this model attributes a large portion of the translational motion to a rotation about a point 17 (Figure 4) representing the base of the neck. The dynamical model used to represent this filter design comprises a process model and a measurement model. The process model, as a basis of Kalman filter 15, defines new data (estimates) as a function of estimates, to establish the state transition matrix Φ. The measurement model, as a basis of Kalman filter 15, defines new data (estimates) as a function of measurements, to establish the observation matrix H. The following lays out the models.
Process Model:

$$\dot{x} = v_x + \{0.2[\cos\psi\sin\theta\cos\phi - \sin\psi\sin\phi] + 0.13\cos\psi\cos\theta\}\,w_\psi + \{0.2\sin\psi\cos\theta\cos\phi - 0.13\sin\psi\sin\theta\}\,w_\theta + 0.2[\cos\psi\cos\phi - \sin\psi\sin\theta\sin\phi]\,w_\phi$$

$$\dot{v}_x = \eta_{vx}, \qquad E[\eta_{vx}(t)\,\eta_{vx}(\tau)] = q_v\,\delta(t - \tau)$$

$$\dot{y} = v_y + \{0.2[\sin\psi\sin\theta\cos\phi + \cos\psi\sin\phi] + 0.13\sin\psi\cos\theta\}\,w_\psi + \{0.13\cos\psi\sin\theta - 0.2\cos\psi\cos\theta\cos\phi\}\,w_\theta + 0.2[\cos\psi\sin\theta\sin\phi + \sin\psi\cos\phi]\,w_\phi$$

$$\dot{v}_y = \eta_{vy}, \qquad E[\eta_{vy}(t)\,\eta_{vy}(\tau)] = q_v\,\delta(t - \tau)$$

$$\dot{z} = v_z - [0.2\sin\theta\cos\phi + 0.13\cos\theta]\,w_\theta - 0.2\cos\theta\sin\phi\,w_\phi$$

$$\dot{v}_z = \eta_{vz}, \qquad E[\eta_{vz}(t)\,\eta_{vz}(\tau)] = q_v\,\delta(t - \tau)$$

$$\dot{\psi} = w_\psi, \qquad \dot{w}_\psi = \eta_{w\psi}, \qquad E[\eta_{w\psi}(t)\,\eta_{w\psi}(\tau)] = q_w\,\delta(t - \tau)$$

$$\dot{\theta} = w_\theta, \qquad \dot{w}_\theta = \eta_{w\theta}, \qquad E[\eta_{w\theta}(t)\,\eta_{w\theta}(\tau)] = q_w\,\delta(t - \tau)$$

$$\dot{\phi} = w_\phi, \qquad \dot{w}_\phi = \eta_{w\phi}, \qquad E[\eta_{w\phi}(t)\,\eta_{w\phi}(\tau)] = q_w\,\delta(t - \tau)$$

Measurement Model:

$$x_m = x + \eta_x, \qquad y_m = y + \eta_y, \qquad z_m = z + \eta_z, \qquad E[\eta_x^2] = E[\eta_y^2] = E[\eta_z^2] = r_{xyz}$$

$$\psi_m = \psi + \eta_\psi, \qquad \theta_m = \theta + \eta_\theta, \qquad \phi_m = \phi + \eta_\phi, \qquad E[\eta_\psi^2] = E[\eta_\theta^2] = E[\eta_\phi^2] = r_{\psi\theta\phi}$$
Where the state vector is:
$$x = [x_r\;\; y_r\;\; z_r\;\; \phi\;\; \theta\;\; \psi\;\; v_x\;\; v_y\;\; v_z\;\; w_\phi\;\; w_\theta\;\; w_\psi]^T$$
x_r is the x coordinate of the reference point of the configuration of five gradient reflectors, measured relative to reference or vehicle coordinates;
y_r is the y coordinate of the reference point, measured relative to reference coordinates;
z_r is the z coordinate of the reference point, measured relative to reference coordinates;
φ is the roll of the body coordinate system, measured relative to reference coordinates;
θ is the pitch of the body coordinate system, measured relative to reference coordinates;
ψ is the yaw of the body coordinate system, measured relative to reference coordinates;
v_x is the x component of the velocity of the base 17 of the neck;
v_y is the y component of the velocity of the base 17 of the neck;
v_z is the z component of the velocity of the base 17 of the neck;
w_φ is the angular roll rate of the body coordinate system, measured relative to reference coordinates;
w_θ is the angular pitch rate of the body coordinate system, measured relative to reference coordinates; and
w_ψ is the angular yaw rate of the body coordinate system, measured relative to reference coordinates.
New estimates of the state vector $\hat{x}_{k+1}$ are calculated from the current time estimate $\hat{x}_k$ with the following state propagation equation:

$$\hat{x}_{k+1} = \Phi_{HI}\,\hat{x}_k$$

The covariance matrix $P_{k+1}$ is calculated with the covariance propagation equation:

$$P_{k+1} = \Phi_{LO}\,P_k\,\Phi_{LO}^T + Q$$

where $\Phi_{HI}$ and $\Phi_{LO}$ are 12 x 12 state transition matrices of the same form, an identity matrix with the propagation time step coupling each position and angle state to its corresponding rate:

$$\Phi = \begin{bmatrix} I_6 & \Delta t\,I_6 \\ 0 & I_6 \end{bmatrix}, \qquad \Delta t = 1/60 \text{ second for } \Phi_{LO}, \qquad \Delta t = 1/420 \text{ second for } \Phi_{HI},$$

$$Q = \mathrm{diag}\{0,\,0,\,0,\,0,\,0,\,0,\,q_v,\,q_v,\,q_v,\,q_w,\,q_w,\,q_w\}\;\Delta t,$$

$q_v$ = velocity random walk parameter = $[5\,(\mathrm{cm/sec})/\sqrt{\mathrm{sec}}\,]^2$, $q_w$ = angular rate random walk parameter = $[1\,(\mathrm{rad/sec})/\sqrt{\mathrm{sec}}\,]^2$, $\Delta t$ = propagation time step, $\hat{x}_0 = [0,\,0,\,\ldots,\,0]^T$,

$$P_0 = \mathrm{diag}\{\sigma_{pos}^2,\,\sigma_{pos}^2,\,\sigma_{pos}^2,\,\sigma_{angle}^2,\,\sigma_{angle}^2,\,\sigma_{angle}^2,\,\sigma_{vel}^2,\,\sigma_{vel}^2,\,\sigma_{vel}^2,\,\sigma_{rate}^2,\,\sigma_{rate}^2,\,\sigma_{rate}^2\},$$

$\sigma_{pos}^2 = (10\,\mathrm{cm})^2$, $\sigma_{angle}^2 = (1\,\mathrm{rad})^2$, $\sigma_{vel}^2 = (10\,\mathrm{cm/sec})^2$, $\sigma_{rate}^2 = (1\,\mathrm{rad/sec})^2$, $r_{xyz} = (0.05\,\mathrm{cm})^2$, $r_{\psi\theta\phi} = (0.005\,\mathrm{rad})^2$, and $R = \mathrm{diag}\{r_{xyz},\,r_{xyz},\,r_{xyz},\,r_{\psi\theta\phi},\,r_{\psi\theta\phi},\,r_{\psi\theta\phi}\}$.

To evaluate the performance of the two filter design approaches, we used a sample trajectory simulating repeated head transitions between a head-down orientation
The following table shows a summary of the root-mean-square estimation error results obtained with the coupled and decoupled filter designs. Since the coupled filter design depends on the knowledge of the distance between the base of the neck (pivot point 17) and the reference point 38 on the head, the sensitivity of mismatch between the true lever arm and the filter's assumed value was also investigated. This table also lists the one-sigma errors on the position and angle measurements provided as input to the filters. Several observations can be made based on the data in the table.
First, the coupled filter substantially outperforms the decoupled design, particularly with regard to the translational velocity estimates. Second, the coupled filter is fairly sensitive to error in the assumed base-of neck-to-reference-point lever arm 20 length; however, even with 1 S a 20 percent mismatch between the assumed and true values, performance was noticeably better than the decoupled design. Third, expected accuracies are better than 0.5 cm in position and 0.5 degree in angle (one-sigma errors).
y y z ~ a Case (cm) (cm/sec)(cm) (cmJsec)(cm) (deg/sec) Unfiltered 0.45 -- 0.05 -- 0..36 --(inputs) Decoapled 0.44 4.6 0.05 1.9 0.42 12.5 filters Coupled filter0.44 1.9 0.04 0.7 0.38 10.8 Coupled, 10% 0 2 0 9 0 11 mismatch 44 4 04 0 38 0 in neck lever_ . . . . .
arm Coupled, 20% 0 1 0.04 1 0 11.2 mismatch 44 3 1 39 in neck lever, . . .
arm The following listing is a program for Kalman filter 1 S. Part I shows the program data allocation or storage specification. Several instances of terms include xv(ns) as a state vector, p(ns,ns) as a current estimate of covariance of Kalman filter states, q(ns) as a measurement variance, h(ns) as an observation matrix, rm(nm) as a residual estimate for measurements, and ym(nm) as actual measurements. Part II
is a miscellaneous initialization where various values are inserted. Part III
provides for the in/out interface of the Kalman filter. Part IV provides sensor data values for simulation.
Part V has values provided for various time constants of each of the states of the process. Part VI reveals initialization of the states of the Kalman filter and the covariance matrices. The process noise covariance matrix is set for a fixed propagation time step and nonvarying elements are set for the state transition matrix.
Part VII
involves making initial estimates of noise variance in order to set the initial state of Kalman filter 15. The time loop is started in part VIII and the measurement data (ym) and the truth data (xt) are measured. If the system is in a simulated data mode then simulated truth and measurement data are generated. Otherwise, actual data is read in this part. Part IX involves state propagation and calculates new states based on prior information without the new measurements. Part X provides for covariance propagation. Part XI involves measurement processing (at video rates) wherein the degree of dependency between the sensor data with actual estimates of system states is established and a new observation matrix is calculated. Part XII sets forth a covariance matrix for predicted measurements. Part XIII involves the transferring out of already made calculations via an output array. The x-out symbol is a predicted measurement. It is the quantity which may be used to predict image plane coordinates of the helmet-mounted reflectors 12. x-out is part of the array hxout. Other elements of hxout are relevant only to the simulations such as truth data, errors relative to truth, and estimates of process noise.
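Part XI of the listing processes the six measurements one scalar at a time rather than as a single vector update. With h the observation row for one measurement, r its variance from rm, and ŷ the predicted measurement computed from the current state estimate (yhat in the listing), each scalar update has the form (a summary of what the code in Part XI computes, not additional disclosure):

$$k = \frac{P\,h}{h^T P\,h + r}, \qquad \hat{x} \leftarrow \hat{x} + k\,(y_m - \hat{y}), \qquad P \leftarrow P - (P\,h)\,k^T,$$

repeated for m = 1 through 6 within each video frame.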
Subpart A of part XIV reveals a subroutine which is called by the program and uses a mathematical approach to create noise. Subpart B is another subroutine which is a mathematical matrix utility which multiplies a square matrix by a line vector or linear chain of data. Subpart C is a mathematical matrix utility that multiplies matrices.
Subpart D is a subroutine that transposes one matrix and then multiplies it with another.
Subpart E is a subroutine that provides the dot product of vectors.
I. program htkfl2 parameter (ns=12) parameter (nm=6) double precision time character*128 arg1,arg2 dimension xv(ns),p(ns,ns),q(ns),h(ns),rm(nm),ym(nm) dimension phi_hi(ns,ns),phi_lo(ns,ns),ck(ns),tv(ns),ta(ns,ns) dimension hxout(5,6),xt(6) dimension x_out(6),cc(6,6),p_out(6,6),error(3,6) data rad2deg/57.29578/
data twopi/6.28319/
II. Miscellaneous Initialization dt hi = 1./420.
dt_lo = 1./60.
tmax = 5.0 rn = 20.
k=6 isimdat = 0 III. File I/O
if(isimdat.eq.0) then call getarg( l ,arg 1 ) 3 5 open( l ,file=arg l ,form='unformatted') read(1) call getarg(2,arg2) open(2,file=arg2,form='unformatted') read(2) endif open(3,file='out',form='unformatted') write(3) 5,6,1,1 open(4,file='err',form='unformatted') write(4) 3,6,1,1 wo 9snssos 21 l 7 611 PCT/US94113672 open(7,file='states',form='unformatted') write(7) 1,12,1,1 IV. Simulated Sensor Data Parameters (sinusoids in x/y/z/phi/tht/psi) (simulated random error one-sigma values in cm, radians) if(isimdat.eq.l) then amp x = 5.
amp_y = 5.
amp z = 2.
amp~hi = 20./rad2deg 1 S amp tht = 20./rad2deg amp-psi = 20./rad2deg w x = twopi/2.
w~ = twopi/2.
w z = twopi/2.
w~hi = twopi/2.
w tht = twopi/2.
w-psi = twopi/2.
phs x = 0./rad2deg phs_,y = 60./rad2deg phs_z = 120./rad2deg phs~hi = 180./rad2deg phs tht = 240./rad2deg phs~si = 300./rad2deg sig~os = 0.05 sig_ang = 0.005 endif V. These values for process and meal. noise give about 0.02 sec lag for measurements processed every 1 /60 sec.
vrw = 5.*25.
raterw = 1.0*25.
pos rand = 0.05 ang rand = 0.005 These values for process and meas. noise give about 0.5 sec lag for measurements processed every 1 /60 sec.
vrw = 5./25.
raterw = 1.0/25.
pos_rand = 0.05 ang_rand = 0.005 These values for process and measurement noise give about 0.1 sec lag for measurements processed every 1/60 sec.
vrw = 5.
raterw = 1.0 pos rand = 0.05 ang rand = 0.005 VI. Initialize Kalman Filter State Vector (xv) and Covariance Matrix (p).
Set Process Noise Covariance Matrix (q) for Fixed Propagation Time Step.
Define State Transition Matrix (phi_hi for high rate, phi_lo for video frame rate).
do 10 i=l,ns xv(i) = 0.
q(i) = 0.
do 5 j=l,ns p(i~j) = 0.
phi hi(i,j) = 0.
phi_lo(i,j) = 0.
5 continue 10 continue spos = 10.
svel = 10.
sangle = 1.0 srate = 1.0 p(1,1) = spos*spos p(2,2) = spos*spos p(3,3) = spos*spos p(4,4) = sangle*sangle p(5,5) = sangle*sangle p(6,6) = sangle*sangle p(7,7) = svel*svel p(8,8) = svel*svel p(9,9) = svel*svel p( 10,10) = srate* srate p( 11,11 ) = srate* srate p(12,12) = srate*srate q(7) = vrw*vrw*dt to q(8) = vrw*vrw*dt to q(9) = vrw*vrw*dt to q(10) = raterw*raterw*dt to q(11) =raterw*raterw*dt to q(12) = raterw*raterw*dt to WO 95/15505 217 7 611 p~ypS941136'72 do 20 i=l,ns phi hi(i,i) = 1.0 phi lo(i,i) = 1.0 S 20 continue phi hi(1,7) = dt hi phi hi(2,8) = dt hi phi hi(3,9) = dt hi phi hi(4,10) = dt hi phi hi(5,11 ) = dt hi phi hi(6,12) = dt hi phi 10(1,7) = dt to phi-lo(2,8) = dt to phi 10(3,9) = dt to phi 10(4,10) = dt to phi lo(5,11) = dt to phi 10(6,12) = dt to VII. Measurement Noise Variance rm(1) = pos rand**2 rm(2) = pos rand**2 rm(3) = pos rand**2 rm(4) = ang rand**2 rm(5) = ang rand**2 rm(6) = ang _rand**2 VIII. Start Time Loop; Read Measurement Data (ym) and Truth Data (xt) (if in simulated data mode, generate simulated truth and measurement data) 3 5 3 0 time = time + dt_hi k=k+1 if(isimdat.eq.l) then xs = amp x*cos(w x*time + phs x) ys = amps*cos(w_y*time + phs~) zs = amp z*cos(w z*time + phs_z) phis = amp~hi*cos(w_phi*time + phs~hi) thts = amp tht*cos(w tht*time + phs tht) psis = amp~si*cos(w~si*time + phs~si) sphis = sin(phis) cphis = cos(phis) sthts = sin(thts) cthts = cos(thts) wo 9snssos 217 l 611 ~'CT~s94/13672 spsis = sin(psis) cpsis = cos(psis) xt(1) = xs - 0.13 * (cpsi*sphi*stht-spsi*ctht) +
* 0.2 * (cpsi*sphi*ctht+spsi*stht) S xt(2) = ys - 0.13 * (spsi*sphi*stht+cpsi*ctht) +
* 0.2 * (spsi*sphi*ctht-cpsi*stht) xt(3 ) = zs - 0.13 * cphi * stht + 0.2 * cphi * ctht xt(4) = phis xt(5) = thts xt(6) = psis if (k.ge.7) then k=0 imeas = 1 ym( 1 ) = xt( 1 ) + sig~os*rnormQ .
1 S ym(2) = xt(2) + sig_pos*rnorm() ym(3) = xt(3) + sig~os*rnormQ
ym(4) = xt(4) + sig ang*rnormQ
ym(5) = xt(5) + sig_ang*rnorm() ym(6) = xt(6) + sig ang*rnorm() else imeas = 0 endif else if (k.ge.7) then k=0 imeas = 1 read(l,end=200) dum,(ym(n),n=1,6) read(2,end=200) dum,(xt(n),n=1,6) else imeas = 0 endif endif IX. State Propagation (at high rate; 420 Hz here) call mvau (phi hi,xv,tv,ns,ns,ns,ns,ns) do 50 i=l,ns xv(i) = tv(i) continue X. Covariance Propagation (only done at video update rate) (Note: real-time application may be used as fixed gain filter, with enormous savings in throughput requirements. If not, two improvements can be considered: ( 1 ) change to U-D mechanization for better wo mssos 217 7 611 PCT/IJS94/13672 numerical properties, and (2) take advantage of all the sparseness in the state transition matrix for reduced throughput needs.) if(imeas.eq.l) then S
call mab (phi lo,p,ta,ns,ns,ns,ns,ns,ns) call mabt(ta,phi_lo,p,ns,ns,ns,ns,ns,ns) do 60 i=l,ns p(i,i) = p(i,i) + q(i) 60 continue XI. Measurement Processing (at video rate) do 110 m=l,nm do 70 i=l,ns h(i) = 0.
70 continue if(m.le.3) then phi = xv(4) tht = xv(5) psi = xv(6) sphi = sin(phi) cphi = cos(phi) stht = sin(tht) ctht = cos(tht) spsi = sin(psi) cpsi = cos(psi) endif if(m.eq.l) then yhat = xv(1) - 0.13 * (cpsi*sphi*stht-spsi*ctht) +
* 0.2 * (cpsi*sphi*ctht+spsi*stht) h(1) = 1.0 h(4) _ -0.13 * (cpsi*sphi*ctht+sphi*stht) +
* 0.2 * (-cpsi*sphi*stht+spsi*ctht) h(S) _ -0.13 * (cpsi*cphi*stht) +
* 0.2 * (cpsi*cphi*ctht) h(6) = 0.13 * (spsi*sphi*stht+cphi*ctht) +
* 0.2 * (-spsi*sphi*ctht+cpsi*stht) elseif(m.eq.2) then yhat = xv(2) - 0.13 * (spsi*sphi*stht+cpsi*ctht) +
* 0.2 * (spsi*sphi*ctht-cpsi*stht) wo 95/1sS05 217 l 611 PGT/US94/13672 h(2) = 1.0 h(4) _ - 0.13 * (spsi*sphi*ctht-cpsi*stht) +
* 0.2 * (-spsi * sphi * stht-cpsi * ctht) h(S) _ - 0.13 * (sphi*cphi*stht) +
* 0.2 * (spsi*cphi*ctht) h(6) _ - 0.13 * (cphi*sphi*stht-spsi*ctht) +
* 0.2 * (cpsi*sphi*ctht+spsi*stht) elseif(m.eq.3) then yhat = xv(3 ) - 0.13 * cphi * stht + 0.2 * cphi * ctht h(3) = 1.0 h(4) _ - 0.13 * cphi * ctht - 0.2 * cphi * stht h(5) = 0.13 * sphi*stht - 0.2 * sphi*ctht elseif(m.eq.4) then yhat = xv(4) h(4) = 1.0 elseif(m.eq.5) then yhat = xv(5) h(5) = 1.0 elseif(m.eq.6) then yhat = xv(6) h(6) = 1.0 endif do 80 j=1,6 cc(m~j) = h(j) 80 continue r = rm(m) res = ym(m) - yhat call mvau (p,h,tv,ns,ns,ns,ns,ns) rescov = dotuv(h,tv,ns,ns,ns) + r do 90 i=l,ns ck(i) = tv(i)/rescov xv(i) = xv(i) + ck(i)*res 90 continue wo mssos 217 l 611 PCT/US94/136?2 do 100 i=l,ns do 95 j=l,ns p(i~j) = P(i~l) - ri'(i)*ck(j) 95 continue if(p(i,i) .le. 0.0) write(6,*)'Neg Cov, i =',i 100 continue 110 continue XII. Covariance matrix for predicted measurements (p-out = cc * p * cc') call mab(cc,p,ta,6,6,6,6,ns,ns) call mabt(ta,cc,p out,6,6,6,ns,6,6) endif XIII. Output Arrays (x_out is the predicted measurement. It is the quantity which would be used to predict image plane coordinates of the helmet-mounted reflectors.) x out is part of the array hxout. The other elements of hxout are really only relevant to simulations (ie. truth data, errors relative to truth, etc.) phi = xv(4) tht = xv(5) psi = xv(6) sphi = sin(phi) cphi = cos(phi) stht = sin(tht) ctht = cos(tht) spsi = sin(psi) cpsi = cos(psi) x out(1) = xv(1) - 0.13 * (cpsi*sphi*stht-spsi*ctht) +
* 0.2 * (cpsi*sphi*ctht+spsi*stht) x out(2) = xv(2) - 0.13 * (spsi*sphi*stht+cpsi*ctht) +
* 0.2 * (spsi*sphi*ctht-cpsi*stht) x out(3) = xv(3) - 0.13 * cphi*stht + 0.2 * cphi*ctht x out(4) = xv(4) x_out(S) = xv(5) x out(6) = xv(6) do 120 j=1,6 hxout( 1 ~j ) = x out(j ) hxout(2~j) = xt(j) hxout(3 ~j ) = ym(j ) WO 95/15505 217 7 611 pG'1'~594~13672 hxout(4~j) = x out(j) - xt(j) hxout(5 ~j ) = ym(j ) - xt(j ) 120 continue S c if(imeas.eq. l ) write(3) sngl(time),hxout write(3) sngl(time),hxout if(imeas.eq. l ) write(7) sngl(time),xv do 130 j=1,6 error( 1 ~j ) = x out(j ) - xt(j ) 130 continue do 140 j=1,6 error(2~j) = sqrt(p_out(j~j)) error(3 ~j ) =-sqrt(p-out(j ~j )) 140 continue if(imeas.eq. l ) write(4) sngl(time),error if(time.lt.tmax) go to 30 200 continue stop end XIV. rnorm A. function rnorm () This function generates zero-mean, unit-variance, uncorrelated Gaussian random numbers using the UNIX uniform [0,1 ] random number generator 'rand'.
rnorm = sqrt(-2.*alog(rand(0)))*cos(6.283185307*rand(0)) return end B. subroutine mvau Function : multiply a matrix and a vector to produce a vector:
v=a*u Note: v cannot be u.
Inputs pCTIUS94113672 wo 9siissos 217 7 611 a input matrix a input vector m row dimension of input matrix and output vector effective in the operation n column dimension of input matrix and dimension of input vector effective in the operation mra actual row dimension of input matrix mru actual dimension of input vector mrv actual dimension of output vector Outputs v output vector subroutine mvau (a,u,v,m,n,mra,mru,mrv) dimension a(mra, l ),u(mru),v(mrv) double precision sum do 20 i = l,m sum = O.dO
dolOj=l,n sum = sum + a(i~j)*u(j) 10 continue v(i) = sum 20 continue return end C. subroutine mab Function : perform matrix multiplication c = a * b Inputs a input matrix b input matrix m row dimension of a for the purpose of matrix multiplication 1 column dimension of a also row dimension of b for the purpose of matrix multiplication n column dimension of b for the purpose of matrix multiplication mra actual row dimension of a mrb actual row dimension of b mrc actual row dimension of c Outputs c matrix product of a and b Note:
S c cannot be a or b.
subroutine mab (a,b,c,m,l,n,mra,mrb,mrc) dimension a(mra, l ),b(mrb, l ),c(mrc, l ) double precision sum do30i=l,m do20j=l,n sum = O.dO
do 10 k = 1,1 sum = sum + a(i,k)*b(k~j) 10 continue c(i~j) = sum continue continue return end D. subroutine mabt Function : perform matrix multiplication c = a * trans(b) Inputs a input matrix b input matrix m row dimension of a for the purpose of matrix multiplication 1 column dimension of a also row dimension of b for the purpose of matrix multiplication n column dimension of b for the purpose of matrix multiplication mra actual row dimension of a mrb actual row dimension of b mrc actual row dimension of c Outputs c matrix product of a and txans(b) Note:
c cannot be a or b.
subroutine mabt (a,b,c,m,l,n,mra,mrb,mrc) dimension a(mra, l ),b(mrb, l ),c(mrc, l ) double precision sum 5 do30i=l,m do20j=l,n sum = O.dO
do 10 k = 1,1 sum = sum + a(i,k)*b(j,k) 10 10 continue c(i~j) = sum 20 continue 30 continue 15 return end E. function dotuv Function : perform the dot product of two vectors Inputs a input vector v input vector n dimension of u,v over which dot product is performed mru dimension of a mrv dimension of v Outputs value of function = dot product (u,v) function dotuv (u,v,n,mru,mrv) dimension u(mru),v(mrv) double precision sum sum = O.dO
do 10 i = l ,n sum = sum + u(i)*v(i) 10 continue dotuv = sum return end
BACKGROUND OF THE TNVEN'TION
The invention relates to helmet mounted display systems and particularly to helmet or head tracking systems.
Helmet-mounted display systems frequently have both gimbaled sensors and weapons that must follow the line of sight of the pilot and thus require accurate head tracking. There are various kinds of optical and magnetic helmet tracking systems. The magnetic tracking systems are presently the more accurate' and robust systems.
Several disadvantages of magnetic head trackers are the need to map the metal in the cockpit of the craft and the limited update rates. Magnetic head tracking systems work in areas where the amount of metal structure is limited. Application of such systems for combat vehicles or tanks is impractical because the metal structurE; of the tank or combat vehicle results in the magnetic head trackers as being untenable. 1~urther, the confined, vibrational environment proves existing optical systems to be as unlikely to solve the combat vehicle or tank head tracking problem.
Relevant art includes European patent document G 388 618 A3 published 26 September 1990. This document discloses a system for locating an object having reflectors with radiation emitted toward the object. The reflectors on the object reflect the radiation which is received by a videooptic sensor and the resultant signals are processed for location information about the object.
SUMMARY OF TIC INVENTION
The present invention is a high-speed videometric head-tracking system that incorporates a gradient reflector array, rigid-body Kalman filter motion predictor algorithm processor and a low-cost camera which senses source light reflected by the gradient reflector array. The reflectors include intensity gradient information as well as position information. The subpixel spot location estimator algorithm uses the gradient information from the reflectors to increase the accuracy of the reflector locator algorithm. The Kalman filter estimates the location of the helmet and the helmet line of sight to a greater accuracy than is attainable by image measurements alone by accounting for the natural coupling between the translational and rotational motions of the head and --also reduces the noise in the system and the motion predictor algorithm extrapolates the helmet location and line of sight at a higher rate than the measurement (update) rate.
The present invention is both accurate and cost effective. It is not affected by heavy magnetic environments and does not use elaborate optics. The system estimates boresight to a marking with a digital signal-processing algorithm. Helmet rotation and translation are calculated using the traditional principles of perspective images.
In accordance with this invention there is provided a videometric tracking system comprising: a light source for emitting light; a constellation of at least three reflectors situated on a helmet for reflecting the light from said light source; and a video camera for detecting the light reflected from said constellation; characterized by: a subpixel spot location estimator algorithmic processor, implemented in a processor, which utilizes video signals from said camera to establish image plane coordinates of said constellation to within a fraction of a pixel; a helmet three-dimensional (3-D) location and line of sight (LOS) calculation algorithmic processor which utilizes the image plane coordinates calculated by said subpixel spot location estimator algorithmic processor; and a filter implemented in a processor, connected to said 3-D location and LOS calculation algorithmic processor, for determining location of said constellation to a certain degree of precision and estimating the helmet 3-D location and body coordinate LOS at a rate faster than an actual measurement rate, via a state propagation.
Figure 1 is a layout of the basic components of the tracker system.
Figures 2a and 2b illustrate a gradient reflector.
Figure 3 reveals a view of a constellation of gradient reflectors.
Figure 4 is a diagram of a rigid-body geometric configuration of the head, neck and torso.
Figure 5 shows the geometry of a head tracker box.
Figure 6 is a diagram showing the relative locations of the camera and vehicle coordinate systems.
Figure 7 is a block diagram of the subpixel spot location estimator algorithm and the track point 3-D location and helmet line of sight calculation.
Figure 8 shows hardware and the corresponding functions of the tracker.
Figure 9 shows another version of hardware and the corresponding functions of the tracker.
Figure 10 is a flow diagram of the Kalman filter for the tracker.
Videometric head tracker system 10 in Figure 1 has an ultraviolet (UV) sensitive camera 11 that senses reflections from gradient reflectors 12 which are situated on helmet 13. Light source 14 illuminates gradient reflectors 12 so that they become sources or spots 12 of light. The output of camera 11 goes to a subpixel spot location algorithmic estimator 29. Estimator 29 averages out the errors of the feature or location estimates to provide better than single pixel resolution, that is, "subpixel" spot location, of spots 12.
The output of estimator 29 is estimated image plane 28 (of Figure 6) coordinates of the centroid of each gradient reflector 12 of the configuration of five gradient reflectors (which may be another number of reflectors such as three or four). The estimated image plane 28 coordinates of each gradient reflector 12 are transferred to a track point three dimensional (3-D) location and helmet line-of-sight (LOS) calculation algorithmic processor 33. LOS is the orientation of helmet 13. The track point 3-D location and helmet line-of-sight data, calculated using the equations of perspective, go to a rigid-body kinematics Kalman filter/predictor 15. Kalman filter/predictor 15 is a processor that provides representations of the general six-degree-of-freedom kinematics of helmet 13 and the pilot's head. The estimated states include location, velocity, angle, and angular rates of rigid-body (helmet) 13 as a function of external forces, specifically gravity and the forces exerted by the muscles in the pilot's head. System 10 relates the image measurement error for spot location and angle of orientation to the Kalman filter residuals using a unique spot design in which the intensity of the reflective signal varies across one axis of gradient reflector 12 as shown by graph 16 of Figure 2a.
Spots 12 have contrast gradients that peak in the center of the reflector structure in order to assure the most accurate derivation of spot location.
Subpixel spot location estimator 29 operates with analog serial image 28 data, in an RS-170 format, from camera 11, through an analog-to-digital converter in a DATACUBE MAXVIDEO 20 processor. The digital signal from the converter goes to a DATACUBE card 56, model MAXVIDEO 20, for processing to determine threshold image 28 intensities at the 99.5 percent level in section 46 (see Figure 7) and to section 47 to establish the location of the strongest orthogonal edges 77 relative to roof edges 76 of the detected gradient patterns 16 of reflectors 12, using a Canny edge operator for image feature detection that detects edges of gradient patterns 16 or spots 12. Only one spot 12 is operated on at a time. The output from section 46 goes to a card made by an Australian company, Vision Systems International, Pty., Ltd.
The APA512 card 57 is utilized to calculate the minimum orthogonal distance squared error line for roof edges 76 of gradient patterns 16 of reflectors 12, in section 48. The outputs of sections 47 and 48 go to section 49 of INTEL i860 processor card 58, to identify a parallel pair of lines 78 that has the largest average gradient magnitude. An approximation line 80 is superimposed. The output of section 49 goes to a decision section 50 which passes the output on to section 51 if such a pair of parallel lines 78 is identified, or to section 52 if such a pair of parallel lines is not identified.
A pair of lines 79 (Figure 2b) may be found, but these would be a false detect because they would not be the pair of lines having the largest average gradient. If the pair of parallel lines is found, then section 51 establishes a gradient pattern center location as an image plane location midway between the intersection points of the minimum orthogonal distance line as calculated in section 48 of card 57 and the orthogonal edges 77 as established in section 47 of card 56. If the pair of parallel lines is not found, then section 52 establishes a conventional gradient pattern center location as a centroid of a binarized gradient pattern image. An output from section 51 or 52 goes to section 53 to find the left-most gradient pattern and then to identify corner points via a leftward bearing search. In addition to section 49, sections 50, 51, 52 and 53 are incorporated in INTEL
i860 card 58. The output of section 53 goes to section 31 to calculate helmet coordinates and line of sight. Section 31 is in INTEL i860 card 58 as part of track point 3-D location and helmet line of sight algorithmic processor 33. Application-dependent parameters for the track point 3-D location and helmet line of sight algorithmic processor 33, including field of view, image plane dimensions, gradient pattern constellation dimensions and focal length, are output from section 54 and entered into section 31.
These application-dependent parameters are stored in the memory of card 58.
The six reference coordinates (3 position coordinates plus 3 line of sight orientation coordinates) of the unique reference 38 of constellation 45, and the orientation of helmet 13, are calculated. The output of section 31 of device 33 goes to Kalman filter/predictor 15.
Filter/predictor 15 is mapped to a portion of INTEL i860 card 58.
3M 198 high-reflectance or retroreflectance tape may be used for retroreflectors 12. The retroreflectance tape absorbs in the ultraviolet range due to a polymer overcoat.
Applying varying thicknesses of polymer overcoat to the high reflectance materials results in a symmetrical gradient reflector whose center has no polymer overcoat and thus has the highest reflectance. The pattern is printed on a cellophane-like material and then is bonded to a resin circuit board. The reflectance is supposed to decrease linearly from the center by applying appropriately thicker absorbing polymer overcoat across reflector 12. Actually, the curve showing the change of reflectance along one dimension of reflector 12 appears to have a quadratic shape 16 as indicated in Figure 2a.
The location of the gradient patterns of constellation 45 of five spots or reflectors 12, shown in Figure 3, is used for calculation of helmet 13 three dimensional (3-D) location and orientation. The estimates of the 3-D location and coordination frame orientation are transferred to a rigid-body Kalman filter/predictor 15 to obtain smooth, minimum mean-square-error estimates of parameter data or, in other words, temporal- or time-averaged estimates of parameter data. The gradient pattern of center reflector 12 is orthogonal to the gradient patterns of the other four reflectors 12.
Camera 11 may be one of various kinds of cameras. System 10 is based on a 60-Hz non-interlaced CCD camera that is sensitive to the ultraviolet (UV) (250 nanometers (nm)) and has a standard RS-170 video output. The silicon lens of camera 11 is replaced with a quartz lens for UV detection. A TEXAS INSTRUMENTS INC.
(TI) MC780 PHU camera is a preferred camera 11 in system 10. The MC-780 PHU
has 755 x 488 active photosites with a resolution of 500 x 350 TV lines. The manufacturer, TI, substituted the standard CCD glass window with a quartz (fused SiO2) window to provide the necessary ultraviolet sensitivity. TI camera 11 is a solid-state monochrome television camera that employs the TI TC245 frame-transfer charge-coupled image sensor.
TI MC780 PHU camera 11 has a 25% quantum efficiency (QE) which extends down to 220 nm. The TI CCD is also rated for a 25% QE at 365 nm, which makes it useful at the latter wave length as well. Another preferred and similar camera is the remote head, MC-780 PHR, having the same specifications as the MC-780 PHU, but with smaller dimensions.
Light source 14 provides ultraviolet illumination for system 10. Both 254 nm and 365 nm sources have been examined. The 365 nm source has several advantages over the shorter wave length source. First, the 365 nm source has a potentially safer wave length for human interaction (i.e., eyes and skin exposure), and second, it provides less stringent requirements for the optics in that the lens designs are easier to achieve and are less expensive than the lens systems for the 254 nm source. Even though the 254 nm wave length system has the advantage of a lower noise background, experiments show this signal-to-noise ratio advantage to be minimal in view of the additional features provided by the 365-nm wave length system. The 365-nm lamp of choice for source 14 is an ultraviolet quartz pencil lamp having a length of 2-1/8 inches with a 3/8 inch outside diameter and a weight of 2 ounces. The lamp irradiates microwatts per square centimeter at 6 inches and provides enough light to distinguish the targets on a monochrome monitor. A typical lamp is rated for 5000 hours of operation. The fused quartz envelopes have excellent ultraviolet transmission properties, with the isolated spectral lines being strong and well separated.
Such a lamp is also cool burning.
The diffused and specular reflectance of a standard combat vehicle crew (CVC) helmet was measured with a CARY spectrophotometer with an integrating sphere attachment. The average reflectance is 4.4% in the ultraviolet (200-400 nm) waveband.
A highly diffuse reflector or retroreflector with a 60% reflectivity will give a signal-to-noise ratio of 60/4.4 or 13.6. A signal-to-noise ratio of 10 is sufficient for the spot positioning algorithms of subpixel spot location estimator 29.
The following table shows f numbers (f# vs. aperture opening in mm) calculated for TI MC780 PHU (755 x 488) camera 11, which has a diagonal 8-mm active area, as a function of range (R) in cm, focal length (fL) in mm, and field of view angle (theta) in degrees.
R (cm)   Theta (deg)   fL (mm)   f#(10)   f#(15)   f#(20)   f#(25)   f#(30)
15.2       90.19         3.98     0.40     0.27     0.20     0.16     0.13
20.3       73.84         5.32     0.53     0.35     0.27     0.21     0.18
25.4       61.97         6.65     0.67     0.44     0.33     0.27     0.22
30.5       53.13         7.99     0.80     0.53     0.40     0.32     0.27
35.6       46.38         9.33     0.93     0.62     0.47     0.37     0.31
40.6       41.18        10.64     1.06     0.71     0.53     0.43     0.35
45.7       36.91        11.97     1.20     0.80     0.60     0.48     0.40

The body-axis or coordinate system establishing the helmet 13 location and orientation is located at a point 17 between the pilot's shoulders at the base of the neck as shown in Figure 4. The head and neck, to a first-order approximation, are characterized as a pendulum 20 that is attached to the torso at the base of the neck by a pivot joint 17. Figure 4 is the rigid-body geometric configuration of the head, neck and torso used for trajectory file calculation. Since the head and neck are taken to be a rigid body, if the location and line-of-sight (LOS) orientation of the body-axis coordinate system are known, the coordinates and orientation of thin-film spots 12 applied to helmet 13 are established uniquely. The first-order approximation for the dynamics of torso 18 is the assumption that torso 18 is a rigid body attached to the hips by a pivot joint 19, and that the hips are stationary relative to the vehicle coordinate frame because of the seat safety restraints. The radial dimensions of the pendulums 18 and 20 used by Kalman filter/predictor 15 to establish trajectories/orientations from image feature location measurements are obtained from tables of average human frame dimensions. For males, the average height of the shoulders above the hips is 59 centimeters or 1.94 feet and the average height of eye level above the hips is 79 centimeters or 2.59 feet. Helmet 13 is approximated as a sphere of a radius of 15 centimeters (5.9 inches) with its center 55 located at the extreme end of the second pendulum 20, which represents the head and the neck.
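The f-numbers in the preceding table follow from two elementary relations: the focal length fL that spans the field of view with the 8-mm image diagonal, fL = (d/2)/tan(theta/2), and f# = fL/(aperture diameter). The short Fortran sketch below reproduces the table. It additionally assumes that theta is the angle subtended at range R by the 12-inch tracker box dimension of Figure 5; that linkage is an inference from the listed values rather than a statement from the original text, and the program and variable names are illustrative.

      program fnums
c     Reproduce the f-number table: theta from range R and the half
c     width of the 12 inch tracker box, fL from the 8 mm image
c     diagonal, and f# = fL / aperture.  Illustrative sketch only.
      real r(7), ap(5), diag, halfbx, theta, fl
      integer i, j
      data r /15.2, 20.3, 25.4, 30.5, 35.6, 40.6, 45.7/
      data ap /10., 15., 20., 25., 30./
      diag = 8.0
      halfbx = 15.24
      do 20 i = 1, 7
         theta = 2.0*atan(halfbx/r(i))
         fl = 0.5*diag/tan(0.5*theta)
         write(6,100) r(i), theta*57.29578, fl, (fl/ap(j), j = 1, 5)
   20 continue
  100 format(f6.1, f8.2, f7.2, 5f7.2)
      end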
The geometry of a head tracker box 26 is shown in Figure 5. Dimensions 34 are 12 inches and dimensions 36 are 6 inches. Camera 11 is located so that the optical axis 40 intersects the point [0, 0, 0]T at location 32, and point [0, 0.23, -0.5]T
feet at location 38. The camera 11 depression angle 44 is -25°.
The relative location of camera 11 coordinate system (xc, yc, zc) at origin 43 with respect to the coordinate system (xr, yr, zr) at origin 32 of the vehicle is shown diagrammatically in Figure 6. Figure 6 depicts the nominal imaging configuration of camera 11. Ultraviolet camera 11 is oriented so that it views center 38 of head tracker box 26 of Figure 5. Origin 43 of the camera 11 coordinate system is located at the center of image plane 28, the xc and zc axes are square to the orthogonal edges of image plane 28, and the yc axis is parallel to the optical axis 40. Distance 41 is the focal length. Origin 43 of the camera 11 coordinate reference frame is located at a distance 42 from origin 32 of the reference coordinate system (xr, yr, zr), where distance 42 is a user-specified parameter. Distance 60 is the length between origin 32 and constellation 45 reference point 38. The coordinates of the image of each vertex of gradient reflectors 12 are calculated using the equations of perspective.
The image plane 28 coordinates ri' = [xc' zc']T of a three-dimensional location ri = [xc yc zc]T in the camera 11 coordinate system are obtained with the following equations:

xc' = xc F / (yc - F)

zc' = -zc F / (yc - F)

where xc' is the x coordinate, in the camera 11 coordinate system, of the image of the three-dimensional coordinate ri, zc' is the z coordinate, in the camera 11 coordinate system, of the image of the three-dimensional coordinate ri, xc is the x coordinate of the camera 11 coordinate system of the three-dimensional coordinate, yc is the y coordinate of the camera 11 coordinate system of the three-dimensional coordinate, zc is the z coordinate of the camera 11 coordinate system of the three-dimensional coordinate, F is the focal length of camera 11, and the transpose of a vector is denoted by the superscript T.
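A direct transcription of these projection equations, written as a small subroutine in the style of the matrix utilities in the program listing later in this description (the subroutine and argument names are illustrative and are not part of the original program):

c     Perspective projection of a point (xc,yc,zc), given in camera
c     coordinates, onto the image plane: f is the focal length and
c     (xp,zp) are the image plane coordinates.  yc is assumed to be
c     well away from f.
      subroutine persp (xc, yc, zc, f, xp, zp)
      real xc, yc, zc, f, xp, zp
      xp =  xc*f/(yc - f)
      zp = -zc*f/(yc - f)
      return
      end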
Center 43 of detector array 45 is located with the midpoint of the array in the x and z directions coincident with the location where the optical axis 40 intersects image plane 28. The edges of image plane 28 detector array are square with camera 11 coordinates (xc, yc, zc).
Figure 7 is a block diagram of subpixel spot location algorithmic estimator 29 and track point 3-D location and helmet line of sight calculation 33. The function of estimator 29 is applied separately to each spot 12 of constellation 45. The base line approach uses the uniquely identifiable attributes of gradient patterns 16 to estimate their locations with greater accuracy and reliability than is possible with a traditional centroid estimation approach.
Shown in the feature data calculation stage 30 of Figure 7, the images are transformed into feature (iconic) data by two generic, that is, customary or traditional, feature detection stages: thresholding 46 and edge detection 47. The purpose of the thresholding operation 46 is to isolate the gradient pattern from background noise and to obtain a first-order estimate for the orientation of the ridge of gradient pattern 16.
Operation 46 identifies the 0.5 percent of the scene's light intensity that is from each reflector 12, at a detection level that excludes the remaining 99.5 percent of the light over the gray scale of the image. The Canny edge operator 47 (an estimator of image gradient) is used to detect the abrupt orthogonal edges 77 to roof edge 76 of each gradient pattern 16. The aggregation of features obtained from these stages can be used to calculate the centroid of gradient pattern 16 with greater accuracy than might be possible by mapping the image of gradient pattern 16 into a binary image and conventionally calculating the centroid of the thus obtained region for the following reasons. Because the imaging system is required to have a wide field of view to view the entire head tracker box 26, the effects of perspective distort the images of gradient patterns 16. When an image of a square region is obtained with a perspective imaging system, the region is distorted into the shape of a parallelogram. Centroid estimates for distorted gradient patterns are not guaranteed to estimate the true centroid due to pixel quantization and noise. Thus, an alternate approach that identifies unique and high signal-to-noise ratio attributes of gradient patterns 16 is required.
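Operation 46 amounts to finding the gray level below which 99.5 percent of the image pixels fall, so that only the brightest 0.5 percent (the reflector returns) survive. The sketch below shows one way such a histogram threshold could be computed; in system 10 this function runs on the DATACUBE hardware, so the routine, its name and its arguments are only an illustration of the idea.

c     Return the gray level ithr such that 99.5 percent of the npix
c     pixel values in img (0 to 255) lie at or below it.
      subroutine thr995 (img, npix, ithr)
      integer npix, img(npix), ithr
      integer ihist(0:255), i, icum, icut
      do 10 i = 0, 255
         ihist(i) = 0
   10 continue
      do 20 i = 1, npix
         ihist(img(i)) = ihist(img(i)) + 1
   20 continue
      icut = nint(0.995*real(npix))
      icum = 0
      do 30 i = 0, 255
         icum = icum + ihist(i)
         ithr = i
         if (icum .ge. icut) return
   30 continue
      return
      end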
Next stage 48 of transforming the image into features is to calculate the image plane 28 orientation of the minimum squared orthogonal distance metric line for the binarized images of gradient patterns 16. The orthogonal distance metric is defined as the perpendicular distance from each quantized image location of the digitized or binarized image to the minimum error line or minimal orthogonal distance error line.
This approximation line 80 on the image provides the location and slope (i.e., orientation) of a detected reflector 12. The slope and reference coordinates for this line are determined by calculating the scatter matrix (which is a squared-error matrix of the spatial dispersion of the binary region of the roof or orthogonal edge of the gradient pattern 16) of the binarized region and establishing the orientation of the line as that eigenvector (i.e., a mathematical term for the statistical behavior of the spatial or binarized region of the matrix) of the scatter matrix that corresponds to the largest eigenvalue (which is a blending percentage of the eigenvector for the principal component decomposition of the matrix). The classic relation is (M - λI)v = 0, where M is the matrix, I is the identity matrix, λ is the eigenvalue and v is the eigenvector. The reference coordinate of the line is always the centroid of the binarized region. This approach to estimation of gradient reflector ridge orientation is used because the derived image plane orientation is invariant to location (the x and z coordinates of the quantized image region), whereas approximations calculated by linear regression are not.
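Because the image is two-dimensional, the scatter matrix M is only 2 x 2 and its principal eigenvector has a closed form, so the ridge orientation can be computed without a general eigenvalue routine. The following sketch (illustrative names, not part of the original program) returns the centroid used as the line's reference coordinate and the orientation of the minimum squared orthogonal distance line for the pixels of a binarized region:

c     Orientation of the minimum squared orthogonal distance line for
c     n image points (x(i),z(i)): centroid (x0,z0) plus the angle ang
c     of the principal eigenvector of the 2x2 scatter matrix.
      subroutine ridge (x, z, n, x0, z0, ang)
      integer n, i
      real x(n), z(n), x0, z0, ang, sxx, sxz, szz, dx, dz
      x0 = 0.
      z0 = 0.
      do 10 i = 1, n
         x0 = x0 + x(i)
         z0 = z0 + z(i)
   10 continue
      x0 = x0/real(n)
      z0 = z0/real(n)
      sxx = 0.
      sxz = 0.
      szz = 0.
      do 20 i = 1, n
         dx = x(i) - x0
         dz = z(i) - z0
         sxx = sxx + dx*dx
         sxz = sxz + dx*dz
         szz = szz + dz*dz
   20 continue
      ang = 0.5*atan2(2.0*sxz, sxx - szz)
      return
      end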
The next step 49 of feature detection is to establish which lines of the array of lines, for instance, pairs 78 and 79 of lines, cued by the Canny edge operator 47 are the parallel pair of abrupt edges of gradient patterns 16 that are orthogonal to roof edge 76.
The criteria used to distinguish these lines are restraints on angular offset from true parallel, restraints on angular offset from square with roof edge 76, and restraints on length differences between the pairs of lines. Border edges 77 have the largest edge strength so as to eliminate false detects.
The next step 50 is to determine whether a pair of parallel lines can be found. If a pair of parallel lines, such as pair 78 (since pair 79 is a false detect), is found which is compatible with the specific restraints applied, then, in step 51, the location of the intersection of the roof edge approximation line 80 with the pair of lines 78 is calculated. The gradient 16 reflector 12 centroid coordinate is obtained as the image plane location midway between the intersection points, along roof edge 76 approximation line 80. If a pair of parallel lines is not found, then, in step 52, the centroid of the binarized image component for gradient pattern 16 is used, using conventional calculation techniques which use no data of the gradient 16 information developed above.
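Step 51 reduces to intersecting the roof edge approximation line 80 with each of the two border edges 77 and averaging the two intersection points. A sketch of that geometry follows, with each line described by a reference point and a direction angle; the routine names and argument conventions are assumptions for illustration, and the two lines are assumed to be far from parallel, as a roof edge and its orthogonal border edges are.

c     Spot center as the midpoint of the intersections of the roof
c     edge line (x0,z0,a0) with the two border edge lines (x1,z1,a1)
c     and (x2,z2,a2); each line is a point plus a direction angle.
      subroutine spotcn (x0, z0, a0, x1, z1, a1, x2, z2, a2, xc, zc)
      real x0, z0, a0, x1, z1, a1, x2, z2, a2, xc, zc
      real xi1, zi1, xi2, zi2
      call xsect (x0, z0, a0, x1, z1, a1, xi1, zi1)
      call xsect (x0, z0, a0, x2, z2, a2, xi2, zi2)
      xc = 0.5*(xi1 + xi2)
      zc = 0.5*(zi1 + zi2)
      return
      end

c     Intersection of lines p + t*(cos a, sin a) and q + s*(cos b,
c     sin b); den is well away from zero when the lines are not
c     nearly parallel.
      subroutine xsect (px, pz, a, qx, qz, b, xi, zi)
      real px, pz, a, qx, qz, b, xi, zi, ux, uz, vx, vz, den, t
      ux = cos(a)
      uz = sin(a)
      vx = cos(b)
      vz = sin(b)
      den = ux*vz - uz*vx
      t = ((qx - px)*vz - (qz - pz)*vx)/den
      xi = px + t*ux
      zi = pz + t*uz
      return
      end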
In step 53, the mapping from image coordinates of the five gradient patterns to the pyramid configuration that the algorithms use is obtained by scanning the coordinates of the detected gradient patterns for that gradient pattern whose x coordinate is the minimum of the five coordinates. Because the five-gradient-pattern configuration is restrained to be oriented at an angle of 0 to 45 degrees with respect to image plane 28, the gradient pattern 81 cued as being furthest left must be a corner point of the base plane of constellation 45 (in Figure 3). The corner point is used to reference the current frame to a previous frame to match reflector 12 (spot) locations of the respective frames of images 28. The remaining three corner points are discriminated by executing a leftward bearing search from the first corner point. The gradient pattern not identified as a corner point is the gradient pattern 82 at the pyramid's peak; the pyramid corner points are defined by gradient patterns 16. The particular gradient pattern 16 designated as the furthest left pattern 81 may be a different reflector 12 in another frame or time.
Step 31 incorporates the calculation of the helmet 13 three-dimensional coordinates and line of sight, with information from step 53 and parameters from source 54. The estimated helmet 13 coordinate system translation and LOS are output as a serial list of floating-point numbers [xr, yr, zr, psi, theta, phi] at each frame time, where r = [xr, yr, zr] and psi, theta, phi represent the reference coordinate frame coordinates and the yaw, pitch, and roll, respectively, of the body coordinate system relative to the reference coordinate system.
The single-frame-derived positions and angles are filtered to reduce the effects of random measurement error and to allow extrapolation of the positions and angles between video frames by the Kalman filter/predictor 15. These extrapolated quantities are used to increase the apparent output data rate from the 60 Hz video rate to approximately 400 Hz. The extrapolated quantities also allow the image plane position of a gradient reflector 12 to be accurately predicted on the basis of past video frames, reducing the image processing throughput requirements.
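Between video frames the extrapolation is simply a constant-velocity, constant-angular-rate propagation of the filtered states at the higher output rate, equivalent to the high-rate transition used in the program listing later in this description. Written out directly (illustrative names; the first six states are positions and angles, the last six their rates):

c     Propagate filtered positions and angles one high-rate step of
c     dthi seconds (about 1/420 sec) using the current velocity and
c     angular rate estimates in states 7 through 12.
      subroutine extrap (xv, dthi)
      real xv(12), dthi
      integer i
      do 10 i = 1, 6
         xv(i) = xv(i) + dthi*xv(i+6)
   10 continue
      return
      end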
Figure 8 shows the invention and its functions in conjunction with corresponding hardware. Item 56 is a DATACUBE MAXVIDEO 20. Function 47 contains Gaussian window smoothing 61, Sobel operator 62, histogram thresholding 63 and feature extraction 64. Gaussian window smoothing blurs the image to eliminate the noise; the Sobel operator is an edge detector; histogram thresholding determines the 99.5 percentile of the total distribution of gray levels of the image; and the Sobel operator, histogram thresholding and feature extraction constitute a Canny edge operator 82. Function 46 contains histogram thresholding. Item 57, denoted with a subscript in Figure 8 (subscripts are used to distinguish among multiple boards of the same type), is a Vision Systems International Pty., Ltd. APA 512 which contains feature extraction 48 which picks out the roof edge of spot 12. Feature extraction 64 is carried out by item 65, another APA 512. The outputs of items 57 and 65 go into device 58, which encompasses block 66, which involves the calculation of minimal orthogonal distance line coefficients from information out of device 57, and function 67, for calculation of minimal orthogonal distance line coefficients from information out of device 65.
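The Sobel operator 62 named above is the standard 3 x 3 gradient estimator; a minimal sketch of it (array names and the interior-pixel-only treatment of the border are illustrative assumptions) is:

c     3x3 Sobel operator: gradient magnitude at each interior pixel
c     (i,j) of an nr-by-nc image a; one ingredient of the Canny edge
c     operator 82.  Border pixels of g are left unset.
      subroutine sobel (a, nr, nc, g)
      integer nr, nc, i, j
      real a(nr,nc), g(nr,nc), gx, gz
      do 20 j = 2, nc - 1
         do 10 i = 2, nr - 1
            gx = (a(i-1,j+1) + 2.*a(i,j+1) + a(i+1,j+1))
     &         - (a(i-1,j-1) + 2.*a(i,j-1) + a(i+1,j-1))
            gz = (a(i+1,j-1) + 2.*a(i+1,j) + a(i+1,j+1))
     &         - (a(i-1,j-1) + 2.*a(i-1,j) + a(i-1,j+1))
            g(i,j) = sqrt(gx*gx + gz*gz)
   10    continue
   20 continue
      return
      end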
Feature extraction 64 provides the orthogonal edges of reflectors 12.
Functions 66 and 67 transform image coordinates of binarized regions of features to an edge equation or line representation, or map features to a linear edge representation. The outputs of function 66 and 67 go into calculate bore sight portion 51 and portion 52, respectively.
Functions 51 and 52 provide the angular coordinates of the center of each spot 12.
There are five blocks 58, each of which is mapped to a separate i860; individual hardware components are distinguished by subscripts which designate processing for each spot in Figure 8, for the five spots of constellation 45 of reflectors 12. The outputs go into function 33 for calculating helmet 13 location and orientation and line of sight of the helmet. The resulting output of item 33 goes to Kalman filter 15. Block 33 and Kalman filter 15 are executed by a dedicated i860. The output of the system 10 is from Kalman filter 15 to provide inputs to a fire control computer or to imaging system servos.
Figure 9 is similar to Figure 8 except that calculation of minimal orthogonal distance line coefficients 66, 67 and calculation of boresight 51, 52 are performed by the same hardware for each spot 12. Also, function 33 for calculating helmet location and line of sight, and Kalman filter 15, are on portion 68 of device MVME 147, which is a MOTOROLA single-board 68030 computer.
Figure 10 is a functional flow diagram of the Kalman filter 15 process.
Triggering signal 69 from track point 3-D location and helmet LOS 33 initializes Kalman filter 15 at block 70. The output goes to state propagation block 71 for determining the time state that the system is in, or receiving a time increment signal from block 72. Output of block 71 goes to decision symbol 73 which determines whether a new video frame is present. If not, and t of block 72 is incremented to the next time state with an output to block 71, then states are propagated forward and x̂_k|k-1 is output from block 71. If the answer is yes, then a signal goes to the covariance propagation block 74 which has an output onto measurement processing block 75.
Note that state propagation block 71 outputs position estimates x̂_k, which are the output of Kalman filter 15; however, x̂_k = x̂_k|k when the estimate is based on a measurement, and x̂_k = x̂_k|k-1 when the estimate is based on an estimate. The other input to block 75 includes measurement signals ym_k from track point 3-D location and helmet LOS 33.
In Figure 10, subscript "k" is the indication of a time state for the video frame.
Subscript "k~k" indicates a revised estimate as a result of current measurements. The capital letters are matrices, the small letters are vectors except for the t's of block 72, and the superscripts and the subscripts. Superscript "-1" indicates matrix inverse and superscript "T" indicates transposed matrix. Capital K is a Kalman filter gain factor.
The carat "~" over the "x" or "y" symbol means that the x or y is an estimate.
"H" is an observation matrix. Underlined "~" or "y" means the term is a multidimensional vector such as a 3-D spatial coordinate. Subscript "HI" means high; "LO" means low.
As in any filtering application, selection of the filter bandwidth represents a tradeoff between too little filtering, which does not adequately attenuate random measurement errors, and too much filtering, which causes excessive lag in the estimated quantities. For system 10 of camera 11, the filter design parameters are chosen so as to introduce no more than 0.1 second lag into the filtered outputs. Given that six video frames are collected every 0.1 second, the effect of the filtering on measurement errors that vary randomly from frame to frame is to reduce them by a factor of √6, or about 2.4. Errors that are correlated from frame to frame will see less attenuation.
The following table shows head centroid position and Euler angle errors (rms) for the unfiltered and filtered approaches.
Two approaches to signal filtering were evaluated, which are denoted the "decoupled" and "coupled" approaches. In the decoupled approach, a two-state Kalman estimator of the random velocity type was used for each of the six inputs (x-y-z positions and the three Euler angles, roll-pitch-yaw). In the coupled-filter approach, the algorithm accounts for the natural coupling between translational and rotational motions of the head. In other words, this model attributes a large portion of the translational motion to a rotation about a point 17 (Figure 4) representing the base of the neck. The dynamical models used to represent this filter design are a process model and a measurement model. The process model, as a basis of Kalman filter 15, defines new data (estimates) as a function of estimates to establish the state transition matrix CHI. The measurement model, as a basis of Kalman filter 15, defines new data (estimates) as a function of measurements to establish the observation matrix H. The following lays out the models.
Process Model -

dx/dt = v_x + {0.2[cos ψ sin θ cos φ - sin ψ sin φ] + 0.13 cos ψ cos θ} w_ψ
            + {0.2 sin ψ cos θ cos φ - 0.13 sin ψ sin θ} w_θ
            + {0.2[cos ψ cos φ - sin ψ sin θ sin φ]} w_φ

dv_x/dt = η_vx,   E[η_vx(t) η_vx(τ)] = q_v δ(t - τ)

dy/dt = v_y + {0.2[sin ψ sin θ cos φ + cos ψ sin φ] + 0.13 sin ψ sin θ} w_ψ
            + {0.13 cos ψ sin θ - 0.2 cos ψ cos θ cos φ} w_θ
            + {0.2[cos ψ sin θ sin φ + sin ψ cos φ]} w_φ

dv_y/dt = η_vy,   E[η_vy(t) η_vy(τ)] = q_v δ(t - τ)

dz/dt = v_z - [0.2 sin θ cos φ + 0.13 cos θ] w_θ - 0.2 cos θ sin φ w_φ

dv_z/dt = η_vz,   E[η_vz(t) η_vz(τ)] = q_v δ(t - τ)

dψ/dt = w_ψ,   dw_ψ/dt = η_wψ,   E[η_wψ(t) η_wψ(τ)] = q_w δ(t - τ)

dθ/dt = w_θ,   dw_θ/dt = η_wθ,   E[η_wθ(t) η_wθ(τ)] = q_w δ(t - τ)

dφ/dt = w_φ,   dw_φ/dt = η_wφ
E[η_wφ(t) η_wφ(τ)] = q_w δ(t - τ)

Measurement Model -

x_m = x + η_x,   y_m = y + η_y,   z_m = z + η_z
E[η_x²] = E[η_y²] = E[η_z²] = r_xyz

ψ_m = ψ + η_ψ,   θ_m = θ + η_θ,   φ_m = φ + η_φ
E[η_ψ²] = E[η_θ²] = E[η_φ²] = r_ψθφ
Where the state vector is:
x = [x_r, y_r, z_r, φ, θ, ψ, v_x, v_y, v_z, w_φ, w_θ, w_ψ]^T
x_r is the x coordinate of the reference point of the configuration of five gradient reflectors, measured relative to reference or vehicle coordinates;
y_r is the y coordinate of the reference point, measured relative to reference coordinates;
z_r is the z coordinate of the reference point, measured relative to reference coordinates;
φ is the roll of the body coordinate system, measured relative to reference coordinates;
θ is the pitch of the body coordinate system, measured relative to reference coordinates;
ψ is the yaw of the body coordinate system, measured relative to reference coordinates;
v_x is the x component of the velocity of the base 17 of the neck;
v_y is the y component of the velocity of the base 17 of the neck;
v_z is the z component of the velocity of the base 17 of the neck;
w_φ is the angular roll rate of the body coordinate system, measured relative to reference coordinates;
w_θ is the angular pitch rate of the body coordinate system, measured relative to reference coordinates; and
w_ψ is the angular yaw rate of the body coordinate system, measured relative to reference coordinates.
New estimates of the state vector x̂_k+1 are calculated from the current time estimate x̂_k with the following state propagation equation:

x̂_k+1 = CHI x̂_k

The covariance matrix P_k+1 is calculated with the covariance propagation equation:

P_k+1 = CHI_LO P_k CHI_LO^T + Q

where CHI is the 12 x 12 state transition matrix, equal to the identity except that each position and angle state is advanced by its corresponding rate over the time step Δt; in block form,

CHI = [ I_6   Δt I_6 ]
      [ 0_6   I_6    ]

Δt = 1/60 second for CHI_LO, Δt = 1/420 second for CHI_HI,

Q = diag{0, 0, 0, 0, 0, 0, q_v, q_v, q_v, q_w, q_w, q_w} Δt,

q_v = velocity random walk parameter = [5 (cm/sec)/√sec]^2,
q_w = angular rate random walk parameter = [1 (rad/sec)/√sec]^2,
Δt = propagation time step,

x̂_0 = [0, 0, ..., 0]^T,

P_0 = diag{σ_pos^2, σ_pos^2, σ_pos^2, σ_angle^2, σ_angle^2, σ_angle^2, σ_vel^2, σ_vel^2, σ_vel^2, σ_rate^2, σ_rate^2, σ_rate^2},

σ_pos^2 = (10 cm)^2, σ_angle^2 = (1 rad)^2, σ_vel^2 = (10 cm/sec)^2, σ_rate^2 = (1 rad/sec)^2,

r_xyz = (0.05 cm)^2, r_ψθφ = (0.005 rad)^2, and

R = diag{r_xyz, r_xyz, r_xyz, r_ψθφ, r_ψθφ, r_ψθφ}.

To evaluate the performance of the two filter design approaches, we used a sample trajectory simulating repeated head transitions between a head-down orientation
qv = velocity random walk parameter = [S(cm l sec) / sec J , qw = angular rate random walk parameter = [1(radl sec) / sec]Z, et = propagation time step, zo =~o,o,...,o~T, P =diag{a2 ,a2 ,a2 ,62 ,a2 ,a2 ,6 o pos pos pos angle angle angle vel vel vel rate rate rate]' Epos ' (lOcm)2, dangle = (lrad.)2, ave! _ (l Ocm l sec)2 , ' 6ro,e = (lrad.l sec.)2, r~,Z = (O.OScm)2 , r~,g~ _ (O.OOSrad.)2, and R = diag{rxyZ ,rxyz ,rxyZ ,r~re~,r~yg~,rwg~, To evaluate the performance of the two filter design approaches, we used a sample trajectory simulating repeated head transitions between a head-down orientation wo mssos 21 l l 611 ~T~S9'u13672 (looking at instruments) and a head-up orientation (looking out). Expected sensing errors were superimposed on the truth trajectory data to provide measurements of y and z position and the Euler angle 8. The filter designs used this input data to estimate these three quantities and their derivatives.
The following table shows a summary of the root-mean-square estimation error results obtained with the coupled and decoupled filter designs. Since the coupled filter design depends on the knowledge of the distance between the base of the neck (pivot point 17) and the reference point 38 on the head, the sensitivity of mismatch between the true lever arm and the filter's assumed value was also investigated. This table also lists the one-sigma errors on the position and angle measurements provided as input to the filters. Several observations can be made based on the data in the table.
First, the coupled filter substantially outperforms the decoupled design, particularly with regard to the translational velocity estimates. Second, the coupled filter is fairly sensitive to error in the assumed base-of-neck-to-reference-point lever arm 20 length; however, even with a 20 percent mismatch between the assumed and true values, performance was noticeably better than the decoupled design. Third, expected accuracies are better than 0.5 cm in position and 0.5 degree in angle (one-sigma errors).
Case                                      y (cm)   y rate (cm/sec)   z (cm)   z rate (cm/sec)   θ (deg)   θ rate (deg/sec)
Unfiltered (inputs)                        0.45         --            0.05         --             0.36          --
Decoupled filters                          0.44         4.6           0.05         1.9            0.42         12.5
Coupled filter                             0.44         1.9           0.04         0.7            0.38         10.8
Coupled, 10% mismatch in neck lever arm    0.44         2.4           0.04         0.9            0.38         11.0
Coupled, 20% mismatch in neck lever arm    0.44         3.1           0.04         1.1            0.39         11.2

The following listing is a program for Kalman filter 15. Part I shows the program data allocation or storage specification. Several instances of terms include xv(ns) as a state vector, p(ns,ns) as a current estimate of covariance of Kalman filter states, q(ns) as a measurement variance, h(ns) as an observation matrix, rm(nm) as a residual estimate for measurements, and ym(nm) as actual measurements. Part II
is a miscellaneous initialization where various values are inserted. Part III
provides for the in/out interface of the Kalman filter. Part IV provides sensor data values for simulation.
Part V has values provided for various time constants of each of the states of the process. Part VI reveals initialization of the states of the Kalman filter and the covariance matrices. The process noise covariance matrix is set for a fixed propagation time step and nonvarying elements are set for the state transition matrix.
Part VII
involves making initial estimates of noise variance in order to set the initial state of Kalman filter 15. The time loop is started in part VIII and the measurement data (ym) and the truth data (xt) are measured. If the system is in a simulated data mode then simulated truth and measurement data are generated. Otherwise, actual data is read in this part. Part IX involves state propagation and calculates new states based on prior information without the new measurements. Part X provides for covariance propagation. Part XI involves measurement processing (at video rates) wherein the degree of dependency between the sensor data with actual estimates of system states is established and a new observation matrix is calculated. Part XII sets forth a covariance matrix for predicted measurements. Part XIII involves the transferring out of already made calculations via an output array. The x-out symbol is a predicted measurement. It is the quantity which may be used to predict image plane coordinates of the helmet-mounted reflectors 12. x-out is part of the array hxout. Other elements of hxout are relevant only to the simulations such as truth data, errors relative to truth, and estimates of process noise.
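Part XI processes the six measurements one at a time, so each update is the familiar scalar Kalman form. A compact restatement of what that part of the listing does is sketched below, with a linear observation row h for brevity (the listing evaluates the predicted measurement nonlinearly for the three position channels); the local work arrays assume ns is no larger than 50, and all names here are illustrative.

c     One scalar Kalman measurement update: observation row h,
c     measurement variance r, measurement ym, state xv and covariance
c     p of dimension ns (ns assumed no larger than 50).
      subroutine kupd (xv, p, h, r, ym, ns)
      integer ns, i, j
      real xv(ns), p(ns,ns), h(ns), r, ym
      real tv(50), ck(50), yhat, res, rescov
      yhat = 0.
      do 10 i = 1, ns
         yhat = yhat + h(i)*xv(i)
   10 continue
      res = ym - yhat
      do 30 i = 1, ns
         tv(i) = 0.
         do 20 j = 1, ns
            tv(i) = tv(i) + p(i,j)*h(j)
   20    continue
   30 continue
      rescov = r
      do 40 i = 1, ns
         rescov = rescov + h(i)*tv(i)
   40 continue
      do 50 i = 1, ns
         ck(i) = tv(i)/rescov
         xv(i) = xv(i) + ck(i)*res
   50 continue
      do 70 i = 1, ns
         do 60 j = 1, ns
            p(i,j) = p(i,j) - ck(i)*tv(j)
   60    continue
   70 continue
      return
      end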
Subpart A of part XIV reveals a subroutine which is called by the program and uses a mathematical approach to create noise. Subpart B is another subroutine which is a mathematical matrix utility which multiplies a square matrix by a line vector or linear chain of data. Subpart C is a mathematical matrix utility that multiplies matrices.
Subpart D is a subroutine that transposes one matrix and then multiplies it with another.
Subpart E is a subroutine that provides the dot product of vectors.
wo mssos 217 7 611 p~/pgg4/13672 w I. program htkfl2 parameter (ns=12) parameter (nm=6) double precision time character*128 argl,arg2 dimension xv(ns),p(ns,ns),q(ns),h(ns),rm(nm),ym(nm) dimension phi hi(ns,ns),phi_to(ns,ns),ck(ns),tv(ns),ta(ns,ns) dimension hxout(5,6),xt(6) dimension x_out(6),cc(6,6),p out(6,6),error(3,6) data rad2deg/57.29578/
data twopi/6.28319/
II. Miscellaneous Initialization dt hi = 1./420.
dt_lo = 1./60.
tmax = 5.0 rn = 20.
k=6 isimdat = 0 III. File I/O
if(isimdat.eq.0) then call getarg( l ,arg 1 ) 3 5 open( l ,file=arg l ,form='unformatted') read(1) call getarg(2,arg2) open(2,file=arg2,form='unformatted') read(2) endif open(3,file='out',form='unformatted') write(3) 5,6,1,1 open(4,file='err',form='unformatted') write(4) 3,6,1,1 wo 9snssos 21 l 7 611 PCT/US94113672 open(7,file='states',form='unformatted') write(7) 1,12,1,1 IV. Simulated Sensor Data Parameters (sinusoids in x/y/z/phi/tht/psi) (simulated random error one-sigma values in cm, radians) if(isimdat.eq.l) then amp x = 5.
amp_y = 5.
amp z = 2.
amp~hi = 20./rad2deg 1 S amp tht = 20./rad2deg amp-psi = 20./rad2deg w x = twopi/2.
w~ = twopi/2.
w z = twopi/2.
w~hi = twopi/2.
w tht = twopi/2.
w-psi = twopi/2.
phs x = 0./rad2deg phs_,y = 60./rad2deg phs_z = 120./rad2deg phs~hi = 180./rad2deg phs tht = 240./rad2deg phs~si = 300./rad2deg sig~os = 0.05 sig_ang = 0.005 endif V. These values for process and meal. noise give about 0.02 sec lag for measurements processed every 1 /60 sec.
vrw = 5.*25.
raterw = 1.0*25.
pos rand = 0.05 ang rand = 0.005 These values for process and meas. noise give about 0.5 sec lag for measurements processed every 1 /60 sec.
vrw = 5./25.
raterw = 1.0/25.
pos_rand = 0.05 ang_rand = 0.005 wo mssos 217 7 611 p~'~S'~13672 These values for process and measurement noise give about 0.1 sec lag for measurements processed every 1/60 sec.
vrw = 5.
raterw = 1.0 pos rand = 0.05 ang rand = 0.005 VI. Initialize Kalman Filter State Vector (xv) and Covariance Matrix (p).
Set Process Noise Covariance Matrix (q) for Fixed Propagation Time Step.
Define State Transition Matrix (phi_hi for high rate, phi_lo for video frame rate).
do 10 i=l,ns xv(i) = 0.
q(i) = 0.
do 5 j=l,ns p(i~j) = 0.
phi hi(i,j) = 0.
phi_lo(i,j) = 0.
5 continue 10 continue spos = 10.
svel = 10.
sangle = 1.0 srate = 1.0 p(1,1) = spos*spos p(2,2) = spos*spos p(3,3) = spos*spos p(4,4) = sangle*sangle p(5,5) = sangle*sangle p(6,6) = sangle*sangle p(7,7) = svel*svel p(8,8) = svel*svel p(9,9) = svel*svel p( 10,10) = srate* srate p( 11,11 ) = srate* srate p(12,12) = srate*srate q(7) = vrw*vrw*dt to q(8) = vrw*vrw*dt to q(9) = vrw*vrw*dt to q(10) = raterw*raterw*dt to q(11) =raterw*raterw*dt to q(12) = raterw*raterw*dt to WO 95/15505 217 7 611 p~ypS941136'72 do 20 i=l,ns phi hi(i,i) = 1.0 phi lo(i,i) = 1.0 S 20 continue phi hi(1,7) = dt hi phi hi(2,8) = dt hi phi hi(3,9) = dt hi phi hi(4,10) = dt hi phi hi(5,11 ) = dt hi phi hi(6,12) = dt hi phi 10(1,7) = dt to phi-lo(2,8) = dt to phi 10(3,9) = dt to phi 10(4,10) = dt to phi lo(5,11) = dt to phi 10(6,12) = dt to VII. Measurement Noise Variance rm(1) = pos rand**2 rm(2) = pos rand**2 rm(3) = pos rand**2 rm(4) = ang rand**2 rm(5) = ang rand**2 rm(6) = ang _rand**2 VIII. Start Time Loop; Read Measurement Data (ym) and Truth Data (xt) (if in simulated data mode, generate simulated truth and measurement data) 3 5 3 0 time = time + dt_hi k=k+1 if(isimdat.eq.l) then xs = amp x*cos(w x*time + phs x) ys = amps*cos(w_y*time + phs~) zs = amp z*cos(w z*time + phs_z) phis = amp~hi*cos(w_phi*time + phs~hi) thts = amp tht*cos(w tht*time + phs tht) psis = amp~si*cos(w~si*time + phs~si) sphis = sin(phis) cphis = cos(phis) sthts = sin(thts) cthts = cos(thts) wo 9snssos 217 l 611 ~'CT~s94/13672 spsis = sin(psis) cpsis = cos(psis) xt(1) = xs - 0.13 * (cpsi*sphi*stht-spsi*ctht) +
* 0.2 * (cpsi*sphi*ctht+spsi*stht) S xt(2) = ys - 0.13 * (spsi*sphi*stht+cpsi*ctht) +
* 0.2 * (spsi*sphi*ctht-cpsi*stht) xt(3 ) = zs - 0.13 * cphi * stht + 0.2 * cphi * ctht xt(4) = phis xt(5) = thts xt(6) = psis if (k.ge.7) then k=0 imeas = 1 ym( 1 ) = xt( 1 ) + sig~os*rnormQ .
1 S ym(2) = xt(2) + sig_pos*rnorm() ym(3) = xt(3) + sig~os*rnormQ
ym(4) = xt(4) + sig ang*rnormQ
ym(5) = xt(5) + sig_ang*rnorm() ym(6) = xt(6) + sig ang*rnorm() else imeas = 0 endif else if (k.ge.7) then k=0 imeas = 1 read(l,end=200) dum,(ym(n),n=1,6) read(2,end=200) dum,(xt(n),n=1,6) else imeas = 0 endif endif IX. State Propagation (at high rate; 420 Hz here) call mvau (phi hi,xv,tv,ns,ns,ns,ns,ns) do 50 i=l,ns xv(i) = tv(i) continue X. Covariance Propagation (only done at video update rate) (Note: real-time application may be used as fixed gain filter, with enormous savings in throughput requirements. If not, two improvements can be considered: ( 1 ) change to U-D mechanization for better wo mssos 217 7 611 PCT/IJS94/13672 numerical properties, and (2) take advantage of all the sparseness in the state transition matrix for reduced throughput needs.) if(imeas.eq.l) then S
call mab (phi lo,p,ta,ns,ns,ns,ns,ns,ns) call mabt(ta,phi_lo,p,ns,ns,ns,ns,ns,ns) do 60 i=l,ns p(i,i) = p(i,i) + q(i) 60 continue XI. Measurement Processing (at video rate) do 110 m=l,nm do 70 i=l,ns h(i) = 0.
70 continue if(m.le.3) then phi = xv(4) tht = xv(5) psi = xv(6) sphi = sin(phi) cphi = cos(phi) stht = sin(tht) ctht = cos(tht) spsi = sin(psi) cpsi = cos(psi) endif if(m.eq.l) then yhat = xv(1) - 0.13 * (cpsi*sphi*stht-spsi*ctht) +
* 0.2 * (cpsi*sphi*ctht+spsi*stht) h(1) = 1.0 h(4) _ -0.13 * (cpsi*sphi*ctht+sphi*stht) +
* 0.2 * (-cpsi*sphi*stht+spsi*ctht) h(S) _ -0.13 * (cpsi*cphi*stht) +
* 0.2 * (cpsi*cphi*ctht) h(6) = 0.13 * (spsi*sphi*stht+cphi*ctht) +
* 0.2 * (-spsi*sphi*ctht+cpsi*stht) elseif(m.eq.2) then yhat = xv(2) - 0.13 * (spsi*sphi*stht+cpsi*ctht) +
* 0.2 * (spsi*sphi*ctht-cpsi*stht) wo 95/1sS05 217 l 611 PGT/US94/13672 h(2) = 1.0 h(4) _ - 0.13 * (spsi*sphi*ctht-cpsi*stht) +
* 0.2 * (-spsi * sphi * stht-cpsi * ctht) h(S) _ - 0.13 * (sphi*cphi*stht) +
* 0.2 * (spsi*cphi*ctht) h(6) _ - 0.13 * (cphi*sphi*stht-spsi*ctht) +
* 0.2 * (cpsi*sphi*ctht+spsi*stht) elseif(m.eq.3) then yhat = xv(3 ) - 0.13 * cphi * stht + 0.2 * cphi * ctht h(3) = 1.0 h(4) _ - 0.13 * cphi * ctht - 0.2 * cphi * stht h(5) = 0.13 * sphi*stht - 0.2 * sphi*ctht elseif(m.eq.4) then yhat = xv(4) h(4) = 1.0 elseif(m.eq.5) then yhat = xv(5) h(5) = 1.0 elseif(m.eq.6) then yhat = xv(6) h(6) = 1.0 endif do 80 j=1,6 cc(m~j) = h(j) 80 continue r = rm(m) res = ym(m) - yhat call mvau (p,h,tv,ns,ns,ns,ns,ns) rescov = dotuv(h,tv,ns,ns,ns) + r do 90 i=l,ns ck(i) = tv(i)/rescov xv(i) = xv(i) + ck(i)*res 90 continue wo mssos 217 l 611 PCT/US94/136?2 do 100 i=l,ns do 95 j=l,ns p(i~j) = P(i~l) - ri'(i)*ck(j) 95 continue if(p(i,i) .le. 0.0) write(6,*)'Neg Cov, i =',i 100 continue 110 continue XII. Covariance matrix for predicted measurements (p-out = cc * p * cc') call mab(cc,p,ta,6,6,6,6,ns,ns) call mabt(ta,cc,p out,6,6,6,ns,6,6) endif XIII. Output Arrays (x_out is the predicted measurement. It is the quantity which would be used to predict image plane coordinates of the helmet-mounted reflectors.) x out is part of the array hxout. The other elements of hxout are really only relevant to simulations (ie. truth data, errors relative to truth, etc.) phi = xv(4) tht = xv(5) psi = xv(6) sphi = sin(phi) cphi = cos(phi) stht = sin(tht) ctht = cos(tht) spsi = sin(psi) cpsi = cos(psi) x out(1) = xv(1) - 0.13 * (cpsi*sphi*stht-spsi*ctht) +
* 0.2 * (cpsi*sphi*ctht+spsi*stht) x out(2) = xv(2) - 0.13 * (spsi*sphi*stht+cpsi*ctht) +
* 0.2 * (spsi*sphi*ctht-cpsi*stht) x out(3) = xv(3) - 0.13 * cphi*stht + 0.2 * cphi*ctht x out(4) = xv(4) x_out(S) = xv(5) x out(6) = xv(6) do 120 j=1,6 hxout( 1 ~j ) = x out(j ) hxout(2~j) = xt(j) hxout(3 ~j ) = ym(j ) WO 95/15505 217 7 611 pG'1'~594~13672 hxout(4~j) = x out(j) - xt(j) hxout(5 ~j ) = ym(j ) - xt(j ) 120 continue S c if(imeas.eq. l ) write(3) sngl(time),hxout write(3) sngl(time),hxout if(imeas.eq. l ) write(7) sngl(time),xv do 130 j=1,6 error( 1 ~j ) = x out(j ) - xt(j ) 130 continue do 140 j=1,6 error(2~j) = sqrt(p_out(j~j)) error(3 ~j ) =-sqrt(p-out(j ~j )) 140 continue if(imeas.eq. l ) write(4) sngl(time),error if(time.lt.tmax) go to 30 200 continue stop end XIV. rnorm A. function rnorm () This function generates zero-mean, unit-variance, uncorrelated Gaussian random numbers using the UNIX uniform [0,1 ] random number generator 'rand'.
rnorm = sqrt(-2.*alog(rand(0)))*cos(6.283185307*rand(0)) return end B. subroutine mvau Function : multiply a matrix and a vector to produce a vector:
v=a*u Note: v cannot be u.
Inputs pCTIUS94113672 wo 9siissos 217 7 611 a input matrix a input vector m row dimension of input matrix and output vector effective in the operation n column dimension of input matrix and dimension of input vector effective in the operation mra actual row dimension of input matrix mru actual dimension of input vector mrv actual dimension of output vector Outputs v output vector subroutine mvau (a,u,v,m,n,mra,mru,mrv) dimension a(mra, l ),u(mru),v(mrv) double precision sum do 20 i = l,m sum = O.dO
dolOj=l,n sum = sum + a(i~j)*u(j) 10 continue v(i) = sum 20 continue return end C. subroutine mab Function : perform matrix multiplication c = a * b Inputs a input matrix b input matrix m row dimension of a for the purpose of matrix multiplication 1 column dimension of a also row dimension of b for the purpose of matrix multiplication n column dimension of b for the purpose of matrix multiplication mra actual row dimension of a mrb actual row dimension of b mrc actual row dimension of c Outputs c matrix product of a and b Note:
S c cannot be a or b.
subroutine mab (a,b,c,m,l,n,mra,mrb,mrc) dimension a(mra, l ),b(mrb, l ),c(mrc, l ) double precision sum do30i=l,m do20j=l,n sum = O.dO
do 10 k = 1,1 sum = sum + a(i,k)*b(k~j) 10 continue c(i~j) = sum continue continue return end D. subroutine mabt Function : perform matrix multiplication c = a * trans(b) Inputs a input matrix b input matrix m row dimension of a for the purpose of matrix multiplication 1 column dimension of a also row dimension of b for the purpose of matrix multiplication n column dimension of b for the purpose of matrix multiplication mra actual row dimension of a mrb actual row dimension of b mrc actual row dimension of c Outputs c matrix product of a and txans(b) Note:
c cannot be a or b.
subroutine mabt (a,b,c,m,l,n,mra,mrb,mrc) dimension a(mra, l ),b(mrb, l ),c(mrc, l ) double precision sum 5 do30i=l,m do20j=l,n sum = O.dO
do 10 k = 1,1 sum = sum + a(i,k)*b(j,k) 10 10 continue c(i~j) = sum 20 continue 30 continue 15 return end E. function dotuv Function : perform the dot product of two vectors Inputs a input vector v input vector n dimension of u,v over which dot product is performed mru dimension of a mrv dimension of v Outputs value of function = dot product (u,v) function dotuv (u,v,n,mru,mrv) dimension u(mru),v(mrv) double precision sum sum = O.dO
do 10 i = l ,n sum = sum + u(i)*v(i) 10 continue dotuv = sum return end
Claims
1. A videometric tracking system (10) comprising:
a light source (14) for emitting light;
a constellation (45) of at least three reflectors (12) situated on a helmet (13), for reflecting the light from said light source (14); and a video camera for detecting the light reflected from said constellation (45);
characterized by:
a subpixel spot location estimator algorithmic processor (29), implemented in a processor which utilizes video signals from said camera (11) to establish image plane coordinates of said constellation (45) to within a fraction of a pixel;
a helmet (13) three-dimensional (3-D) location and line of sight (LOS) calculation algorithmic processor (33) which utilizes the image plane coordinates calculated by said subpixel spot location estimator algorithmic processor (29); and a filter (15) implemented in a processor, connected to said 3-D location and LOS
calculation algorithmic processor (33), for determining location of said constellation (45) to a certain degree of precision and estimating the helmet (13) 3-D location and body coordinate LOS at a rate faster than an actual measurement rate, via a state propagation.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/159,391 | 1993-11-30 | ||
US08/159,391 US5424556A (en) | 1993-11-30 | 1993-11-30 | Gradient reflector location sensing system |
PCT/US1994/013672 WO1995015505A1 (en) | 1993-11-30 | 1994-11-30 | Gradient reflector location sensing system |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2177611A1 CA2177611A1 (en) | 1995-06-08 |
CA2177611C true CA2177611C (en) | 2004-11-16 |
Family
ID=22572418
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002177611A Expired - Fee Related CA2177611C (en) | 1993-11-30 | 1994-11-30 | Gradient reflector location sensing system |
Country Status (6)
Country | Link |
---|---|
US (1) | US5424556A (en) |
EP (1) | EP0731926B1 (en) |
JP (1) | JP3875991B2 (en) |
CA (1) | CA2177611C (en) |
DE (1) | DE69417058T2 (en) |
WO (1) | WO1995015505A1 (en) |
Families Citing this family (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3415270B2 (en) | 1994-06-03 | 2003-06-09 | ソニー株式会社 | Image signal encoding method and decoding method |
US5796386A (en) * | 1995-01-23 | 1998-08-18 | International Business Machines Corporation | Precise calibration procedure for sensor-based view point control system |
US5684498A (en) * | 1995-06-26 | 1997-11-04 | Cae Electronics Ltd. | Field sequential color head mounted display with suppressed color break-up |
US5764202A (en) * | 1995-06-26 | 1998-06-09 | Cae Electronics Ltd. | Suppressing image breakup in helmut mounted displays which use temporally separated bit planes to achieve grey scale |
US5812266A (en) * | 1995-12-15 | 1998-09-22 | Hewlett-Packard Company | Non-contact position sensor |
US5886787A (en) * | 1995-12-15 | 1999-03-23 | Hewlett-Packard Company | Displacement sensor and method for producing target feature thereof |
EP0815535B1 (en) * | 1995-12-21 | 2002-10-09 | Koninklijke Philips Electronics N.V. | Directional adaptive noise reduction |
US5861846A (en) * | 1996-02-15 | 1999-01-19 | Minter; Jerry B | Aviation pilot collision alert |
US6459492B1 (en) | 1997-03-14 | 2002-10-01 | Agilent Technologies, Inc. | Non-contact position sensor |
US6292213B1 (en) | 1997-03-30 | 2001-09-18 | Michael J. Jones | Micro video camera usage and usage monitoring |
US6278479B1 (en) | 1998-02-24 | 2001-08-21 | Wilson, Hewitt & Associates, Inc. | Dual reality system |
JP2000012864A (en) * | 1998-06-22 | 2000-01-14 | Semiconductor Energy Lab Co Ltd | Manufacture of semiconductor device |
DE60000537T2 (en) * | 1999-03-01 | 2003-01-30 | Bae Sys Electronics Ltd | HEAD MOTION TRACKING SYSTEM |
DE19930393B4 (en) * | 1999-07-01 | 2004-11-18 | Stanke, Jürgen, Dipl.-Ing. | Procedure for monitoring an area for abnormalities by one or more observers |
US6437759B1 (en) * | 2000-03-07 | 2002-08-20 | L-3 Communications Corporation | Vehicle simulator having head-up display |
US6552698B1 (en) * | 2000-05-02 | 2003-04-22 | Agilent Technologies, Inc. | Immersive display system |
US6822648B2 (en) * | 2001-04-17 | 2004-11-23 | Information Decision Technologies, Llc | Method for occlusion of movable objects and people in augmented reality scenes |
US6597443B2 (en) * | 2001-06-27 | 2003-07-22 | Duane Boman | Spatial tracking system |
US20030169339A1 (en) * | 2001-10-01 | 2003-09-11 | Digeo. Inc. | System and method for tracking an object during video communication |
AU2002337944A1 (en) * | 2001-10-22 | 2003-05-06 | University Of Southern | Extendable tracking by line auto-calibration |
US20030210812A1 (en) * | 2002-02-26 | 2003-11-13 | Ali Khamene | Apparatus and method for surgical navigation |
DE10226398B4 (en) * | 2002-06-13 | 2012-12-06 | Carl Zeiss Ag | Method and device for detecting the position of an object in space |
US7034927B1 (en) | 2002-06-28 | 2006-04-25 | Digeo, Inc. | System and method for identifying an object using invisible light |
US6972787B1 (en) | 2002-06-28 | 2005-12-06 | Digeo, Inc. | System and method for tracking an object with multiple cameras |
US7872635B2 (en) * | 2003-05-15 | 2011-01-18 | Optimetrics, Inc. | Foveated display eye-tracking system and method |
SG115546A1 (en) * | 2003-06-23 | 2005-10-28 | Affineon Technologies Pte Ltd | Computer input device tracking six degrees of freedom |
DE10358017A1 (en) * | 2003-12-11 | 2005-07-21 | Siemens Ag | 3D camera control |
US20060055786A1 (en) * | 2004-03-09 | 2006-03-16 | Viosport | Portable camera and wiring harness |
US7796116B2 (en) | 2005-01-12 | 2010-09-14 | Thinkoptics, Inc. | Electronic equipment for handheld vision based absolute pointing system |
JP4692250B2 (en) * | 2005-12-01 | 2011-06-01 | 株式会社島津製作所 | Head motion tracker |
US8913003B2 (en) | 2006-07-17 | 2014-12-16 | Thinkoptics, Inc. | Free-space multi-dimensional absolute pointer using a projection marker system |
JP4656017B2 (en) * | 2006-07-25 | 2011-03-23 | 株式会社島津製作所 | Head motion tracker device |
US9176598B2 (en) | 2007-05-08 | 2015-11-03 | Thinkoptics, Inc. | Free-space multi-dimensional absolute pointer with improved performance |
JP5594944B2 (en) * | 2008-08-05 | 2014-09-24 | 株式会社島津製作所 | Motion tracker device |
US8963804B2 (en) * | 2008-10-30 | 2015-02-24 | Honeywell International Inc. | Method and system for operating a near-to-eye display |
FR2953604B1 (en) * | 2009-12-04 | 2011-12-02 | Thales Sa | OPTICAL REFLECTOR WITH SEMI-REFLECTIVE BLADES FOR HELMET POSITION DETECTION DEVICE AND HELMET COMPRISING SUCH A DEVICE |
FR2972316B1 (en) * | 2011-03-03 | 2013-03-22 | Thales Sa | ELECTROMAGNETIC TRANSMITTER SIMULTANEOUSLY EMITTING INTO THREE ORTHOGONAL AXES FOR DETECTION OF POSITION AND OBJECT ORIENTATION |
EP2503356A1 (en) * | 2011-03-23 | 2012-09-26 | BAE Systems Plc | Object Tracking |
WO2012127220A1 (en) * | 2011-03-23 | 2012-09-27 | Bae Systems Plc | Object tracking |
US10061058B2 (en) | 2014-05-21 | 2018-08-28 | Universal City Studios Llc | Tracking system and method for use in surveying amusement park equipment |
US10207193B2 (en) | 2014-05-21 | 2019-02-19 | Universal City Studios Llc | Optical tracking system for automation of amusement park elements |
US9600999B2 (en) | 2014-05-21 | 2017-03-21 | Universal City Studios Llc | Amusement park element tracking system |
KR20160069806A (en) * | 2014-12-09 | 2016-06-17 | 한화테크윈 주식회사 | Distance measuring apparatus and distance measuring method |
CN110140151B (en) * | 2016-12-06 | 2023-08-29 | 皇家飞利浦有限公司 | Apparatus and method for generating light intensity image |
WO2019053721A1 (en) * | 2017-09-14 | 2019-03-21 | Everysight Ltd. | System and method for position and orientation tracking |
WO2019107150A1 (en) * | 2017-11-30 | 2019-06-06 | 株式会社ニコン | Detection device, processing device, installation object, detection method, and detection program |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3917412A (en) * | 1972-04-11 | 1975-11-04 | Us Navy | Advanced helmet tracker using lateral photodetection and light-emitting diodes |
GB1520154A (en) * | 1976-02-24 | 1978-08-02 | Elliott Brothers London Ltd | Apparatus for measuring the angular displacement of a body |
FR2433760A1 (en) * | 1978-08-17 | 1980-03-14 | Thomson Csf | Detector for position of pilot's helmet - uses opto-electronic system giving line of sight for arming system |
FR2487077A1 (en) * | 1980-07-18 | 1982-01-22 | Trt Telecom Radio Electr | DEVICE FOR REMOTELY DETERMINING THE POSITION IN THE SPACE OF AN OBJECT PERFORMING ROTATION MOVEMENTS |
US4649504A (en) * | 1984-05-22 | 1987-03-10 | Cae Electronics, Ltd. | Optical position and orientation measurement techniques |
IL77610A (en) * | 1986-01-15 | 1994-01-25 | Technion Res & Dev Foundation | Single camera three-dimensional head position sensing system |
DE3909762C2 (en) * | 1989-03-23 | 2003-02-06 | Msts Logistik Gmbh & Co | Garbage collection system with a garbage collection vehicle and garbage containers |
US5208641A (en) * | 1990-09-28 | 1993-05-04 | Honeywell Inc. | Laser cavity helmet mounted sight |
IL99385A (en) * | 1990-09-28 | 1994-04-12 | Honeywell Inc | Laser cavity helmet mounted sight |
- 1993
- 1993-11-30 US US08/159,391 patent/US5424556A/en not_active Expired - Lifetime
- 1994
- 1994-11-30 DE DE69417058T patent/DE69417058T2/en not_active Expired - Fee Related
- 1994-11-30 EP EP95903622A patent/EP0731926B1/en not_active Expired - Lifetime
- 1994-11-30 CA CA002177611A patent/CA2177611C/en not_active Expired - Fee Related
- 1994-11-30 WO PCT/US1994/013672 patent/WO1995015505A1/en active IP Right Grant
- 1994-11-30 JP JP51570895A patent/JP3875991B2/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
JPH09506194A (en) | 1997-06-17 |
DE69417058T2 (en) | 1999-10-07 |
CA2177611A1 (en) | 1995-06-08 |
EP0731926A1 (en) | 1996-09-18 |
US5424556A (en) | 1995-06-13 |
DE69417058D1 (en) | 1999-04-15 |
EP0731926B1 (en) | 1999-03-10 |
JP3875991B2 (en) | 2007-01-31 |
WO1995015505A1 (en) | 1995-06-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2177611C (en) | Gradient reflector location sensing system | |
US9720087B2 (en) | Method and device for determining an orientation of an object | |
US5602760A (en) | Image-based detection and tracking system and processing method employing clutter measurements and signal-to-clutter ratios | |
US6704621B1 (en) | System and method for estimating ego-motion of a moving vehicle using successive images recorded along the vehicle's path of motion | |
JP4917206B2 (en) | SAR radar system | |
US6052485A (en) | Fractal features used with nearest neighbor clustering for identifying clutter in sonar images | |
Yi et al. | Optimal sensor and light source positioning for machine vision | |
US20080158226A1 (en) | Imaging model and apparatus | |
WO2004044683A2 (en) | Method for detection and recognition of fog presence within an aircraft compartment using video images | |
Castro et al. | Laser-to-radar sensing redundancy for resilient perception in adverse environmental conditions | |
US20040071316A1 (en) | Method for image recognition in motor vehicles | |
US7400416B2 (en) | Accurate target orientation measuring system | |
Cortés-Pérez et al. | A mirror-based active vision system for underwater robots: From the design to active object tracking application | |
Santos-Victor et al. | Visual-based obstacle detection: a purposive approach using the normal flow |
Ziraknejad et al. | The effect of Time-of-Flight camera integration time on vehicle driver head pose tracking accuracy | |
Krotkov et al. | Adaptive control of cooperating sensors: Focus and stereo ranging with an agile camera system | |
EP3674693B1 (en) | Systems and methods for measuring reflectivity of a painted object | |
Caccia | Vision-based linear motion estimation for unmanned underwater vehicles | |
Clark et al. | An integral formulation for differential photometric stereo | |
Park et al. | Specularity elimination in range sensing for accurate 3D modeling of specular objects | |
Ikeuchi et al. | Modeling sensor detectability and reliability in the configuration space for model-based vision | |
Ryan et al. | Development of techniques required for the application of a laser to three dimensional visual sensing | |
Morgan et al. | A sensor algorithm expert system | |
Bhat et al. | Binocular stereo in the presence of specular reflection | |
Kierkegaard | Estimating circular shape using binocular stereopsis and two-source active illumination |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
MKLA | Lapsed |