CA2613252A1 - Method and apparatus for determining a location associated with an image - Google Patents
Method and apparatus for determining a location associated with an image Download PDFInfo
- Publication number
- CA2613252A1 (application CA002613252A)
- Authority
- CA
- Canada
- Prior art keywords
- image
- information
- determining
- imaging system
- location
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
Abstract
The adverse effects of various sources of error present in satellite imaging when determining ground location information are reduced to provide more accurate ground location information for imagery, thereby rendering the information more useful for various entities utilizing the images. The determination of ground location coordinates associated with one or more pixels/sub-pixels of an image acquired by an imaging system includes obtaining a reference image and obtaining a target image associated with an earth view, where the target image does not overlap the reference image. Known location information associated with the reference image is used to determine location information associated with the target image.
Description
METHOD AND APPARATUS FOR DETERMINING A
LOCATION ASSOCIATED WITH AN IMAGE
FIELD OF THE INVENTION
[0001] The present invention is directed to the determination of ground coordinates associated with imagery and, more particularly, to the translation of compensated coordinate information from one or more images to other images produced by an imaging system.
BACKGROUND
[0002] Remote sensing systems in present day satellite and airborne applications generally provide images that may be processed to include rows of pixels that make up an image frame. In many applications, it is desirable to know the ground location of one or more pixels within an image. For example, it may be desirable to have the ground location of the image pixels expressed in geographic terms such as longitude, latitude and elevation. A number of conventions are used to express the precise ground location of a point. Typically, a reference projection, such as Universal Transverse Mercator (UTM), is specified along with various horizontal and vertical datums, such as the North American Datum of 1927 (NAD27), the North American Datum of 1983 (NAD83), and the World Geodetic System of 1984 (WGS84). Furthermore, for images within the United States, it may be desirable to express the location of pixels or objects within an image in PLSS (Public Land Survey System) coordinates, such as township/range/section within a particular state or county.
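The datums named above fix the reference ellipsoid against which geographic coordinates are expressed. As a minimal illustration of the kind of coordinate conversion such conventions imply, the sketch below converts WGS84 geodetic coordinates to Earth-Centered Earth-Fixed (ECEF) coordinates using the standard WGS84 ellipsoid constants; this example is illustrative only and is not part of the patent's method.

```python
import math

# Standard WGS84 ellipsoid parameters.
WGS84_A = 6378137.0                   # semi-major axis, meters
WGS84_F = 1.0 / 298.257223563         # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)  # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h_m):
    """Convert WGS84 geodetic (lat, lon, height) to ECEF (x, y, z) in meters."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Prime-vertical radius of curvature at this latitude.
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + h_m) * math.cos(lat) * math.cos(lon)
    y = (n + h_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h_m) * math.sin(lat)
    return x, y, z

# A point on the equator at the prime meridian lies on the x-axis,
# one semi-major axis from the geocenter.
x, y, z = geodetic_to_ecef(0.0, 0.0, 0.0)
```

Conversions between such earth-fixed frames and image pixel coordinates underlie the ground-location calculations discussed throughout the description.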
[0003] In order to derive accurate ground location information for an image collected by a remote imaging system and then express it in one of the above listed standards, or other standards, the state of the imaging system at the time of image collection must be known to some degree of certainty. There are numerous variables comprising the state of the imaging system that determine the precise area imaged by the imaging system. For example, in a satellite imaging application, the orbital position of the satellite, the attitude of the imaging system, and various other factors including atmospheric effects and thermal distortion of the satellite or its imaging system, all contribute to the precision to which the area imaged by the imaging system can be determined. Error in the knowledge of each of these factors results in inaccuracies in determining the ground location of areas imaged by the imaging system.
SUMMARY
[0004] The present invention has recognized that many, if not all, of the factors used to generate ground location information for raw image data collected by remote sensing platforms are subject to errors that lead to the derivation of inaccurate ground location information for a related image.
[0005] The present invention reduces the adverse effects of at least one source of error and provides for derivation of more accurate ground location information for imagery, thereby rendering the information more useful for various entities utilizing the images.
Consequently, if an interested entity receives a ground image, the locations of various features within the ground image are known with increased accuracy, thereby facilitating the ability to use such images for a wider variety of applications.
[0006] In one embodiment, the present invention provides a method for determining ground location coordinates for pixels within a satellite image. The method includes the steps of (a) obtaining at least one reference image; (b) locating at least a first pixel in the at least one reference image, the first pixel corresponding to a point having known earth location coordinates; (c) determining an expected pixel location of the point in the first image using at least one of attitude, position and distortion information available for the satellite; (d) calculating at least one compensation factor based on a comparison between the expected pixel location of the point and the known location of the first pixel; (e) obtaining at least one target image of an earth view, the at least one target image not overlapping the at least one reference image; and (f) determining earth location coordinates for at least one pixel within the at least one target image using the compensation factor in conjunction with the attitude, position and distortion information available for the satellite.
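Steps (c) through (f) above can be sketched with a deliberately simplified sensor model. Here each image maps pixel coordinates to ground coordinates through a reported origin and ground sample distance (GSD); the reported origin carries an unknown bias that the ground control point in the reference image exposes. The affine model, pixel values, and bias magnitudes are illustrative assumptions, not the patent's actual sensor geometry.

```python
# Minimal sketch of steps (c)-(f) with a toy affine sensor model.

def expected_ground(pixel, origin, gsd):
    """Ground (x, y) predicted for a pixel from a reported origin and GSD."""
    return (origin[0] + pixel[0] * gsd, origin[1] + pixel[1] * gsd)

# (b) a control point with known ground coordinates appears in the
#     reference image at pixel (100, 200)
known_ground = (500030.0, 4100132.0)
ref_pixel = (100, 200)

# (c) the reported origin is offset from truth (here by +30 m, -12 m),
#     so the predicted ground location of the control point is wrong
reported_origin = (500000.0, 4100000.0)
gsd = 0.6  # meters per pixel
predicted = expected_ground(ref_pixel, reported_origin, gsd)

# (d) compensation factor: offset between known and predicted ground
comp = (known_ground[0] - predicted[0], known_ground[1] - predicted[1])

# (e)-(f) apply the compensation to a pixel in a non-overlapping target
# image, assuming the same bias persists at the target collection time
target_origin = (502000.0, 4103000.0)
target_pixel = (50, 50)
raw = expected_ground(target_pixel, target_origin, gsd)
corrected = (raw[0] + comp[0], raw[1] + comp[1])
```

In the claimed method the compensation is derived from the full attitude, position and distortion state rather than a flat ground offset; the sketch only shows the compare-then-transfer structure of the steps.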
[0007] The compensation factor may be calculated by solving a set of equations relating platform position, attitude, distortion and ground location information for an image, whereby adjustments to one or more of position, attitude and distortion are obtained.
The position, attitude, distortion and ground location information are known to varying levels of accuracy prior to adjustment. Solving the set of equations may also be augmented with a priori accuracy uncertainties in the form of covariance matrices, in order to obtain adjustments to position, attitude and distortion relative to their respective levels of accuracy prior to adjustment. The adjusted attitude, adjusted position or adjusted distortion information, or any combination thereof, is then used as the compensation factor when determining location coordinates for the target image. The target image may be collected by the imaging system before or after the collection of the reference image.
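The effect of augmenting the solution with a priori uncertainties can be shown in one dimension, where the covariance matrices reduce to scalar variances: a prior with small variance pulls the adjustment toward its a priori value, while a loose prior lets the observations dominate. The function name and the numbers below are illustrative assumptions.

```python
# Scalar sketch of least squares augmented with an a priori uncertainty:
# estimate a bias b from misfit observations y_i = b + noise, with a prior
# saying b is near `prior` with standard deviation `sigma_prior`.

def adjust_with_prior(obs, sigma_obs, prior=0.0, sigma_prior=1.0):
    """Minimize sum((y_i - b)^2 / sigma_obs^2) + (b - prior)^2 / sigma_prior^2."""
    w_obs = 1.0 / sigma_obs ** 2
    w_prior = 1.0 / sigma_prior ** 2
    # Scalar normal equation:
    # (n*w_obs + w_prior) * b = w_obs * sum(obs) + w_prior * prior
    return (w_obs * sum(obs) + w_prior * prior) / (len(obs) * w_obs + w_prior)

# Three misfits suggest a bias near 2.0. A loose prior lets the data win;
# a tight prior (small sigma_prior) pulls the estimate back toward zero.
loose = adjust_with_prior([2.0, 2.1, 1.9], sigma_obs=0.1, sigma_prior=10.0)
tight = adjust_with_prior([2.0, 2.1, 1.9], sigma_obs=0.1, sigma_prior=0.01)
```

In the multi-parameter case the same structure holds with the scalar weights replaced by inverse covariance matrices for position, attitude and distortion.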
[0008] Another embodiment of the invention provides a method for determining location information of an earth image from a remote imaging platform. The method includes the steps of: (a) obtaining at least one reference image; (b) obtaining at least one target image associated with an earth view, the at least one target image not overlapping the at least one reference image; and (c) using known location information associated with the at least one reference image to determine location information associated with the at least one target image.
[0009] Yet another embodiment of the invention provides an image of an earth area comprising a plurality of pixels and earth location coordinates of at least one of the pixels.
The pixels and coordinates are obtained by the steps of: (a) obtaining at least one reference image, the at least one reference image comprising a plurality of pixels; (b) locating at least a first pixel in the at least one reference image associated with a point, the point having known earth location coordinates; (c) calculating a compensation factor based on a comparison between an expected pixel location of the point within the at least one reference image and the known location of the first pixel within the at least one reference image; (d) obtaining at least one target image from an earth view, the at least one target image comprising a plurality of pixels and not overlapping the at least one reference image; and (e) determining earth location coordinates for at least one pixel of the at least one target image based on the compensation factor.
[0010] A further embodiment provides a method for transporting an image towards an interested entity over a communications network. The method comprises conveying, over a portion of the communications network, a digital image that includes a plurality of pixels, at least one of the pixels having associated ground location information derived based on a compensation factor that has been determined based on at least one ground point from at least one reference image, wherein the at least one reference image is different than the digital image and the at least one reference image does not overlap the digital image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] FIG. 1 is a diagrammatic illustration of a satellite in an earth orbit obtaining an image of the earth;
[0012] FIG. 2 is a block diagram representation of a satellite of an embodiment of the present invention;
[0013] FIG. 3 is a flow chart illustration of the operational steps for determining location coordinates associated with a satellite image for an embodiment of the present invention;
[0014] FIG. 4 is an illustration of a reference image covering points whose precise locations are known; and
[0015] FIG. 5 is an illustration of a path containing several imaged areas for an embodiment of the present invention.
DETAILED DESCRIPTION
[0016] Generally, the present invention is directed to the determination of ground location information associated with at least one pixel of an image acquired by an imaging system aboard a satellite or other remote sensing platform. The process involved in producing the ground location information includes (a) obtaining one or more images (reference images) of areas covering points whose locations are precisely known, (b) predicting the locations of these points using time-varying position, attitude, and distortion information available for the imaging system, (c) comparing the predicted locations with the known locations using a data fitting algorithm to derive one or more compensation factors, (d) interpolating or extrapolating the compensation factor(s) to other instants in time, and then (e) applying the compensation factor(s) to one or more other images (target images) of areas that do not cover points with the precisely known locations of the reference images. The process can be applied to a target image that does not overlap the reference image, and may also be applied to a target image that does overlap the reference image.
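Step (d) of the process moves a compensation factor, solved at the times of the reference images, to the collection time of a target image. Linear interpolation is one simple choice for this step; the description leaves the interpolation scheme open, so the function below is only an illustrative assumption.

```python
# Sketch of step (d): move a compensation factor solved at two reference
# times to another instant by linear interpolation (or extrapolation when
# the target time lies outside the reference interval).

def interpolate_compensation(t, t0, c0, t1, c1):
    """Linearly interpolate/extrapolate compensation between (t0, c0), (t1, c1)."""
    frac = (t - t0) / (t1 - t0)
    return c0 + frac * (c1 - c0)

# Compensation values (e.g., an attitude bias in microradians) solved from
# reference images collected at t = 0 s and t = 100 s:
c_mid = interpolate_compensation(50.0, 0.0, 10.0, 100.0, 14.0)    # between
c_after = interpolate_compensation(150.0, 0.0, 10.0, 100.0, 14.0)  # beyond
```

Higher-order fits over more reference images would serve the same role when the underlying error drifts nonlinearly over time.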
[0017] Having generally described the process for producing the image and ground location information, an embodiment of the process is described in greater detail. Referring to FIG. 1, an illustration of a satellite 100 orbiting a planet 104 is now described. At the outset, it is noted that, when referring to the earth herein, reference is made to any celestial body of which it may be desirable to acquire images or other remote sensing information having a related location associated with the body. Furthermore, when referring to a satellite herein, reference is made to any spacecraft, satellite, aircraft or other remote sensing platform that is capable of acquiring images. It is also noted that none of the drawing figures contained herein are drawn to scale, and that such figures are for the purposes of illustration only.
[0018] As illustrated in FIG. 1, the satellite 100 orbits the earth 104 following orbital path 108. The position of the satellite 100 along the orbital path 108 may be defined by several variables, including the in-track location, cross-track location, and radial distance location. In-track location relates to the position of the satellite along the orbital path 108 as it orbits the earth 104. Cross-track location relates to the lateral position of satellite 100 relative to the direction of motion in the orbit 108 (relative to FIG. 1, this would be in and out of the page).
Radial distance relates to the radial distance of the satellite 100 from the center of the earth 104. These factors related to the physical position of the satellite are collectively referred to as the ephemeris of the satellite. When referring to "position" of a satellite herein, reference is made to these factors. Also, relative to the orbital path, the satellite 100 may have pitch, yaw, and roll orientations that are collectively referred to as the attitude of the satellite 100. An imaging system aboard the satellite 100 is capable of acquiring an image 112 that includes a portion of the surface of the earth 104. The image 112 is comprised of a plurality of pixels.
[0019] When the satellite 100 is acquiring images of the surface of the earth 104, the associated ground location of any particular image pixel(s) may be calculated based on information related to the state of the imaging system, including the position of the system, attitude of the system, and distortion information, as will be described in more detail below.
The ground location may be calculated in terms of latitude, longitude and elevation, or in terms of any other applicable coordinate system. It is often desirable to have knowledge of the location of one or more features associated with an image from such a satellite, and, furthermore, to have a relatively accurate knowledge of the location of each image pixel.
Images collected from the satellite may be used in commercial and non-commercial applications. The number of applications for which an image 112 may be useful increases with higher resolution of the imaging system, and is further increased when the ground location of one or more pixels contained in the image 112 is known to higher accuracy.
[0020] Referring now to FIG. 2, a block diagram representation of an imaging satellite 100 of an embodiment of the present invention is described. The imaging satellite 100 includes a number of instruments, including a position measurement system 116, an attitude measurement system 120, a thermal measurement system 124, transmit/receive circuitry 128, a satellite movement system 132, a power system 136 and an imaging system 140.
The position measurement system 116 of this embodiment includes a Global Positioning System (GPS) receiver that receives position information from a plurality of GPS
satellites, and is well understood in the art. The position measurement system 116 obtains information from the GPS satellites at periodic intervals. If the position of the satellite 100 is desired to be determined for a point in time between the periodic intervals, the GPS
information from the position measurement system is combined with other information related to the orbit of the satellite to generate the satellite position for that particular point in time. As is typical in such a system, the position of the satellite 100 obtained from the position measurement system 116 contains some amount of error, resulting from the limitations of the position measurement system 116 and associated GPS satellites. In one embodiment, the position of the satellite 100, using data derived and refined from the position measurement system 116 data, is known to within several meters. While this error is small, it is often a relatively significant contributor to uncertainty in ground location associated with pixels in the ground image.
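The combination of periodic GPS fixes with orbit information to recover the position at an intermediate instant can be illustrated with a simple polynomial fit: a quadratic through three fixes captures the local curvature of the trajectory. The Lagrange form below and the sample values are an illustrative sketch, not the satellite's actual orbit-determination scheme.

```python
# Sketch: recover a position coordinate between periodic fixes by fitting a
# quadratic (Lagrange form) through three surrounding measurements.

def lagrange3(t, ts, xs):
    """Quadratic interpolation of x(t) through three samples (ts, xs)."""
    (t0, t1, t2), (x0, x1, x2) = ts, xs
    l0 = (t - t1) * (t - t2) / ((t0 - t1) * (t0 - t2))
    l1 = (t - t0) * (t - t2) / ((t1 - t0) * (t1 - t2))
    l2 = (t - t0) * (t - t1) / ((t2 - t0) * (t2 - t1))
    return x0 * l0 + x1 * l1 + x2 * l2

# Three fixes of one position coordinate sampled from x(t) = t^2; the
# quadratic fit recovers the intermediate value exactly for such motion.
fix_times = (0.0, 10.0, 20.0)
fix_values = (0.0, 100.0, 400.0)
x_at_5 = lagrange3(5.0, fix_times, fix_values)
```

Real orbit propagation uses dynamical models rather than bare polynomials, but the interpolation-between-fixes structure is the same.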
[0021] The attitude measurement system 120 is used in determining attitude information for the imaging system 140. In one embodiment, the attitude measurement system 120 includes one or more gyroscopes that measure angular rate and one or more star trackers that obtain images of various celestial bodies. The location of the celestial bodies within the images obtained by the star trackers is used to determine the attitude of the imaging system 140. The star trackers, in an embodiment, are placed to provide roll, pitch and yaw orientation information for a reference coordinate system fixed to the imaging system 140.
Similarly as described above with respect to the position measurement system 116, the star trackers of the attitude measurement system operate to obtain images at periodic intervals.
The attitude of the imaging system 140 can, and often does, change between these periodic intervals. For example, in one embodiment, the star trackers collect images at a rate of about Hz, although the frequency may be increased or decreased. In this embodiment, the imaging system 140 operates to obtain images at line rates between 7 kHz and 24 kHz, although these frequencies may also be increased or decreased. In any event, the imaging system 140 generally operates at a higher rate than the star trackers, resulting in numerous ground image pixels being acquired between successive attitude measurements from the star trackers. The attitude of the imaging system 140 for time periods between successive images of the star trackers is determined using star tracker information along with additional information, such as angular rate information from the gyroscopes, to predict the attitude of the imaging system 140. The gyroscopes are used to detect the angular rates of the imaging system 140, with this information used to adjust the attitude information for the imaging system 140. The attitude measurement system 120 also has limitations on the accuracy of information provided, resulting in errors in the predicted attitude of the imaging system 140.
While this error is generally small, it is often a relatively significant contributor to uncertainty in ground location associated with pixels in the ground image.
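The prediction of attitude between star-tracker fixes using gyroscope rates can be sketched, under a small-angle assumption, as a per-axis linear propagation from the last fix. Real systems propagate quaternions and filter the gyro data; the function, axis ordering, and numbers below are illustrative assumptions.

```python
# Sketch: predict (roll, pitch, yaw) at an imaging-line time from the last
# star-tracker fix plus gyro angular rates, assuming small angles and
# constant rates over the short interval between fixes.

def propagate_attitude(att_fix, t_fix, rates, t):
    """Propagate each attitude angle linearly: angle + rate * elapsed time."""
    dt = t - t_fix
    return tuple(a + w * dt for a, w in zip(att_fix, rates))

# Star-tracker fix at t = 0 s (radians) and measured gyro rates (rad/s);
# an imaging line collected 0.25 s later needs a predicted attitude.
att = propagate_attitude(
    (0.001, -0.002, 0.0005),  # roll, pitch, yaw at the fix
    0.0,
    (1e-5, 2e-5, -5e-6),      # gyro angular rates per axis
    0.25,
)
```

Errors in the fix and in the integrated rates accumulate over the interval, which is why the text treats predicted attitude as a significant error contributor.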
[0022] The thermal measurement system 124 is used in determining thermal characteristics of the imaging system 140. Thermal characteristics are used, in this embodiment, to compensate for thermal distortion in the imaging system 140. As is well understood, a source of error when determining ground location associated with an image collected by such a satellite-based imaging system 140 is distortion in the imaging system.
Thermal variations monitored by the thermal measurement system 124 are used in this embodiment to compensate for distortions in the imaging system 140. Such thermal variations occur, for example, when the satellite 100, or portions of the satellite 100, move in or out of sunlight due to shadows cast by the earth or other portions of the satellite 100. The difference in energy received at the components of the imaging system 140 results in the components being heated, thereby resulting in distortion of the imaging system 140 and/or changes in the alignments between the imaging system 140 and the position and attitude measurement systems 116 and 120. Such energy changes may occur when, for example, a solar panel of the satellite 100 changes orientation relative to the satellite body and results in the imaging system components being subject to additional radiation from the sun. In addition to reflections from component parts of the satellite 100, and to the satellite 100 moving into and out of the earth's shadow, the reflected energy from the earth itself may cause thermal variations in the imaging system 140. For example, if the portion of the earth that is reflecting light to the imaging system 140 is particularly cloudy, more energy is received at the satellite 100 relative to the energy received over a non-cloudy area, thus resulting in additional thermal distortions. The thermal measurement system 124 monitors changing thermal characteristics, and this information is used to compensate for such thermal distortions. The thermal measurement system 124 has limitations on the accuracy of information provided, resulting in errors in the thermal compensation of the imaging system 140 of the satellite 100. While this error is generally relatively small, when used in determining the ground location of pixels within an image that includes a portion of the surface of the earth, this error also contributes to uncertainty in ground location.
[0023] In addition to thermal distortions from the imaging system 140, atmospheric distortions that increase the error of the imaging system 140 may also be present. Such atmospheric distortions may be caused by a variety of sources within the atmosphere associated with the area being imaged, including heating, water vapor, pollutants, and a relatively high or low concentration of aerosols, to name a few. The image distortions resulting from these atmospheric distortions are a further component of error when determining ground location information associated with an area being imaged by the imaging system 140. Furthermore, in addition to the errors in position, attitude and distortion information, the velocity of the satellite 100 results in relativistic distortions in information received. In one embodiment, the satellite 100 travels at a velocity of about seven and one-half kilometers per second. At this velocity, relativistic considerations, while relatively small, are nonetheless present and, in one embodiment, images collected at the satellite 100 are compensated to reflect such considerations. Although this compensation is performed to a relatively high degree of accuracy, some error is still present because of the relativistic changes. While this error is generally small, it is often a relatively significant contributor to uncertainty in ground location associated with pixels in the ground image.
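The size of the velocity effect described above can be checked with a back-of-the-envelope calculation: the classical aberration angle is approximately v/c, and projecting that angle over the nadir distance gives the apparent ground shift. The 700 km altitude below is an assumed, illustrative value typical of imaging satellites; it is not stated in the patent.

```python
# Order-of-magnitude check on the velocity-aberration ground shift.
C = 299_792_458.0       # speed of light, m/s
v = 7_500.0             # stated satellite speed, about 7.5 km/s
altitude = 700_000.0    # assumed nadir altitude, m (illustrative)

aberration_rad = v / C                   # small-angle aberration, ~25 urad
ground_shift_m = altitude * aberration_rad
```

The resulting shift is on the order of fifteen to twenty meters, which is consistent with the text's observation that this effect, if left uncompensated, would be a significant contributor to ground-location uncertainty.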
[0024] The added error of the position measurement system 116, the attitude measurement system 120, the thermal measurement system 124, atmospheric distortion and relativistic changes results in ground location calculations having a degree of uncertainty that, in one embodiment, is about 20 meters. While this uncertainty is relatively small for typical satellite imaging systems, further reduction of this uncertainty would increase the utility of the ground images for a large number of users, and also enable the images to be used in a larger number of applications.
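Independent error sources such as these are conventionally combined by root-sum-square rather than straight addition. The component magnitudes below are hypothetical values chosen only to show how several modest contributors can combine to a total near the 20 meters cited above; the patent does not break the budget down this way.

```python
import math

# Hypothetical per-source 1-sigma ground-location errors, in meters.
components_m = {
    "position": 4.0,
    "attitude": 14.0,
    "thermal": 8.0,
    "atmospheric": 10.0,
    "relativistic": 2.0,
}

# Root-sum-square combination of independent error sources.
total_m = math.sqrt(sum(e ** 2 for e in components_m.values()))
```

Because the combination is quadratic, shrinking the largest contributor (here attitude) reduces the total far more than shrinking a small one, which is why compensating the dominant errors is worthwhile.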
[0025] The transmit/receive circuitry 128 in this embodiment includes well known components for communications with the satellite 100 and ground stations and/or other satellites. The satellite 100 generally receives command information related to controlling the positioning of the satellite 100 and the pointing of the imaging system 140, various transmit/receive antennas and/or solar panels. The satellite 100 generally transmits image data along with satellite information from the position measurement system 116, attitude measurement system 120, thermal measurement system 124 and other information used for the monitoring and control of the satellite system 100.
[0026] The movement system 132 contains a number of momentum devices and thrust devices. The momentum devices are utilized in control of the satellite 100 by providing inertial attitude control, as is well understood in the art. As is also understood in the art, satellite positions are controlled by thrust devices mounted on the satellite that operate to position the satellite 100 in various orbital positions. The movement system may be used to change the satellite position and to compensate for various perturbations that result from a number of environmental factors such as solar array or antenna movement, atmospheric drag, solar radiation pressure, gravity gradient effects or other external or internal forces.
[0027] The satellite system 100 also contains a power system 136. The power system may be any power system used in generating power for a satellite. In one embodiment, the power system includes solar panels (not shown) having a plurality of solar cells that operate to generate electricity from light received at the solar panels. The solar panels are connected to the remainder of the power system, which includes a battery, a power regulator, a power supply, and circuitry that operates to change the relative orientation of the solar panels with respect to the satellite system 100 in order to enhance power output from the solar panels by maintaining proper alignment with the sun.
[0028] The imaging system 140, as mentioned above, is used to collect images that include all or a portion of the earth's land or water surface. These images may contain one or more natural or manmade features including, but not limited to, buildings, roads, vehicles, geological landmarks, agricultural elements, watercraft or platforms. The imaging system 140, in one embodiment, utilizes a pushbroom type imager operating to collect lines of pixels at an adjustable frequency between 7 kHz and 24 kHz. The imaging system 140 may include a plurality of imagers that operate to collect images in different wavelength bands. In one embodiment, the imaging system 140 includes imagers for red, green, blue and near infrared bands. The images collected from these bands may be combined in order to produce a color image of visible light reflected from the surface being imaged. Similarly, the images from any one band, or combination of bands, may be utilized to obtain various types of information related to the imaged surface, such as agricultural information, air quality information and the like. While four bands of imagery are described above, other embodiments may collect data from sensors with more or fewer bands. In addition, embodiments may collect data from other sensor types, from sensors using active or passive collection technologies, from combinations of sensor types, from sensors with different collection modes or from any remote sensing device from whose data location information can be derived or upon whose data location information can be applied. Examples of sensor types include, but are not limited to, infrared sensors, ultraviolet sensors, radar sensors, lidar sensors and thermal band sensors. Furthermore, embodiments may use these sensor types with active or passive collection technologies.
For example, one embodiment may collect radar imagery using active, bi-static radar technology while another embodiment may collect i.nfrared imagery using passive, CCD imaging technology. In one embodiment, the imaging system comprises a combination of sensor types whose orientations relative to each other are known or measured to a predetermined precision. Other embodiments employ sensors with different collection modes, including but not limited to, spot scanners, whiskbroom imagers, body-mounted fraine cameras and frame cameras using one-axis or two-axis steering lnirrors.
The sensor types, collection technologies, combinations of sensor types and collection modes employed in a given embodiment will depend upon the applications that use the data. In one embodiment, the imaging system 140 includes imagers comprising an array of CCD
pixels, wherein each pixel is capable of acquiring up to 2048 levels of brightness and then representing this brightness with 11 bits of data for each pixel in the image.
In another embodiment, one band of the imaging system 140 is used to image features, such as reefs or other natural or manmade structures, that are on, above or beneatli the ocean surface.
[0029] The control that is registered to the acquired imagery may be at a geopositional accuracy of less than a pixel, and the matching of that control to the acquired imagery may also be at a sub-pixel accuracy. For example, the pixel size of the imagery may be 0.6 meters by 0.6 meters as projected to the ground, and the control on the ground is known to a horizontal accuracy of 0.3 meters CE90 (90th-percentile circular error) and a vertical accuracy of 0.3 meters LE90 (90th-percentile linear error). This accuracy knowledge may be derived from the accuracy of the GPS (Global Positioning System) survey of the location on the ground. That ground location of the control may then be defined on a separate "control chip" itnage of the immediate area surrounding and including the control, where the control chip is itself a small image with pixels that may be of a size of 0.6 meters by 0.6 meters. The defined location of the control on the control chip image may be defined at a sub-pixel level, and so the accuracy of the placement of the control feature on the control chip will be at a sub-pixel level. The control chip is then registered (matched) to the acquired imagery using common feature information in the ovexlap of the chip to the acquired imagery.
The ability of a matcher to do a correlation between the control chip and the acquired imagery over the full coinmon area allows inatching accuracy to be less than the acquired image pixel size.
With the matching of the acquired itnage to the chip at a sub-pixel accuracy level, the accuracy of the placement of the control feature onto the control chip at a sub-pixel accuracy level, and the accuracy of the ground control at a sub-pixel accuracy level, the resulting accuracy of the registration of the control to the image may be at a sub-pixel level. The compensation factor derived from this registration is therefore at a sub-pixel accuracy, and the level of error of subsequently acquired target images will be at a sub-pixel level for some deterinined length of time before and after the capture of reference image.
[0030] Referring now to FIG. 3, the operational steps used in the determination of ground location information for an area imaged by a satellite system are described for an embodiment of the invention. In one embodiment, the satellite collects multiple images along its orbital path. Simultaneously, information is contiguously collected at pre-determined intervals from the position measureinent system, attitude measurement system and thermal measw-ement system. These images, along with position, attitude and thermal information, are sent via one or more ground stations to an image production system where the images and associated position, attitude, and distortion information are processed along with any other known information related to the satellite system. The processing may occur at any time, and may be done at near real-time. In this embodiment, the images include both reference images and target images. As mentioned previously, reference images are images
[0013] FIG. 3 is a flow chart illustration of the operational steps for determining location coordinates associated with a satellite image for an embodiment of the present invention;
[0014] FIG. 4 is an illustration of a reference image covering points whose precise locations are known; and
[0015] FIG. 5 is an illustration of a path containing several imaged areas for an embodiment of the present invention.
DETAILED DESCRIPTION
[0016] Generally, the present invention is directed to the determination of ground location information associated with at least one pixel of an image acquired by an imaging system aboard a satellite or other remote sensing platform. The process involved in producing the ground location information includes (a) obtaining one or more images (reference images) of areas covering points whose locations are precisely known, (b) predicting the locations of these points using time-varying position, attitude, and distortion information available for the imaging system, (c) comparing the predicted locations with the known locations using a data fitting algorithm to derive one or more compensation factors, (d) interpolating or extrapolating the compensation factor(s) to other instants in time, and then (e) applying the compensation factor(s) to one or more other images (target images) of areas that do not cover points with the precisely known locations of the reference images.
The process can be applied to a target image that does not overlap the reference image, and may also be applied to a target image that does overlap the reference image.
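Steps (c) and (e) of the process above can be sketched numerically. The following is a minimal illustration only, assuming the compensation factor reduces to a constant horizontal offset; all function and variable names are illustrative, not taken from the patent.

```python
def fit_compensation(predicted, known):
    """Step (c): average residual between predicted and known ground
    locations of control points in a reference image."""
    n = len(known)
    dx = sum(p[0] - k[0] for p, k in zip(predicted, known)) / n
    dy = sum(p[1] - k[1] for p, k in zip(predicted, known)) / n
    return (dx, dy)

def apply_compensation(location, comp):
    """Step (e): remove the fitted offset from a target-image location."""
    return (location[0] - comp[0], location[1] - comp[1])

# Predicted locations carry a systematic (+2 m, +1 m) error.
predicted = [(102.0, 51.0), (202.0, 151.0)]
known = [(100.0, 50.0), (200.0, 150.0)]
comp = fit_compensation(predicted, known)
corrected = apply_compensation((302.0, 251.0), comp)
```

In practice the fit would model time-varying position, attitude, and distortion terms rather than a single constant offset, as the later paragraphs describe.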
[0017] Having generally described the process for producing the image and ground location information, an embodiment of the process is described in greater detail. Referring to FIG. 1, an illustration of a satellite 100 orbiting a planet 104 is now described. At the outset, it is noted that, when referring to the earth herein, reference is made to any celestial body of which it may be desirable to acquire images or other remote sensing information having a related location associated with the body. Furthermore, when referring to a satellite herein, reference is made to any spacecraft, satellite, aircraft or other remote sensing platform that is capable of acquiring images. It is also noted that none of the drawing figures contained herein are drawn to scale, and that such figures are for the purposes of illustration only.
[0018] As illustrated in FIG. 1, the satellite 100 orbits the earth 104 following orbital path 108. The position of the satellite 100 along the orbital path 108 may be defined by several variables, including the in-track location, cross-track location, and radial distance location. In-track location relates to the position of the satellite along the orbital path 108 as it orbits the earth 104. Cross-track location relates to the lateral position of satellite 100 relative to the direction of motion in the orbit 108 (relative to FIG. 1, this would be in and out of the page).
Radial distance relates to the radial distance of the satellite 100 from the center of the earth 104. These factors related to the physical position of the satellite are collectively referred to as the ephemeris of the satellite. When referring to "position" of a satellite herein, reference is made to these factors. Also, relative to the orbital path, the satellite 100 may have pitch, yaw, and roll orientations that are collectively referred to as the attitude of the satellite 100.
An imaging system aboard the satellite 100 is capable of acquiring an image 112 that includes a portion of the surface of the earth 104. The image 112 is comprised of a plurality of pixels.
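The ephemeris and attitude quantities defined above can be grouped into a small state record. A minimal sketch follows; the field names and units are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Ephemeris:
    """Satellite position factors described above (illustrative units)."""
    in_track_m: float     # position along the orbital path
    cross_track_m: float  # lateral offset relative to the direction of motion
    radial_m: float       # distance from the center of the earth

@dataclass
class Attitude:
    """Orientation of the satellite relative to the orbital path."""
    pitch_deg: float
    yaw_deg: float
    roll_deg: float
```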
[0019] When the satellite 100 is acquiring images of the surface of the earth 104, the associated ground location of any particular image pixel(s) may be calculated based on information related to the state of the imaging system, including the position of the system, attitude of the system, and distortion information, as will be described in more detail below.
The ground location may be calculated in terms of latitude, longitude and elevation, or in terms of any other applicable coordinate system. It is often desirable to have knowledge of the location of one or more features associated with an image from such a satellite, and, furthermore, to have a relatively accurate knowledge of the location of each image pixel.
Images collected from the satellite may be used in commercial and non-commercial applications. The number of applications for which an image 112 may be useful increases with higher resolution of the imaging system, and is further increased when the ground location of one or more pixels contained in the image 112 is known to higher accuracy.
[0020] Referring now to FIG. 2, a block diagram representation of an imaging satellite 100 of an embodiment of the present invention is described. The imaging satellite 100 includes a number of instruments, including a position measurement system 116, an attitude measurement system 120, a thermal measurement system 124, transmit/receive circuitry 128, a satellite movement system 132, a power system 136 and an imaging system 140.
The position measurement system 116 of this embodiment includes a Global Positioning System (GPS) receiver that receives position information from a plurality of GPS satellites, as is well understood in the art. The position measurement system 116 obtains information from the GPS satellites at periodic intervals. If the position of the satellite 100 is to be determined for a point in time between the periodic intervals, the GPS information from the position measurement system is combined with other information related to the orbit of the satellite to generate the satellite position for that particular point in time. As is typical in such a system, the position of the satellite 100 obtained from the position measurement system 116 contains some amount of error, resulting from the limitations of the position measurement system 116 and associated GPS satellites. In one embodiment, the position of the satellite 100, using data derived and refined from the position measurement system 116 data, is known to within several meters. While this error is small, it is often a relatively significant contributor to uncertainty in ground location associated with pixels in the ground image.
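The paragraph above describes estimating the satellite's position between periodic GPS fixes by combining the fixes with orbit knowledge. A minimal sketch, assuming simple linear interpolation between two fixes (a real system would blend the fixes with an orbit model rather than interpolate linearly):

```python
def interpolate_position(t, t0, p0, t1, p1):
    """Estimate the satellite position at time t between two GPS fixes
    (t0, p0) and (t1, p1), each position a 3-tuple of coordinates.
    Linear interpolation is an illustrative simplification."""
    w = (t - t0) / (t1 - t0)
    return tuple(a + w * (b - a) for a, b in zip(p0, p1))
```

For example, halfway between a fix at (0, 0, 7000 km) and one at (10, 0, 7000 km), the estimated position is (5, 0, 7000 km).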
[0021] The attitude measurement system 120 is used in determining attitude information for the imaging system 140. In one embodiment, the attitude measurement system 120 includes one or more gyroscopes that measure angular rate and one or more star trackers that obtain images of various celestial bodies. The location of the celestial bodies within the images obtained by the star trackers is used to determine the attitude of the imaging system 140. The star trackers, in an embodiment, are placed to provide roll, pitch and yaw orientation information for a reference coordinate system fixed to the imaging system 140.
As described above with respect to the position measurement system 116, the star trackers of the attitude measurement system similarly operate to obtain images at periodic intervals.
The attitude of the imaging system 140 can, and often does, change between these periodic intervals. For example, in one embodiment, the star trackers collect images at a rate of about Hz, although the frequency may be increased or decreased. In this embodiment, the imaging system 140 operates to obtain images at line rates between 7 kHz and 24 kHz, although these frequencies may also be increased or decreased. In any event, the imaging system 140 generally operates at a higher rate than the star trackers, resulting in numerous ground image pixels being acquired between successive attitude measurements from the star trackers. The attitude of the imaging system 140 for time periods between successive images of the star trackers is determined using star tracker information along with additional information, such as angular rate information from the gyroscopes, to predict the attitude of the imaging system 140. The gyroscopes are used to detect the angular rates of the imaging system 140, with this information used to adjust the attitude information for the imaging system 140. The attitude measurement system 120 also has limitations on the accuracy of information provided, resulting in errors in the predicted attitude of the imaging system 140.
While this error is generally small, it is often a relatively significant contributor to uncertainty in ground location associated with pixels in the ground image.
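The attitude prediction between star-tracker fixes described above can be sketched as propagating the last measured orientation forward using the gyroscope angular rates. This is a small-angle Euler-rate illustration only; an operational system would integrate quaternions or rotation matrices, and the function name is not from the patent.

```python
def propagate_attitude(attitude_deg, rate_deg_s, dt_s):
    """Propagate (roll, pitch, yaw) from the last star-tracker fix
    using gyroscope angular rates, via a small-angle approximation:
    angle_new = angle_old + rate * elapsed_time."""
    return tuple(a + r * dt_s for a, r in zip(attitude_deg, rate_deg_s))
```

For example, a roll rate of 0.1 deg/s applied for 2 s between star-tracker images moves the predicted roll angle by 0.2 degrees.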
[0022] The thermal measurement system 124 is used in determining thermal characteristics of the imaging system 140. Thermal characteristics are used, in this embodiment, to compensate for thermal distortion in the imaging system 140. As is well understood, a source of error when determining ground location associated with an image collected by such a satellite-based imaging system 140 is distortion in the imaging system.
Thermal variations monitored by the thermal measurement system 124 are used in this embodiment to compensate for distortions in the imaging system 140. Such thermal variations occur, for example, when the satellite 100, or portions of the satellite 100, move in or out of sunlight due to shadows cast by the earth or other portions of the satellite 100. The difference in energy received at the components of the imaging system 140 results in the components being heated, thereby resulting in distortion of the imaging system 140 and/or changes in the alignments between the imaging system 140 and the position and attitude measurement systems 116 and 120. Such energy changes may occur when, for example, a solar panel of the satellite 100 changes orientation relative to the satellite body and results in the imaging system components being subject to additional radiation from the sun. In addition to reflections from component parts of the satellite 100, and to the satellite 100 moving into and out of the earth's shadow, the reflected energy from the earth itself may cause thermal variations in the imaging system 140. For example, if the portion of the earth that is reflecting light to the imaging system 140 is particularly cloudy, more energy is received at the satellite 100 relative to the energy received over a non-cloudy area, thus resulting in additional thermal distortions. The thermal measurement system 124 monitors changing thermal characteristics, and this information is used to compensate for such thermal distortions. The thermal measurement system 124 has limitations on the accuracy of information provided, resulting in errors in the thermal compensation of the imaging system 140 of the satellite 100. While this error is generally relatively small, when used in determining the ground location of pixels within an image that includes a portion of the surface of the earth, this error also contributes to uncertainty in ground location.
[0023] In addition to thermal distortions from the imaging system 140, atmospheric distortions that increase the error of the imaging system 140 may also be present. Such atmospheric distortions may be caused by a variety of sources within the atmosphere associated with the area being imaged, including heating, water vapor, pollutants, and a relatively high or low concentration of aerosols, to name a few. The image distortions resulting from these atmospheric distortions are a further component of error when determining ground location information associated with an area being imaged by the imaging system 140. Furthermore, in addition to the errors in position, attitude and distortion information, the velocity of the satellite 100 results in relativistic distortions in information received. In one embodiment, the satellite 100 travels at a velocity of about seven and one-half kilometers per second. At this velocity, relativistic considerations, while relatively small, are nonetheless present, and in one embodiment, images collected at the satellite 100 are compensated to reflect such considerations. Although this compensation is performed to a relatively high degree of accuracy, some error is still present because of the relativistic changes. While this error is generally small, it is often a relatively significant contributor to uncertainty in ground location associated with pixels in the ground image.
[0024] The combined error of the position measurement system 116, the attitude measurement system 120, thermal measurement system 124, atmospheric distortion and relativistic changes results in ground location calculations having a degree of uncertainty that, in one embodiment, is about 20 meters. While this uncertainty is relatively small for typical satellite imaging systems, further reduction of this uncertainty would increase the utility of the ground images for a large number of users, and also enable the images to be used in a larger number of applications.
[0025] The transmit/receive circuitry 128 in this embodiment includes well known components for communications between the satellite 100 and ground stations and/or other satellites. The satellite 100 generally receives command information related to controlling the positioning of the satellite 100 and the pointing of the imaging system 140, the various transmit/receive antennas and/or the solar panels. The satellite 100 generally transmits image data along with satellite information from the position measurement system 116, attitude measurement system 120, thermal measurement system 124 and other information used for the monitoring and control of the satellite system 100.
[0026] The movement system 132 contains a number of momentum devices and thrust devices. The momentum devices are utilized in control of the satellite 100 by providing inertial attitude control, as is well understood in the art. As is also understood in the art, satellite positions are controlled by thrust devices mounted on the satellite that operate to position the satellite 100 in various orbital positions. The movement system may be used to change the satellite position and to compensate for various perturbations that result from a number of environmental factors such as solar array or antenna movement, atmospheric drag, solar radiation pressure, gravity gradient effects or other external or internal forces.
[0027] The satellite system 100 also contains a power system 136. The power system may be any power system used in generating power for a satellite. In one embodiment, the power system includes solar panels (not shown) having a plurality of solar cells that operate to generate electricity from light received at the solar panels. The solar panels are connected to the remainder of the power system, which includes a battery, a power regulator, a power supply, and circuitry that operates to change the relative orientation of the solar panels with respect to the satellite system 100 in order to enhance power output from the solar panels by maintaining proper alignment with the sun.
[0028] The imaging system 140, as mentioned above, is used to collect images that include all or a portion of the earth's land or water surface. These images may contain one or more natural or manmade features including, but not limited to, buildings, roads, vehicles, geological landmarks, agricultural elements, watercraft or platforms. The imaging system 140, in one embodiment, utilizes a pushbroom type imager operating to collect lines of pixels at an adjustable frequency between 7 kHz and 24 kHz. The imaging system 140 may include a plurality of imagers that operate to collect images in different wavelength bands. In one embodiment, the imaging system 140 includes imagers for red, green, blue and near infrared bands. The images collected from these bands may be combined in order to produce a color image of visible light reflected from the surface being imaged. Similarly, the images from any one band, or combination of bands, may be utilized to obtain various types of information related to the imaged surface, such as agricultural information, air quality information and the like. While four bands of imagery are described above, other embodiments may collect data from sensors with more or fewer bands. In addition, embodiments may collect data from other sensor types, from sensors using active or passive collection technologies, from combinations of sensor types, from sensors with different collection modes or from any remote sensing device from whose data location information can be derived or upon whose data location information can be applied. Examples of sensor types include, but are not limited to, infrared sensors, ultraviolet sensors, radar sensors, lidar sensors and thermal band sensors. Furthermore, embodiments may use these sensor types with active or passive collection technologies.
For example, one embodiment may collect radar imagery using active, bi-static radar technology while another embodiment may collect infrared imagery using passive, CCD imaging technology. In one embodiment, the imaging system comprises a combination of sensor types whose orientations relative to each other are known or measured to a predetermined precision. Other embodiments employ sensors with different collection modes, including but not limited to, spot scanners, whiskbroom imagers, body-mounted frame cameras and frame cameras using one-axis or two-axis steering mirrors.
The sensor types, collection technologies, combinations of sensor types and collection modes employed in a given embodiment will depend upon the applications that use the data. In one embodiment, the imaging system 140 includes imagers comprising an array of CCD pixels, wherein each pixel is capable of acquiring up to 2048 levels of brightness and then representing this brightness with 11 bits of data for each pixel in the image.
In another embodiment, one band of the imaging system 140 is used to image features, such as reefs or other natural or manmade structures, that are on, above or beneath the ocean surface.
[0029] The control that is registered to the acquired imagery may be at a geopositional accuracy of less than a pixel, and the matching of that control to the acquired imagery may also be at a sub-pixel accuracy. For example, the pixel size of the imagery may be 0.6 meters by 0.6 meters as projected to the ground, and the control on the ground is known to a horizontal accuracy of 0.3 meters CE90 (90th-percentile circular error) and a vertical accuracy of 0.3 meters LE90 (90th-percentile linear error). This accuracy knowledge may be derived from the accuracy of the GPS (Global Positioning System) survey of the location on the ground. That ground location of the control may then be defined on a separate "control chip" image of the immediate area surrounding and including the control, where the control chip is itself a small image with pixels that may be of a size of 0.6 meters by 0.6 meters. The defined location of the control on the control chip image may be defined at a sub-pixel level, and so the accuracy of the placement of the control feature on the control chip will be at a sub-pixel level. The control chip is then registered (matched) to the acquired imagery using common feature information in the overlap of the chip to the acquired imagery.
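CE90, as used above, is the radius of the circle containing 90% of the horizontal error vectors. A minimal sketch of estimating CE90 from surveyed error samples follows; it is an illustrative empirical-percentile computation, not a method described in the patent.

```python
import math

def ce90(errors_xy):
    """Empirical 90th-percentile circular error: the smallest radius
    containing 90% of the horizontal (dx, dy) error vectors."""
    radii = sorted(math.hypot(dx, dy) for dx, dy in errors_xy)
    k = max(0, math.ceil(0.9 * len(radii)) - 1)
    return radii[k]
```

LE90 would be computed analogously as the 90th percentile of absolute vertical errors.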
The ability of a matcher to perform a correlation between the control chip and the acquired imagery over the full common area allows the matching accuracy to be finer than the acquired image pixel size.
With the matching of the acquired image to the chip at a sub-pixel accuracy level, the accuracy of the placement of the control feature onto the control chip at a sub-pixel accuracy level, and the accuracy of the ground control at a sub-pixel accuracy level, the resulting accuracy of the registration of the control to the image may be at a sub-pixel level. The compensation factor derived from this registration is therefore at a sub-pixel accuracy, and the level of error of subsequently acquired target images will be at a sub-pixel level for some determined length of time before and after the capture of the reference image.
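Sub-pixel matching of the kind described above is commonly achieved by refining an integer-pixel correlation peak. The patent does not specify the matcher, so the following is a hedged one-dimensional sketch using parabolic interpolation through the peak correlation score and its two neighbors.

```python
def subpixel_peak(scores):
    """Refine an integer-index correlation peak to sub-pixel precision
    by fitting a parabola through the peak and its two neighbors.
    `scores` is a 1-D list of correlation values along one axis."""
    i = max(range(len(scores)), key=scores.__getitem__)
    if i == 0 or i == len(scores) - 1:
        return float(i)  # peak at the edge: no neighbors to fit
    a, b, c = scores[i - 1], scores[i], scores[i + 1]
    denom = a - 2 * b + c
    if denom == 0:
        return float(i)  # flat neighborhood: keep the integer peak
    return i + 0.5 * (a - c) / denom
```

A symmetric correlation curve yields the integer peak position; an asymmetric curve shifts the estimate toward the higher-scoring neighbor, which is how matching accuracy finer than one pixel is obtained.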
[0030] Referring now to FIG. 3, the operational steps used in the determination of ground location information for an area imaged by a satellite system are described for an embodiment of the invention. In one embodiment, the satellite collects multiple images along its orbital path. Simultaneously, information is continuously collected at pre-determined intervals from the position measurement system, attitude measurement system and thermal measurement system. These images, along with position, attitude and thermal information, are sent via one or more ground stations to an image production system where the images and associated position, attitude, and distortion information are processed along with any other known information related to the satellite system. The processing may occur at any time, and may be done in near real-time. In this embodiment, the images include both reference images and target images. As mentioned previously, reference images are images
that overlap one or more ground points having location coordinates that are known to a high degree of accuracy, and target images are images that do not overlap ground points having location coordinates that are known to a high degree of accuracy. In the embodiment of FIG. 3, the position of the satellite is determined for a first reference image, as indicated at block 200. The position, as described above, includes information related to the orbital position of the satellite at the time the first reference image was collected, and includes in-track information, cross-track information, and radial distance information.
The position may be determined using information from the position measurement system and other ground information used to improve the overall position knowledge. At block 204, the attitude information for the imaging system is determined. The attitude of the imaging system, as previously discussed, includes the pitch, roll and yaw orientation, relative to the orbital path, of a reference coordinate system fixed to the imaging system. When determining the attitude information, information is collected from various attitude measurement system components. This information is analyzed to determine the attitude of the imaging system. At block 208, the distortion information for the imaging system is determined. The distortion information includes known variances in the optic components of the imaging system, along with thermal distortion variations of the optic components as monitored by the thermal measurement system. Also included in the distortion information is distortion from the earth's atmosphere.
[0031] Following the determination of the position, attitude and distortion information, the predicted pixel location of at least one predetermined ground point is calculated, according to block 212. In one embodiment, this predicted pixel location is determined using the position of the imaging system, attitude of the imaging system, and distortion of the imaging system to calculate a ground location of at least one pixel from the image.
Specifically, the position provides the location of the imaging system above the earth's surface, the attitude provides the direction from which the imaging system is collecting images, and
the distortion provides the amount by which the light rays are skewed from what they would be if there were no thermal, atmospheric or relativistic effects. The position of the imaging system, along with the direction in which the imaging system is pointed, and the effects of distortion on the imaging system result in a theoretical location on the earth's surface that produced the light received by the imaging system. This theoretical location is then further adjusted based on surface features of the location on the earth's surface, such as mountainous terrain. This additional calculation is made, and the predicted pixel location is produced.
[0032] Following the determination of the predicted pixel location of each predetermined ground point in the reference image, a compensation factor is calculated for one or more of the position, attitude and distortion information based on a comparison between the predicted pixel location of each predetermined ground point in the reference image and the actual pixel location of each predetermined ground point, as indicated at block 216. The calculation of the compensation factor(s) will be described in more detail below.
[0033] Following the calculation of the compensation factor(s), the ground location of at least one pixel in other images collected by the imaging system may be computed using the compensated attitude, position and/or distortion information. In the embodiment of FIG. 3, the compensation factor(s) are utilized if the location accuracies of the pixels in the target images are better than accuracies achievable using other conventional methods.
As discussed above, the satellite has various perturbations and temperature fluctuations throughout every orbit. Thus, when compensation factor(s) are calculated based on the difference between a predicted pixel location of a predetermined ground point in a reference image and an actual pixel location of the predetermined ground point in the reference image, further changes in the position, attitude or distortion of the imaging system will reduce the accuracy of the compensation factor(s), until, at some point, the ground locations of pixels predicted using standard sensor-derived measurements are more accurate than the ground locations determined using the compensation factor(s). In such a case, the compensation factor(s) may
not be used, and the ground locations of pixels predicted using standard sensor-derived measurements are utilized for ground location information. The ground location of one or more pixels in the second image is determined utilizing the compensation factor(s), as noted at block 220. In this manner, the ground location of images acquired before and/or after acquiring a reference image may be determined to a relatively high degree of accuracy.
Furthermore, if multiple reference images are taken during an orbit while collecting images, it may be possible to determine the ground location of all of the images taken for that orbit utilizing adjustment factors generated from the respective reference images.
[0034] It is noted that the order in which the operational steps are described with respect to FIG. 3 may be modified. For example, the second image may be acquired prior to the reference image being acquired. The compensation factor may be applied to the second image, even though the second image was acquired prior to the acquisition of the reference image. In another embodiment, multiple reference images are taken, and a fitting algorithm is applied to the predicted locations for each predetermined ground point in each image to derive a set of compensation factors for various images acquired between the acquisition of reference images. Such a fitting algorithm may be a least squares fit.
[0035] Referring now to FIG. 4, the determination of the compensation factor(s) for one embodiment of the invention is now described. As discussed previously, the imaging system aboard an imaging satellite acquires a reference image 300, overlapping one or more predetermined ground points. The location on the earth of each predetermined ground point may be expressed in terms of latitude, longitude and elevation, relative to any appropriate datum, such as WGS84. Such a predetermined ground point may be any identifiable natural or artificial feature included in an image of the earth having a known location. Examples of predetermined ground points include, but are not limited to, sidewalk corners, building corners, parking lot corners, coastal features and identifiable features on islands. One consideration in the selection of a predetermined ground point is that it be relatively easy to
identify in an image of the area containing the predetermined ground point. A
point that has a high degree of contrast compared to surrounding area within an image and having a known location is often desirable, although a predetermined ground point may be any point that is identifiable either by a computing system or a human user. In one embodiment, image registration is used to determine the amount of error present in the computed locations of the predetermined ground point. Such image registration may be general feature based, line feature based and/or area correlation based. Area correlation based image registration evaluates an area of pixels around a point, and registers that area to an area of similar size in a control image. The control image has been acquired by a remote imaging platform and has actual area locations known to a high degree of accuracy. The amount of error present between the predicted location for the area and the actual location of the area is used in determining the compensation factors. Feature and line registration identify and match more specific items within an image, such as the edge of a building or a sidewalk.
Groups of pixels are identified that outline or delineate a feature, and that grouping of pixels is compared to the same grouping in a control image. While the above discussion covers registrations performed in pixel space, the same registrations can be accomplished in any other domain whose transformation to and from the pixel domain can be performed with a predetermined precision. For example, the line feature registration can be done in vector space by representing features in the reference image as vectors and registering them to known vectors for features in the control image. Similarly, area correlation registration can be done by representing the area in the reference image as a polygon and then registering the polygon to a known polygon in the control image. In one embodiment, predetermined ground points are selected in locations where the likelihood of cloud cover is reduced, in order to have increased likelihood that the predetermined ground point will be visible when the reference image 300 is collected.
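The area-correlation registration described above can be illustrated with a simple normalized cross-correlation search. The following is a minimal sketch under assumed conditions, not the patented method: the function names and the tiny integer image are invented for the example, and a real system would operate on full-resolution imagery with subpixel refinement.

```python
# Sketch of area-correlation registration: slide a small chip (an area
# of pixels around a predetermined ground point) over a control image
# and keep the offset with the highest normalized cross-correlation.
# All names and data are illustrative, not from the patent.

def ncc(a, b):
    """Normalized cross-correlation of two equal-size 2-D patches."""
    n = len(a) * len(a[0])
    ma = sum(map(sum, a)) / n
    mb = sum(map(sum, b)) / n
    num = da = db = 0.0
    for i in range(len(a)):
        for j in range(len(a[0])):
            x, y = a[i][j] - ma, b[i][j] - mb
            num += x * y
            da += x * x
            db += y * y
    return num / (da * db) ** 0.5 if da and db else 0.0

def register(chip, image):
    """Return the (row, col) offset in `image` where `chip` matches best."""
    ch, cw = len(chip), len(chip[0])
    best_score, best_offset = -2.0, (0, 0)
    for r in range(len(image) - ch + 1):
        for c in range(len(image[0]) - cw + 1):
            window = [row[c:c + cw] for row in image[r:r + ch]]
            score = ncc(chip, window)
            if score > best_score:
                best_score, best_offset = score, (r, c)
    return best_offset
```

The difference between the offset found by such a search and the offset predicted from the position, attitude and distortion information would then feed the compensation-factor calculation.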
[0036] Referring again to FIG. 4, the predicted pixel locations of four predetermined ground points illustrated as A, B, C, and D are determined for the reference image 300. The locations of A, B, C, and D as illustrated in FIG. 4 are the predicted pixel locations of A, B, C, D based on attitude, position, and distortion information for the imaging satellite, and surface location information such as elevation for the earth location. The actual pixel locations of the predetermined ground points, identified as A', B', C', and D', are known a priori to a high degree of accuracy. The difference between the predicted pixel locations and the actual pixel locations is then utilized to determine the compensation factor. In one embodiment, the compensation factor is a modified imaging system attitude. In another embodiment, the compensation factor is a modified imaging system attitude and a modified imaging system position. In yet another embodiment, the compensation factor is a modified imaging system attitude, a modified imaging system position, and modified distortion information. In other embodiments where more than one of the imaging system attitude, position and distortion are compensated, one factor receives more compensation relative to another factor.
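The comparison between predicted and actual pixel locations amounts to computing a residual for each ground point. This is an illustrative sketch only; the point labels match FIG. 4, but the pixel coordinates are invented for the example.

```python
# Residuals between predicted pixel locations (computed from position,
# attitude and distortion information) and the a priori known actual
# pixel locations of ground points. Coordinate values are invented.

def pixel_residuals(predicted, actual):
    """Per-point (line, sample) residual: actual minus predicted."""
    return {k: (actual[k][0] - predicted[k][0],
                actual[k][1] - predicted[k][1])
            for k in predicted}

predicted = {"A": (100.0, 200.0), "B": (150.0, 410.0)}
actual    = {"A": (101.5, 198.0), "B": (151.5, 408.0)}
residuals = pixel_residuals(predicted, actual)
# A near-constant shift across all points suggests a pointing (attitude)
# bias that a compensation factor can absorb.
```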
[0037] The compensation factor is determined, in one embodiment, by solving a set of equations having variables related to position of the imaging system, attitude of the imaging system, distortion of the imaging system and the ground location of images acquired by the imaging system. In one embodiment, where imaging system attitude is compensated, the position of the imaging system determined at block 200 in FIG. 3 is assumed to be correct, the distortion of the imaging system determined at block 208 in FIG. 3 is assumed to be correct and the ground location of a pixel corresponding to a predetermined ground point from the reference image is set to be the known location of the predetermined ground point identified in the reference image. The equations are then solved to determine the compensated attitude of the imaging system. This compensated attitude is then used in other images in determining the ground location of pixels within the other images.
[0038] In one embodiment, triangulation is used to compute the compensated imaging system attitude. Triangulation, in this embodiment, is performed using a state-space estimation approach. The state-space approach to the triangulation may utilize least squares, least squares utilizing a priori information, or stochastic or Bayesian estimation such as a Kalman filter. In an embodiment utilizing a basic least squares approach, it is assumed that the position is correct, the distortion is correct and that the ground location associated with a pixel in the reference image corresponding to a predetermined ground point is correct. The attitude is then solved for and utilized as the compensation factor.
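As a toy version of the basic least-squares step, one can assume the attitude error acts as a constant pointing bias that shifts every predicted pixel by the same amount; under that assumed model the least-squares estimate of the bias is simply the mean residual. This is a sketch under an invented model, not the triangulation actually claimed, which solves for attitude in the full imaging geometry.

```python
# Assumed toy model: attitude error = constant pixel shift applied to
# every ground point. The least-squares estimate of such a bias is the
# mean of the (line, sample) residuals.

def least_squares_bias(residuals):
    """residuals: list of (d_line, d_sample) tuples; returns the bias."""
    n = len(residuals)
    d_line = sum(r[0] for r in residuals) / n
    d_samp = sum(r[1] for r in residuals) / n
    return d_line, d_samp

bias = least_squares_bias([(1.0, -2.0), (3.0, 0.0)])
```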
[0039] While the position parameters described above are assumed to be correct, or to have a small covariance, when determining compensated imaging system attitude information, other alternatives may also be used. In the above-described embodiment, imaging system attitude is selected because, in this embodiment, the imaging system attitude is the primary source of uncertainty. By reducing the primary source of uncertainty, the accuracy of the ground locations associated with other images that do not overlap ground control points is increased. In other embodiments, where imaging system attitude is not the primary source of uncertainty, other parameters may be compensated as appropriate.
[0040] In another embodiment, a least squares approach utilizing a priori information is utilized to determine the compensation factor. In this embodiment, the imaging system position, attitude, distortion and pixel location of the predetermined ground point, along with a priori covariance information related to each of these factors, are utilized in calculating the compensation factor. In this embodiment, all of the factors may be compensated, with the amount of compensation to each parameter controlled by their respective covariances.
Covariance is a measure of uncertainty, and may be represented by a covariance matrix. For example, a 3x3 covariance matrix may be used for position of the imaging system, with elements in the matrix corresponding to the in-track, cross-track and radial distance position of the imaging system. The 3x3 matrix includes diagonal elements that are the variance of
the position error for each axis of position information, and the off-diagonal elements are correlation factors between position errors for each element. Other covariance matrices may be generated for imaging system attitude information, distortion information, and the predetermined ground point location.
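A position covariance matrix of the kind described can be sketched as follows. The numeric values are invented for illustration; only the structure (variances on the diagonal, correlation terms off it) reflects the description above.

```python
import math

# Illustrative (invented) 3x3 position covariance in metres^2 for the
# (in-track, cross-track, radial) axes. Diagonal entries are variances;
# off-diagonal entries are covariances between axis errors.
P = [
    [4.0, 0.5, 0.1],
    [0.5, 9.0, 0.2],
    [0.1, 0.2, 1.0],
]

def one_sigma(cov):
    """1-sigma position uncertainty per axis, from the diagonal."""
    return [math.sqrt(cov[i][i]) for i in range(len(cov))]

def is_symmetric(cov):
    """A valid covariance matrix is symmetric."""
    return all(cov[i][j] == cov[j][i]
               for i in range(len(cov)) for j in range(len(cov)))
```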
[0041] Using least squares or a Kalman filter with a priori covariances, compensations are generated for each parameter. In addition, covariances associated with each parameter are also produced. Hence, the a posteriori covariance of the improved attitude, for example, is known using the covariance associated with the attitude corrections.
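The role of the a priori covariance can be seen in a one-dimensional update, where the prior variance and measurement variance together set how much each parameter is corrected and yield a reduced a posteriori variance. This scalar sketch is an invented simplification of the multi-parameter estimation described above.

```python
# Scalar Bayesian/least-squares update with a priori information.
# The gain k weights the measurement by the relative uncertainties;
# the a posteriori variance is smaller than the a priori variance.

def bayes_update(prior, prior_var, meas, meas_var):
    """Return (a posteriori estimate, a posteriori variance)."""
    k = prior_var / (prior_var + meas_var)
    post = prior + k * (meas - prior)
    post_var = (1.0 - k) * prior_var
    return post, post_var
```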
[0042] As discussed previously, in one embodiment multiple reference images are collected from a particular orbit of the imaging system. In this embodiment, as illustrated in FIG. 5, various images are collected within a satellite ground access swath 400. Included in the collected images are a first reference image 404, and a second reference image 408. The reference images 404, 408 are collected from areas within the satellite ground access swath 400 that overlap predetermined ground points. The areas that contain actual predetermined ground points are indicated as cross-hatched control images 406, 410 in FIG.
5. In the example illustrated in FIG. 5, a third image 412 and a fourth image 416 are also acquired, neither of which overlaps any predetermined ground points. Images 412, 416 are target images. In this embodiment, the actual locations of predetermined ground points contained in the first reference image 404 are compared with predicted locations of predetermined ground points contained in the first reference image 404. A first compensation factor is determined based on the difference between the predicted predetermined ground point locations and the actual predetermined ground point locations.
[0043] Similarly, the actual locations of the predetermined ground points contained in the second reference image 408 are compared with predicted locations of predetermined ground points contained in the second reference image 408. A second compensation factor is determined based on the difference between the predicted predetermined ground point
locations and the actual predetermined ground point locations. A combination of the first and second compensation factors, as described above, may then be utilized to determine the ground locations for one or more pixels in each of the target images 412, 416.
[0044] The imaging system of the satellite may be controlled to acquire the various images in any order. For example, the satellite may acquire the third and fourth images 412, 416, and then acquire the first and second reference images 404, 408. In one embodiment, the images are acquired in the following order: the first reference image 404 is acquired, followed by the third image 412, followed by the fourth image 416, and finally the second reference image 408 is acquired. In this example, the compensation factor for the third and fourth images 412, 416 is calculated according to a least squares fit of the first and second compensation factors. If the images were acquired in a different order, it would be straightforward, and well within the capabilities of one of ordinary skill in the art, to calculate compensation factors for the third and fourth images 412, 416 utilizing similar techniques.
[0045] As described above, in one embodiment two or more reference images are collected and utilized to calculate the compensation factor. In this embodiment, triangulation (via the methods described above) is performed on each image independently to determine compensation factors for each. These compensation factors are then combined for use in determining ground locations associated with images for which ground location is determined without using predetermined ground points. The compensation factors may be combined using methods including, but not limited to, interpolation, polynomial fit, simple averaging and covariance weighted averaging. Alternatively, a single triangulation (using the same methods described above) is performed on all the images together, resulting in a global compensation factor that would apply to the entire span of orbit within the appropriate timeframe. This global compensation factor could be applied to any image without using predetermined ground points.
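One of the combination options named above, interpolation, can be sketched as a linear blend in acquisition time between the compensation factors of two reference images. The function name, the time values, and the two-component factor tuples are all invented for illustration; polynomial fit or covariance-weighted averaging would replace the linear weight.

```python
# Linear interpolation (in acquisition time) between the compensation
# factors derived from two reference images. Times and factor values
# are illustrative only.

def interpolate_compensation(t, t1, c1, t2, c2):
    """Blend per-axis factors c1 (at time t1) and c2 (at time t2)
    for a target image acquired at time t, with t1 <= t <= t2."""
    w = (t - t1) / (t2 - t1)
    return tuple((1.0 - w) * a + w * b for a, b in zip(c1, c2))

# A target image acquired midway between the reference images gets
# the average of the two compensation factors.
factor = interpolate_compensation(5.0, 0.0, (0.0, 2.0), 10.0, (4.0, 0.0))
```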
[0046] As mentioned previously, the satellite transmits collected images to at least one ground station located on the earth. The ground station is situated such that the satellite may communicate with the ground station for a portion of an orbit. The images received at a ground station may be analyzed at the ground station to determine location information for the pixels in the images, with this information sent to a user or to a data center (hereinafter referred to as a receiver). Alternatively, the raw data received from the satellite at the ground station may be sent from the ground station to a receiver directly without any processing to determine ground location information associated with images. The raw data, which includes information related to position, attitude and distortion of the imaging system, is contiguously collected at pre-determined intervals and encompasses the ground access swath 400. The raw data may then be analyzed to determine images containing predetermined ground points.
Using the predetermined ground points in those images, along with other information as described above, the ground locations for pixels in other images may be calculated. In one embodiment, the image(s) are transmitted to the receiver by conveying the images over the Internet. Typically, an image is conveyed in a compressed format. Once received, the receiver is able to produce an image of the earth location along with ground location information associated with the image. It is also possible to convey the image(s) to the receiver in other ways. For instance, the image(s) can be recorded on a magnetic disk, CD, tape or other recording medium and mailed to the receiver. If needed, the recording medium can also include the satellite position, attitude, and distortion information.
It is also possible to produce a hard copy of an image and then mail the hard copy to the receiver.
The hard copy can also be faxed or otherwise electronically sent to the receiver.
[0047] While the invention has been particularly shown and described with reference to a preferred embodiment thereof, it will be understood by those skilled in the art that various other changes in the form and details may be made without departing from the spirit and scope of the invention.
Claims (20)
1. A method for determining location information of an earth image, comprising:
obtaining at least one reference image;
obtaining at least one target image associated with an earth view, said at least one target image not overlapping said at least one reference image; and using known location information associated with said at least one reference image to determine location information associated with said at least one target image.
2. The method for determining location information of an earth image as claimed in claim 1, wherein said at least one reference image is associated with either at least one terrestrial feature or at least one celestial object.
3. The method for determining location information of an earth image as claimed in claim 1, wherein said at least one target image is associated with either at least one natural terrestrial feature, at least one artificial terrestrial feature or both.
4. The method for determining location information of an earth image as claimed in claim 1, wherein said determining location information associated with said at least one target image comprises:
determining known location information of at least one celestial object or at least one terrestrial feature imaged by said at least one reference image;
using said known location information and imaging system movement information to determine at least one compensation factor and at least one error measurement associated with said at least one compensation factor for said at least one target image;
and using said at least one compensation factor to determine location information associated with said at least one target image when the magnitude of said at least one error measurement is less than a predetermined error limit.
5. The method for determining location information of an earth image as claimed in claim 4, wherein said predetermined error limit is associated with at least one of attitude, position and distortion measurement information associated with an imaging system.
6. A method for determining location information of an earth image acquired from an imaging system, comprising:
obtaining at least one reference image;
obtaining at least one target image associated with an earth view, said at least one target image not overlapping said at least one reference image;
locating at least a first ground point in said at least one reference image having known earth location information;
determining an expected location of said first ground point in said at least one reference image using at least one of position, attitude and distortion information associated with the imaging system; and calculating at least one compensation factor based on a comparison between said expected location information and said known location information of said first ground point;
and determining location information of said at least one target image based on said at least one compensation factor.
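The core of claim 6 is: compare the expected location of a known ground point in the reference image against its known location, derive a compensation factor from the difference, and apply that factor to a non-overlapping target image. A hedged two-function sketch of that comparison-and-apply flow, with planar coordinates standing in for the full imaging-system geometry (all names are illustrative assumptions):

```python
def compensation_from_reference(known_ground_xy, expected_ground_xy):
    """Compensation factor: known minus expected location of the
    ground point in the reference image."""
    return (known_ground_xy[0] - expected_ground_xy[0],
            known_ground_xy[1] - expected_ground_xy[1])

def locate_target(raw_target_xy, compensation):
    """Apply the compensation factor to a raw geolocation computed
    for a point in the (non-overlapping) target image."""
    return (raw_target_xy[0] + compensation[0],
            raw_target_xy[1] + compensation[1])
```

For example, if the ground point is known to be at (100, 200) but the uncompensated model predicts (98, 203), the compensation (2, -3) shifts every subsequent target-image geolocation by the same offset.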
7. The method for determining location information of an earth image as claimed in claim 6, wherein said at least one reference image is associated with either at least one terrestrial feature or at least one celestial object.
8. The method for determining location information of an earth image as claimed in claim 6, wherein said at least one target image is associated with either at least one natural terrestrial feature, at least one artificial terrestrial feature or both.
9. The method for determining location information of an earth image as claimed in claim 6, wherein calculating said at least one compensation factor comprises:
determining a first position information of the imaging system when said at least one reference image was acquired by the imaging system;
determining a first attitude information of the imaging system when said at least one reference image was acquired by the imaging system;
determining a first distortion information of the imaging system when said at least one reference image was acquired by the imaging system; and solving for said at least one compensation factor for at least one of said first position information, said first attitude information and said first distortion information based on a difference between the location of said first ground point in said at least one reference image and said expected location of said ground point.
10. The method for determining location information of an earth image as claimed in claim 9, wherein determining location information of said at least one target image comprises:
determining a second position information of the imaging system when said at least one target image was acquired by the imaging system, said second position information modified by said at least one compensation factor;
determining a second attitude information of the imaging system when said at least one target image was acquired by the imaging system, said second attitude information modified by said at least one compensation factor;
determining a second distortion information of the imaging system when said at least one target image was acquired by the imaging system, said second distortion information modified by said at least one compensation factor; and determining location information for at least one location in said at least one target image.
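Claim 10 applies the compensation factors to the position, attitude and distortion information recorded at target-image acquisition time. A minimal sketch, with scalar stand-ins for what would really be vector and matrix quantities; the dictionary keys and function name are assumptions for illustration:

```python
def compensate_state(position, attitude, distortion, comp):
    """Modify the imaging system's recorded position, attitude and
    distortion at target-image acquisition by the corresponding
    compensation factors (scalars here for simplicity)."""
    return (position + comp["position"],
            attitude + comp["attitude"],
            distortion + comp["distortion"])
```

The compensated state then feeds the normal geolocation model to produce location information for points in the target image.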
11. The method for determining location information of an earth image as claimed in claim 6, wherein calculating said at least one compensation factor comprises:
determining a first position information and associated covariance of the imaging system when said at least one reference image was acquired;
determining a first attitude information and associated covariance of the imaging system when said at least one reference image was acquired;
determining a first distortion information and associated covariance of the imaging system when said at least one reference image was acquired;
solving for said at least one compensation factor for each of said first position information, said first attitude information and said first distortion information of said imaging system based on the difference between the location of said first ground point in said at least one reference image and said expected location of said ground point, wherein said compensation factors are weighted by their respective covariances.
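Claim 11 weights the per-parameter compensation factors by their respective covariances, so the parameter known least precisely absorbs most of the observed residual. The proportional split below is an illustrative simplification of a full covariance-weighted least-squares solve, not the patent's actual algorithm:

```python
def weighted_corrections(residual, variances):
    """Distribute a scalar residual across corrections for position,
    attitude and distortion, each weighted by its prior variance:
    a larger variance (less certainty) absorbs more of the residual."""
    total = sum(variances)
    return [residual * v / total for v in variances]
```

For instance, with prior variances [1.0, 2.0, 1.0], the middle parameter receives half of the residual and the other two a quarter each.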
12. The method for determining location information of an earth image as claimed in claim 11, wherein determining location information of said at least one target image comprises:
determining a second position information of the imaging system when said at least one target image was acquired by the imaging system, said second position information modified by said at least one compensation factor;
determining a second attitude information of the imaging system when said at least one target image was acquired by the imaging system, said second attitude information modified by said at least one compensation factor;
determining a second distortion information of the imaging system when said at least one target image was acquired by the imaging system, said second distortion information modified by said at least one compensation factor; and determining location information for at least one location in said at least one target image.
13. An image of an earth view comprising a plurality of pixels and earth location coordinates of at least one of said pixels, said plurality of pixels and earth location coordinates obtained by:
obtaining at least one reference image, said at least one reference image comprising a plurality of pixels;
determining a first pixel location of at least a first pixel in said at least one reference image associated with a first ground point having a known earth location;
calculating at least one compensation factor based on a comparison between an expected pixel location of said first ground point and said first pixel location;
obtaining at least one target image of an earth view, said at least one target image comprising a plurality of pixels, and said at least one target image not overlapping said at least one reference image; and determining an earth location for at least one pixel of said at least one target image based on said at least one compensation factor.
14. The image as claimed in claim 13, wherein said earth location is locatable to sub-pixel precision.
15. The image as claimed in claim 13, wherein said ground point comprises a natural feature or an artificial feature.
16. The image as claimed in claim 13, wherein calculating said at least one compensation factor comprises:
determining a first position information and associated covariance of an imaging system when said at least one reference image was acquired;
determining a first attitude information and associated covariance of the imaging system when said at least one reference image was acquired;
determining a first distortion information and associated covariance of the imaging system when said at least one reference image was acquired;
calculating said expected pixel location of said first ground point based on said first position information, said first attitude information and said first distortion information;
determining a difference between said expected pixel location and said first pixel location; and solving for at least one compensation factor for each of a position, attitude and distortion of said imaging system based on said difference, wherein said compensation factors are weighted by their respective covariances.
17. The image as claimed in claim 16, wherein determining an earth location comprises:
determining a second position information of the imaging system when said at least one target image was acquired;
determining a second attitude information of the imaging system when said at least one target image was acquired;
determining a second distortion information of the imaging system when said at least one target image was acquired;
applying said compensation factor to at least one of said second position information, said second attitude information and said second distortion information; and determining an earth location for at least one pixel in said at least one target image.
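Claims 13 and 17 end with assigning earth coordinates to individual target-image pixels after the compensation is applied. A hedged sketch using a simple linear pixel-to-geographic model; the linear model, parameter names, and degree-per-pixel scaling are all assumptions chosen for illustration rather than the patent's actual camera model:

```python
def pixel_to_earth(pixel_rc, origin_lonlat, deg_per_pixel, comp_lonlat):
    """Map a target-image pixel (row, col) to earth (lon, lat) with a
    linear camera model, then apply the compensation derived from the
    reference image. Rows increase southward in this toy model."""
    row, col = pixel_rc
    lon = origin_lonlat[0] + col * deg_per_pixel + comp_lonlat[0]
    lat = origin_lonlat[1] - row * deg_per_pixel + comp_lonlat[1]
    return (lon, lat)
```

Because the compensation enters as a continuous offset, the resulting coordinates are not quantized to the pixel grid, which is consistent with claim 14's sub-pixel precision.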
18. A method for transporting an image towards an interested entity over a communications network, comprising:
conveying, over a portion of the communication network, a digital image of an earth view comprising a plurality of pixels, at least one of said pixels having associated ground location information derived based on at least one compensation factor that has been determined based on at least one ground point from at least one reference image, wherein said at least one reference image is different than said digital image and said at least one reference image does not overlap said digital image.
19. The method of claim 18, wherein said digital image is associated with either at least one natural terrestrial feature, at least one artificial terrestrial feature or both.
20. The method of claim 18, wherein said at least one reference image is associated with either at least one terrestrial feature or at least one celestial object.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2005/022961 WO2006078310A2 (en) | 2004-06-25 | 2005-06-24 | Method and apparatus for determining a location associated with an image |
USPCT/US2005/22961 | 2005-06-24 | ||
PCT/US2005/046749 WO2007001471A2 (en) | 2005-06-24 | 2005-12-23 | Method and apparatus for determining a location associated with an image |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2613252A1 true CA2613252A1 (en) | 2007-01-04 |
Family
ID=37595619
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002613252A Abandoned CA2613252A1 (en) | 2005-06-24 | 2005-12-23 | Method and apparatus for determining a location associated with an image |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1899889A2 (en) |
JP (1) | JP2009509125A (en) |
AU (1) | AU2005333561A1 (en) |
CA (1) | CA2613252A1 (en) |
WO (1) | WO2007001471A2 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5567805B2 (en) * | 2009-08-31 | 2014-08-06 | ライトハウステクノロジー・アンド・コンサルティング株式会社 | Flying object detection method, system, and program |
CN102798381B (en) * | 2012-07-20 | 2014-12-24 | 中国资源卫星应用中心 | Scene division cataloging method based on geographical location of real image |
JP6131568B2 (en) * | 2012-10-30 | 2017-05-24 | 株式会社ニコン | Microscope device and image forming method |
US10048084B2 (en) * | 2016-09-16 | 2018-08-14 | The Charles Stark Draper Laboratory, Inc. | Star tracker-aided airborne or spacecraft terrestrial landmark navigation system |
US10935381B2 (en) | 2016-09-16 | 2021-03-02 | The Charles Stark Draper Laboratory, Inc. | Star tracker-aided airborne or spacecraft terrestrial landmark navigation system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS59133667A (en) * | 1983-01-20 | 1984-08-01 | Hitachi Ltd | Processing system of picture correction |
US4688092A (en) * | 1986-05-06 | 1987-08-18 | Ford Aerospace & Communications Corporation | Satellite camera image navigation |
US6735348B2 (en) * | 2001-05-01 | 2004-05-11 | Space Imaging, Llc | Apparatuses and methods for mapping image coordinates to ground coordinates |
US6810153B2 (en) * | 2002-03-20 | 2004-10-26 | Hitachi Software Global Technology, Ltd. | Method for orthocorrecting satellite-acquired image |
KR100519054B1 (en) * | 2002-12-18 | 2005-10-06 | 한국과학기술원 | Method of precision correction for geometrically distorted satellite images |
- 2005
- 2005-12-23 CA CA002613252A patent/CA2613252A1/en not_active Abandoned
- 2005-12-23 JP JP2008518119A patent/JP2009509125A/en not_active Withdrawn
- 2005-12-23 EP EP05855332A patent/EP1899889A2/en not_active Withdrawn
- 2005-12-23 AU AU2005333561A patent/AU2005333561A1/en not_active Abandoned
- 2005-12-23 WO PCT/US2005/046749 patent/WO2007001471A2/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2007001471A3 (en) | 2007-02-08 |
JP2009509125A (en) | 2009-03-05 |
EP1899889A2 (en) | 2008-03-19 |
AU2005333561A1 (en) | 2007-01-04 |
WO2007001471A2 (en) | 2007-01-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20080063270A1 (en) | Method and Apparatus for Determining a Location Associated With an Image | |
Pepe et al. | Planning airborne photogrammetry and remote-sensing missions with modern platforms and sensors | |
Grodecki et al. | IKONOS geometric accuracy | |
Li | Potential of high-resolution satellite imagery for national mapping products | |
Nagai et al. | UAV-borne 3-D mapping system by multisensor integration | |
CN106767714B (en) | Improve the equivalent mismatch model multistage Calibration Method of satellite image positioning accuracy | |
GREJNER‐BRZEZINSKA | Direct exterior orientation of airborne imagery with GPS/INS system: Performance analysis | |
Muller et al. | A program for direct georeferencing of airborne and spaceborne line scanner images | |
Haala et al. | Hybrid georeferencing of images and LiDAR data for UAV-based point cloud collection at millimetre accuracy | |
Seiz et al. | Cloud mapping from the ground: Use of photogrammetric methods | |
CN112461204B (en) | Method for satellite to dynamic flying target multi-view imaging combined calculation of navigation height | |
CA2613252A1 (en) | Method and apparatus for determining a location associated with an image | |
Mostafa et al. | A fully digital system for airborne mapping | |
Breuer et al. | Geometric correction of airborne whiskbroom scanner imagery using hybrid auxiliary data | |
KR20080033287A (en) | Method and apparatus for determining a location associated with an image | |
Ishii et al. | Autonomous UAV flight using the Total Station Navigation System in Non-GNSS Environments | |
Hu et al. | Scan planning optimization for 2-D beam scanning using a future geostationary microwave radiometer | |
Seiz et al. | Cloud mapping using ground-based imagers | |
JPH01229910A (en) | Navigating device | |
Wolfe et al. | The MODIS operational geolocation error analysis and reduction methodology | |
CN115524763B (en) | Multi-temporal high-resolution mountain satellite image terrain radiation correction method | |
Watanabe | Accuracy of geolocation and DEM for ASTER | |
Grayson | UAV photogrammetry ground control reductions using GNSS | |
Didan et al. | Award# DE-LM0000479 | |
Garg et al. | Geometric Correction and Mosaic Generation of Geo High Resolution Camera Images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FZDE | Dead |