AU2005333561A1 - Method and apparatus for determining a location associated with an image - Google Patents

Method and apparatus for determining a location associated with an image

Info

Publication number
AU2005333561A1
Authority
AU
Australia
Prior art keywords
image
information
imaging system
determining
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2005333561A
Inventor
Woodson Bercaw
Christopher J. Comp
James G. Mcclelland
Walter S. Scott
Gerald J. Smith
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Maxar Intelligence Inc
Original Assignee
DigitalGlobe Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/US2005/022961 external-priority patent/WO2006078310A2/en
Application filed by DigitalGlobe Inc filed Critical DigitalGlobe Inc
Publication of AU2005333561A1 publication Critical patent/AU2005333561A1/en
Abandoned legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Description

WO 2007/001471 PCT/US2005/046749

METHOD AND APPARATUS FOR DETERMINING A LOCATION ASSOCIATED WITH AN IMAGE

FIELD OF THE INVENTION

[0001] The present invention is directed to the determination of ground coordinates associated with imagery and more particularly to the translation of compensated coordinate information from one or more images to other images produced by an imaging system.

BACKGROUND

[0002] Remote sensing systems in present day satellite and airborne applications generally provide images that may be processed to include rows of pixels that make up an image frame. In many applications, it is desirable to know the ground location of one or more pixels within an image. For example, it may be desirable to have the ground location of the image pixels expressed in geographic terms such as longitude, latitude and elevation. A number of conventions are used to express the precise ground location of a point. Typically, a reference projection, such as Universal Transverse Mercator (UTM), is specified along with various horizontal and vertical datums, such as the North American Datum of 1927 (NAD27), the North American Datum of 1983 (NAD83), and the World Geodetic System of 1984 (WGS84). Furthermore, for images within the United States, it may be desirable to express the location of pixels or objects within an image in PLSS (Public Land Survey System) coordinates, such as township/range/section within a particular state or county.

[0003] In order to derive accurate ground location information for an image collected by a remote imaging system and then express it in one of the above listed standards, or other standards, the state of the imaging system at the time of image collection must be known to some degree of certainty. There are numerous variables comprising the state of the imaging system that determine the precise area imaged by the imaging system.
For example, in a satellite imaging application, the orbital position of the satellite, the attitude of the imaging system, and various other factors including atmospheric effects and thermal distortion of the satellite or its imaging system, all contribute to the precision to which the area imaged by the imaging system can be determined. Error in the knowledge of each of these factors results in inaccuracies in determining the ground location of areas imaged by the imaging system.

SUMMARY

[0004] The present invention has recognized that many, if not all, of the factors used to generate ground location information for raw image data collected by remote sensing platforms are subject to errors that lead to the derivation of inaccurate ground location information for a related image.

[0005] The present invention reduces the adverse effects of at least one source of error and provides for derivation of more accurate ground location information for imagery, thereby rendering the information more useful for various entities utilizing the images. Consequently, if an interested entity receives a ground image, the locations of various features within the ground image are known with increased accuracy, thereby facilitating the ability to use such images for a wider variety of applications.

[0006] In one embodiment, the present invention provides a method for determining ground location coordinates for pixels within a satellite image.
The method includes the steps of (a) obtaining at least one reference image; (b) locating at least a first pixel in the at least one reference image, the first pixel corresponding to a point having known earth location coordinates; (c) determining an expected pixel location of the point in the at least one reference image using at least one of attitude, position and distortion information available for the satellite; (d) calculating at least one compensation factor based on a comparison between the expected pixel location of the point and the known location of the first pixel; (e) obtaining at least one target image of an earth view, the at least one target image not overlapping the at least one reference image; and (f) determining earth location coordinates for at least one pixel within the at least one target image using the compensation factor in conjunction with the attitude, position and distortion information available for the satellite.

[0007] The compensation factor may be calculated by solving a set of equations relating platform position, attitude, distortion and ground location information for an image, whereby adjustments to one or more of position, attitude and distortion are obtained. The position, attitude, distortion and ground location information are known to varying levels of accuracy prior to adjustment. Solving the set of equations may also be augmented with a priori accuracy uncertainties in the form of covariance matrices, in order to obtain adjustments to position, attitude and distortion relative to their respective levels of accuracy prior to adjustment. The adjusted attitude, adjusted position or adjusted distortion information, or any combination thereof, is then used as the compensation factor when determining location coordinates for the target image. The target image may be collected by the imaging system before or after the collection of the reference image.
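The covariance-augmented solution of paragraph [0007] can be illustrated as a weighted least-squares adjustment. This is only a sketch under assumed conventions; the patent does not prescribe a particular solver, and the function name, the linearized Jacobian `A`, and the example matrices are all hypothetical.

```python
import numpy as np

def solve_compensation(A, residuals, obs_cov, prior_cov):
    """Estimate adjustments dx to position/attitude/distortion parameters.

    A         : (m, n) Jacobian of predicted pixel locations w.r.t. the parameters
    residuals : (m,)   known-minus-predicted pixel locations for the ground points
    obs_cov   : (m, m) covariance of the pixel measurements
    prior_cov : (n, n) a priori covariance of the parameters (their pre-adjustment
                accuracy), which regularizes the solve as described in [0007]
    """
    W = np.linalg.inv(obs_cov)    # measurement weights
    P = np.linalg.inv(prior_cov)  # a priori weights
    # Normal equations: (A^T W A + P) dx = A^T W r
    dx = np.linalg.solve(A.T @ W @ A + P, A.T @ W @ residuals)
    return dx
```

With a nearly uninformative prior the solve reduces to ordinary weighted least squares; tightening `prior_cov` pulls the adjustments toward zero, i.e. toward the pre-adjustment position, attitude and distortion values.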
[0008] Another embodiment of the invention provides a method for determining location information of an earth image from a remote imaging platform. The method includes the steps of: (a) obtaining at least one reference image; (b) obtaining at least one target image associated with an earth view, the at least one target image not overlapping the at least one reference image; and (c) using known location information associated with the at least one reference image to determine location information associated with the at least one target image.

[0009] Yet another embodiment of the invention provides an image of an earth area comprising a plurality of pixels and earth location coordinates of at least one of the pixels. The pixels and coordinates are obtained by the steps of: (a) obtaining at least one reference image, the at least one reference image comprising a plurality of pixels; (b) locating at least a first pixel in the at least one reference image associated with a point, the point having known earth location coordinates; (c) calculating a compensation factor based on a comparison between an expected pixel location of the point within the at least one reference image and the known location of the first pixel within the at least one reference image; (d) obtaining at least one target image from an earth view, the at least one target image comprising a plurality of pixels and not overlapping the at least one reference image; and (e) determining earth location coordinates for at least one pixel of the at least one target image based on the compensation factor.

[0010] A further embodiment provides a method for transporting an image towards an interested entity over a communications network.
The method comprises conveying, over a portion of the communication network, a digital image that includes a plurality of pixels, at least one of the pixels having associated ground location information derived based on a compensation factor that has been determined based on at least one ground point from at least one reference image, wherein the at least one reference image is different from the digital image and the at least one reference image does not overlap the digital image.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] FIG. 1 is a diagrammatic illustration of a satellite in an earth orbit obtaining an image of the earth;

[0012] FIG. 2 is a block diagram representation of a satellite of an embodiment of the present invention;

[0013] FIG. 3 is a flow chart illustration of the operational steps for determining location coordinates associated with a satellite image for an embodiment of the present invention;

[0014] FIG. 4 is an illustration of a reference image covering points whose precise locations are known; and

[0015] FIG. 5 is an illustration of a path containing several imaged areas for an embodiment of the present invention.
DETAILED DESCRIPTION

[0016] Generally, the present invention is directed to the determination of ground location information associated with at least one pixel of an image acquired by an imaging system aboard a satellite or other remote sensing platform. The process involved in producing the ground location information includes (a) obtaining one or more images (reference images) of areas covering points whose locations are precisely known, (b) predicting the locations of these points using time varying position, attitude, and distortion information available for the imaging system, (c) comparing the predicted locations with the known locations using a data fitting algorithm to derive one or more compensation factors, (d) interpolating or extrapolating the compensation factor(s) to other instants in time, and then (e) applying the compensation factor(s) to one or more other images (target images) of areas that do not cover points whose locations are precisely known. The process can be applied to a target image that does not overlap the reference image, and may also be applied to a target image that does overlap the reference image.

[0017] Having generally described the process for producing the image and ground location information, an embodiment of the process is described in greater detail. Referring to FIG. 1, an illustration of a satellite 100 orbiting a planet 104 is now described. At the outset, it is noted that, when referring to the earth herein, reference is made to any celestial body of which it may be desirable to acquire images or other remote sensing information having a related location associated with the body. Furthermore, when referring to a satellite herein, reference is made to any spacecraft, satellite, aircraft or other remote sensing platform that is capable of acquiring images.
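Step (d) of the process described above, interpolating or extrapolating the compensation factor(s) to other instants in time, could look like the following sketch. A simple linear fit over scalar factors is shown; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def compensation_at(t_target, ref_times, ref_factors):
    """Interpolate (or extrapolate) a scalar compensation factor, derived at
    reference-image collection times, to a target-image collection time."""
    ref_times = np.asarray(ref_times, dtype=float)
    ref_factors = np.asarray(ref_factors, dtype=float)
    if ref_times.size == 1:
        return float(ref_factors[0])  # single reference: hold the factor constant
    # Fit a line through the reference factors and evaluate it at t_target;
    # evaluating outside the reference interval extrapolates naturally.
    slope, intercept = np.polyfit(ref_times, ref_factors, 1)
    return float(slope * t_target + intercept)
```

Because the fit can be evaluated outside the reference interval as well, this covers the point made later in the document that target images may be collected before or after the reference image.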
It is also noted that none of the drawing figures contained herein are drawn to scale, and that such figures are for the purposes of illustration only.

[0018] As illustrated in FIG. 1, the satellite 100 orbits the earth 104 following orbital path 108. The position of the satellite 100 along the orbital path 108 may be defined by several variables, including the in-track location, cross-track location, and radial distance location. In-track location relates to the position of the satellite along the orbital path 108 as it orbits the earth 104. Cross-track location relates to the lateral position of satellite 100 relative to the direction of motion in the orbit 108 (relative to FIG. 1, this would be in and out of the page). Radial distance relates to the radial distance of the satellite 100 from the center of the earth 104. These factors related to the physical position of the satellite are collectively referred to as the ephemeris of the satellite. When referring to "position" of a satellite herein, reference is made to these factors. Also, relative to the orbital path, the satellite 100 may have pitch, yaw, and roll orientations that are collectively referred to as the attitude of the satellite 100. An imaging system aboard the satellite 100 is capable of acquiring an image 112 that includes a portion of the surface of the earth 104. The image 112 is comprised of a plurality of pixels.

[0019] When the satellite 100 is acquiring images of the surface of the earth 104, the associated ground location of any particular image pixel(s) may be calculated based on information related to the state of the imaging system, including the position of the system, attitude of the system, and distortion information, as will be described in more detail below. The ground location may be calculated in terms of latitude, longitude and elevation, or in terms of any other applicable coordinate system.
It is often desirable to have knowledge of the location of one or more features associated with an image from such a satellite, and, furthermore, to have a relatively accurate knowledge of the location of each image pixel. Images collected from the satellite may be used in commercial and non-commercial applications. The number of applications for which an image 112 may be useful increases with higher resolution of the imaging system, and is further increased when the ground location of one or more pixels contained in the image 112 is known to higher accuracy.

[0020] Referring now to FIG. 2, a block diagram representation of an imaging satellite 100 of an embodiment of the present invention is described. The imaging satellite 100 includes a number of instruments, including a position measurement system 116, an attitude measurement system 120, a thermal measurement system 124, transmit/receive circuitry 128, a satellite movement system 132, a power system 136 and an imaging system 140. The position measurement system 116 of this embodiment includes a Global Positioning System (GPS) receiver that receives position information from a plurality of GPS satellites, and is well understood in the art. The position measurement system 116 obtains information from the GPS satellites at periodic intervals. If the position of the satellite 100 is to be determined for a point in time between the periodic intervals, the GPS information from the position measurement system is combined with other information related to the orbit of the satellite to generate the satellite position for that particular point in time. As is typical in such a system, the position of the satellite 100 obtained from the position measurement system 116 contains some amount of error, resulting from the limitations of the position measurement system 116 and associated GPS satellites.
In one embodiment, the position of the satellite 100, using data derived and refined from the position measurement system 116 data, is known to within several meters. While this error is small, it is often a relatively significant contributor to uncertainty in ground location associated with pixels in the ground image.

[0021] The attitude measurement system 120 is used in determining attitude information for the imaging system 140. In one embodiment, the attitude measurement system 120 includes one or more gyroscopes that measure angular rate and one or more star trackers that obtain images of various celestial bodies. The location of the celestial bodies within the images obtained by the star trackers is used to determine the attitude of the imaging system 140. The star trackers, in an embodiment, are placed to provide roll, pitch and yaw orientation information for a reference coordinate system fixed to the imaging system 140. Similarly as described above with respect to the position measurement system 116, the star trackers of the attitude measurement system operate to obtain images at periodic intervals. The attitude of the imaging system 140 can, and often does, change between these periodic intervals. For example, in one embodiment, the star trackers collect images at a rate of about 10 Hz, although the frequency may be increased or decreased. In this embodiment, the imaging system 140 operates to obtain images at line rates between 7 kHz and 24 kHz, although these frequencies may also be increased or decreased. In any event, the imaging system 140 generally operates at a higher rate than the star trackers, resulting in numerous ground image pixels being acquired between successive attitude measurements from the star trackers.
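The gyroscope-aided prediction of attitude between star-tracker fixes can be sketched as a simple rate integration. This small-angle Euler integration is an illustrative simplification (a real system would propagate quaternions and blend measurements in a filter), and all names here are hypothetical.

```python
import numpy as np

def propagate_attitude(att0, gyro_rates, dt):
    """Propagate a (roll, pitch, yaw) attitude estimate forward from the most
    recent star-tracker fix using gyroscope angular-rate samples.

    att0       : attitude at the last star-tracker fix, radians
    gyro_rates : sequence of (roll, pitch, yaw) rates in rad/s, one per dt step
    dt         : gyro sample interval in seconds
    """
    att = np.asarray(att0, dtype=float).copy()
    for w in gyro_rates:
        att += np.asarray(w, dtype=float) * dt  # small-angle Euler step
    return att
```

At a ~10 Hz star-tracker rate against 7 kHz to 24 kHz line rates, hundreds of image lines fall between successive fixes, so the attitude assigned to each line comes from a propagation of this kind.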
The attitude of the imaging system 140 for time periods between successive images of the star trackers is determined using star tracker information along with additional information, such as angular rate information from the gyroscopes, to predict the attitude of the imaging system 140. The gyroscopes are used to detect the angular rates of the imaging system 140, with this information used to adjust the attitude information for the imaging system 140. The attitude measurement system 120 also has limitations on the accuracy of information provided, resulting in errors in the predicted attitude of the imaging system 140. While this error is generally small, it is often a relatively significant contributor to uncertainty in ground location associated with pixels in the ground image.

[0022] The thermal measurement system 124 is used in determining thermal characteristics of the imaging system 140. Thermal characteristics are used, in this embodiment, to compensate for thermal distortion in the imaging system 140. As is well understood, a source of error when determining ground location associated with an image collected by such a satellite-based imaging system 140 is distortion in the imaging system. Thermal variations monitored by the thermal measurement system 124 are used in this embodiment to compensate for distortions in the imaging system 140. Such thermal variations occur, for example, when the satellite 100, or portions of the satellite 100, move in or out of sunlight due to shadows cast by the earth or other portions of the satellite 100. The difference in energy received at the components of the imaging system 140 results in the components being heated, thereby resulting in distortion of the imaging system 140 and/or changes in the alignments between the imaging system 140 and the position and attitude measurement systems 116 and 120. Such energy changes may occur when, for example, a solar panel of the satellite 100 changes orientation relative to the satellite body and results in the imaging system components being subject to additional radiation from the sun. In addition to reflections from component parts of the satellite 100, and to the satellite 100 moving into and out of the earth's shadow, the reflected energy from the earth itself may cause thermal variations in the imaging system 140. For example, if the portion of the earth that is reflecting light to the imaging system 140 is particularly cloudy, more energy is received at the satellite 100 relative to the energy received over a non-cloudy area, thus resulting in additional thermal distortions. The thermal measurement system 124 monitors changing thermal characteristics, and this information is used to compensate for such thermal distortions. The thermal measurement system 124 has limitations on the accuracy of information provided, resulting in errors in the thermal compensation of the imaging system 140 of the satellite 100. While this error is generally relatively small, when used in determining the ground location of pixels within an image that includes a portion of the surface of the earth, this error also contributes to uncertainty in ground location.

[0023] In addition to thermal distortions from the imaging system 140, atmospheric distortions that increase the error of the imaging system 140 may also be present. Such atmospheric distortions may be caused by a variety of sources within the atmosphere associated with the area being imaged, including heating, water vapor, pollutants, and a relatively high or low concentration of aerosols, to name a few. The image distortions resulting from these atmospheric distortions are a further component of error when determining ground location information associated with an area being imaged by the imaging system 140.
Furthermore, in addition to the errors in position, attitude and distortion information, the velocity of the satellite 100 results in relativistic distortions in information received. In one embodiment, the satellite 100 travels at a velocity of about seven and one half kilometers per second. At this velocity, relativistic considerations, while relatively small, are nonetheless present, and in one embodiment, images collected at the satellite 100 are compensated to reflect such considerations. Although this compensation is performed to a relatively high degree of accuracy, some error still is present because of the relativistic changes. While this error is generally small, it is often a relatively significant contributor to uncertainty in ground location associated with pixels in the ground image.

[0024] The combined error of the position measurement system 116, the attitude measurement system 120, thermal measurement system 124, atmospheric distortion and relativistic changes results in ground location calculations having a degree of uncertainty that, in one embodiment, is about 20 meters. While this uncertainty is relatively small for typical satellite imaging systems, further reduction of this uncertainty would increase the utility of the ground images for a large number of users, and also enable the images to be used in a larger number of applications.

[0025] The transmit/receive circuitry 128 in this embodiment includes well known components for communications between the satellite 100 and ground stations and/or other satellites. The satellite 100 generally receives command information related to controlling the positioning of the satellite 100 and the pointing of the imaging system 140, various transmit/receive antennas and/or solar panels.
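An order-of-magnitude check on the velocity-related distortion: at the stated ~7.5 km/s, aberration tilts the apparent line of sight by roughly v/c, which at nadir maps to a ground displacement of about (v/c) times the altitude. The altitude below is an assumed value for illustration; it is not taken from the patent.

```python
# Illustrative numbers only; the orbit altitude is an assumption.
V_SAT = 7_500.0        # m/s, ~7.5 km/s per the description
C = 299_792_458.0      # m/s, speed of light
ALTITUDE = 700_000.0   # m, assumed orbit altitude

aberration_rad = V_SAT / C                   # ~25 microradian line-of-sight tilt
ground_shift_m = aberration_rad * ALTITUDE   # ~17.5 m displacement at nadir
```

A shift of this size, if left uncompensated, would be comparable to the ~20 meter total uncertainty quoted in [0024], consistent with the text's statement that the effect, though small, is worth compensating.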
The satellite 100 generally transmits image data along with satellite information from the position measurement system 116, attitude measurement system 120, thermal measurement system 124 and other information used for the monitoring and control of the satellite system 100.

[0026] The movement system 132 contains a number of momentum devices and thrust devices. The momentum devices are utilized in control of the satellite 100 by providing inertial attitude control, as is well understood in the art. As is also understood in the art, satellite positions are controlled by thrust devices mounted on the satellite that operate to position the satellite 100 in various orbital positions. The movement system may be used to change the satellite position and to compensate for various perturbations that result from a number of environmental factors such as solar array or antenna movement, atmospheric drag, solar radiation pressure, gravity gradient effects or other external or internal forces.

[0027] The satellite system 100 also contains a power system 136. The power system may be any power system used in generating power for a satellite. In one embodiment, the power system includes solar panels (not shown) having a plurality of solar cells that operate to generate electricity from light received at the solar panels. The solar panels are connected to the remainder of the power system, which includes a battery, a power regulator, a power supply, and circuitry that operates to change the relative orientation of the solar panels with respect to the satellite system 100 in order to enhance power output from the solar panels by maintaining proper alignment with the sun.

[0028] The imaging system 140, as mentioned above, is used to collect images that include all or a portion of the earth's land or water surface.
These images may contain one or more natural or manmade features including, but not limited to, buildings, roads, vehicles, geological landmarks, agricultural elements, watercraft or platforms. The imaging system 140, in one embodiment, utilizes a pushbroom type imager operating to collect lines of pixels at an adjustable frequency between 7 kHz and 24 kHz. The imaging system 140 may include a plurality of imagers that operate to collect images in different wavelength bands. In one embodiment, the imaging system 140 includes imagers for red, green, blue and near infrared bands. The images collected from these bands may be combined in order to produce a color image of visible light reflected from the surface being imaged. Similarly, the images from any one band, or combination of bands, may be utilized to obtain various types of information related to the imaged surface, such as agricultural information, air quality information and the like. While four bands of imagery are described above, other embodiments may collect data from sensors with more or fewer bands. In addition, embodiments may collect data from other sensor types, from sensors using active or passive collection technologies, from combinations of sensor types, from sensors with different collection modes or from any remote sensing device from whose data location information can be derived or upon whose data location information can be applied. Examples of sensor types include, but are not limited to, infrared sensors, ultraviolet sensors, radar sensors, lidar sensors and thermal band sensors. Furthermore, embodiments may use these sensor types with active or passive collection technologies. For example, one embodiment may collect radar imagery using active, bi-static radar technology while another embodiment may collect infrared imagery using passive, CCD imaging technology.
In one embodiment, the imaging system 140 comprises a combination of sensor types whose orientations relative to each other are known or measured to a predetermined precision. Other embodiments employ sensors with different collection modes, including but not limited to, spot scanners, whiskbroom imagers, body-mounted frame cameras and frame cameras using one-axis or two-axis steering mirrors. The sensor types, collection technologies, combinations of sensor types and collection modes employed in a given embodiment will depend upon the applications that use the data. In one embodiment, the imaging system 140 includes imagers comprising an array of CCD pixels, wherein each pixel is capable of acquiring up to 2048 levels of brightness and then representing this brightness with 11 bits of data for each pixel in the image. In another embodiment, one band of the imaging system 140 is used to image features, such as reefs or other natural or manmade structures, that are on, above or beneath the ocean surface.

[0029] The control that is registered to the acquired imagery may be at a geopositional accuracy of less than a pixel, and the matching of that control to the acquired imagery may also be at a sub-pixel accuracy. For example, the pixel size of the imagery may be 0.6 meters by 0.6 meters as projected to the ground, and the control on the ground is known to a horizontal accuracy of 0.3 meters CE90 (90th-percentile circular error) and a vertical accuracy of 0.3 meters LE90 (90th-percentile linear error). This accuracy knowledge may be derived from the accuracy of the GPS (Global Positioning System) survey of the location on the ground. That ground location of the control may then be defined on a separate "control chip" image of the immediate area surrounding and including the control, where the control chip is itself a small image with pixels that may be of a size of 0.6 meters by 0.6 meters.
The defined location of the control on the control chip image may be defined at a sub-pixel level, and so the accuracy of the placement of the control feature on the control chip will be at a sub-pixel level. The control chip is then registered (matched) to the acquired imagery using common feature information in the overlap of the chip to the acquired imagery. The ability of a matcher to do a correlation between the control chip and the acquired imagery over the full common area allows matching accuracy to be finer than the acquired image pixel size. With the matching of the acquired image to the chip at a sub-pixel accuracy level, the accuracy of the placement of the control feature onto the control chip at a sub-pixel accuracy level, and the accuracy of the ground control at a sub-pixel accuracy level, the resulting accuracy of the registration of the control to the image may be at a sub-pixel level. The compensation factor derived from this registration is therefore at a sub-pixel accuracy, and the level of error of subsequently acquired target images will be at a sub-pixel level for some determined length of time before and after the capture of the reference image.

[0030] Referring now to FIG. 3, the operational steps used in the determination of ground location information for an area imaged by a satellite system are described for an embodiment of the invention. In one embodiment, the satellite collects multiple images along its orbital path. Simultaneously, information is continuously collected at predetermined intervals from the position measurement system, attitude measurement system and thermal measurement system. These images, along with position, attitude and thermal information, are sent via one or more ground stations to an image production system where the images and associated position, attitude, and distortion information are processed along with any other known information related to the satellite system.
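The sub-pixel registration of a control chip to acquired imagery can be sketched with normalized cross-correlation followed by a parabolic fit around the correlation peak. This brute-force version is illustrative only; production matchers are considerably more elaborate, and the function and variable names are assumptions.

```python
import numpy as np

def register_chip(image, chip):
    """Locate a control chip inside an acquired image to sub-pixel accuracy.

    Returns the (row, col) of the chip's upper-left corner in image coordinates,
    using brute-force normalized cross-correlation over every placement and a
    parabolic fit around the correlation peak for the sub-pixel offset."""
    ch, cw = chip.shape
    c = chip - chip.mean()
    rows = image.shape[0] - ch + 1
    cols = image.shape[1] - cw + 1
    scores = np.full((rows, cols), -np.inf)
    for r in range(rows):
        for col in range(cols):
            win = image[r:r + ch, col:col + cw]
            w = win - win.mean()
            denom = np.sqrt((w * w).sum() * (c * c).sum())
            if denom > 0:
                scores[r, col] = (w * c).sum() / denom
    r0, c0 = np.unravel_index(np.argmax(scores), scores.shape)

    def refine(a, b, cc):
        # Vertex of the parabola through three samples around the peak.
        d = a - 2.0 * b + cc
        return 0.0 if d == 0 else 0.5 * (a - cc) / d

    dr = refine(*scores[r0 - 1:r0 + 2, c0]) if 0 < r0 < rows - 1 else 0.0
    dc = refine(*scores[r0, c0 - 1:c0 + 2]) if 0 < c0 < cols - 1 else 0.0
    return r0 + dr, c0 + dc
```

Because the correlation uses the full common area, the peak can be localized more finely than one pixel, which is the mechanism behind the sub-pixel matching accuracy described above.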
The processing may occur at any time, and may be done in near real-time. In this embodiment, the images include both reference images and target images. As mentioned previously, reference images are images that overlap one or more ground points having location coordinates that are known to a high degree of accuracy, and target images are images that do not overlap ground points having location coordinates that are known to a high degree of accuracy. In the embodiment of FIG. 3, the position of the satellite is determined for a first reference image, as indicated at block 200. The position, as described above, includes information related to the orbital position of the satellite at the time the first reference image was collected, and includes in-track information, cross-track information, and radial distance information. The position may be determined using information from the position measurement system and other ground information used to improve the overall position knowledge. At block 204, the attitude information for the imaging system is determined. The attitude of the imaging system, as previously discussed, includes the pitch, roll and yaw orientation of the imaging system relative to the orbital path of a reference coordinate system of the imaging system. When determining the attitude information, information is collected from various attitude measurement system components. This information is analyzed to determine the attitude of the imaging system. At block 208, the distortion information for the imaging system is determined. The distortion information includes known variances in the optic components of the imaging system, along with thermal distortion variations of the optic components as monitored by the thermal measurement system. Also included in the distortion information is distortion from the earth's atmosphere.
[0031] Following the determination of the position, attitude and distortion information, the predicted pixel location of at least one predetermined ground point is calculated, according to block 212. In one embodiment, this predicted pixel location is determined using the position of the imaging system, attitude of the imaging system, and distortion of the imaging system to calculate a ground location of at least one pixel from the image. Specifically, the position provides the location of the imaging system above the earth's surface, the attitude provides the direction from which the imaging system is collecting images, and
the distortion provides the amount by which the light rays are skewed from what they would be if there were no thermal, atmospheric or relativistic effects. The position of the imaging system, along with the direction in which the imaging system is pointed, and the effects of distortion on the imaging system result in a theoretical location on the earth's surface that produced the light received by the imaging system. This theoretical location is then further adjusted based on surface features of the location on the earth's surface, such as mountainous terrain. Once this additional calculation is made, the predicted pixel location is produced. [0032] Following the determination of the predicted pixel location of each predetermined ground point in the reference image, a compensation factor is calculated for one or more of the position, attitude and distortion information based on a comparison between the predicted pixel location of each predetermined ground point in the reference image and the actual pixel location of each predetermined ground point, as indicated at block 216. The calculation of the compensation factor(s) will be described in more detail below. [0033] Following the calculation of the compensation factor(s), the ground location of at least one pixel in other images collected by the imaging system may be computed using the compensated attitude, position and/or distortion information. In the embodiment of FIG. 3, the compensation factor(s) are utilized if the location accuracies of the pixels in the target images are better than accuracies achievable using other conventional methods. As discussed above, the satellite has various perturbations and temperature fluctuations throughout every orbit. 
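The mapping from imaging-system position, look direction and distortion to a ground location can be illustrated with a deliberately simplified flat-earth model. This is a sketch only; the function name and single-angle distortion term are illustrative assumptions, and a real system would intersect the line of sight with an ellipsoid and a terrain model as described above:

```python
import math

def predicted_ground_offset(altitude_m, off_nadir_rad, distortion_rad=0.0):
    """Toy flat-earth model: along-track ground distance between the
    sub-satellite point and the imaged point.  The distortion term simply
    perturbs the look angle, standing in for the thermal, atmospheric and
    relativistic skew of the light rays described in the text."""
    look_angle = off_nadir_rad + distortion_rad
    return altitude_m * math.tan(look_angle)
```

Even a look-angle error of a few microradians shifts the predicted ground location by a metre or more from typical imaging altitudes, which is why the distortion information must be compensated.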
Thus, when compensation factor(s) are calculated based on the difference between a predicted pixel location of a predetermined ground point in a reference image and an actual pixel location of the predetermined ground point in the reference image, further changes in the position, attitude or distortion of the imaging system will reduce the accuracy of the compensation factor(s), until, at some point, the ground locations of pixels predicted using standard sensor-derived measurements are more accurate than the ground locations determined using the compensation factor(s). In such a case, the compensation factor(s) may not be used, and the ground locations of pixels predicted using standard sensor-derived measurements are utilized for ground location information. The ground location of one or more pixels in the second image is determined utilizing the compensation factor(s), as noted at block 220. In this manner, the ground location of images acquired before and/or after acquiring a reference image may be determined to a relatively high degree of accuracy. Furthermore, if multiple reference images are taken during an orbit while collecting images, it may be possible to determine the ground location of all of the images taken for that orbit utilizing adjustment factors generated from the respective reference images. [0034] It is noted that the order in which the operational steps are described with respect to FIG. 3 may be modified. For example, the second image may be acquired prior to the reference image being acquired. The compensation factor may be applied to the second image, even though the second image was acquired prior to the acquisition of the reference image. 
In another embodiment, multiple reference images are taken, and a fitting algorithm is applied to the predicted locations for each predetermined ground point in each image to derive a set of compensation factors for various images acquired between the acquisition of reference images. Such a fitting algorithm may be a least squares fit. [0035] Referring now to FIG. 4, the determination of the compensation factor(s) for one embodiment of the invention is now described. As discussed previously, the imaging system aboard an imaging satellite acquires a reference image 300 overlapping one or more predetermined ground points. The location on the earth of each predetermined ground point may be expressed in terms of latitude, longitude and elevation, relative to any appropriate datum, such as WGS84. Such a predetermined ground point may be any identifiable natural or artificial feature included in an image of the earth having a known location. Examples of predetermined ground points include, but are not limited to, sidewalk corners, building corners, parking lot corners, coastal features and identifiable features on islands. One consideration in the selection of a predetermined ground point is that it be relatively easy to identify in an image of the area containing the predetermined ground point. A point that has a high degree of contrast compared to the surrounding area within an image, and that has a known location, is often desirable, although a predetermined ground point may be any point that is identifiable either by a computing system or a human user. In one embodiment, image registration is used to determine the amount of error present in the computed locations of the predetermined ground point. Such image registration may be general feature based, line feature based and/or area correlation based. 
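One way to realize the fitting algorithm mentioned above is a least-squares polynomial fit of the compensation factors observed at each reference image's acquisition time, evaluated at a target image's acquisition time. A minimal sketch follows; the function name and the scalar-factor simplification are illustrative assumptions, since a real system would fit each attitude, position and distortion component separately:

```python
import numpy as np

def fit_compensation(ref_times, ref_factors, target_time, degree=1):
    """Least-squares polynomial fit of scalar compensation factors
    observed at several reference-image acquisition times, evaluated at
    the acquisition time of a target image lying between them."""
    coeffs = np.polyfit(ref_times, ref_factors, deg=degree)
    return float(np.polyval(coeffs, target_time))
```

With two reference images a degree-1 fit reduces to linear interpolation of the two compensation factors.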
Area correlation based image registration evaluates an area of pixels around a point, and registers that area to an area of similar size in a control image. The control image has been acquired by a remote imaging platform and has actual area locations known to a high degree of accuracy. The amount of error present between the predicted location for the area and the actual location of the area is used in determining the compensation factors. Feature and line registration identify and match more specific items within an image, such as the edge of a building or a sidewalk. Groups of pixels are identified that outline or delineate a feature, and that grouping of pixels is compared to the same grouping in a control image. While the above discussion covers registrations performed in pixel space, the same registrations can be accomplished in any other domain whose transformation to and from the pixel domain can be performed with a predetermined precision. For example, the line feature registration can be done in vector space by representing features in the reference image as vectors and registering them to known vectors for features in the control image. Similarly, area correlation registration can be done by representing the area in the reference image as a polygon and then registering the polygon to a known polygon in the control image. In one embodiment, predetermined ground points are selected in locations where the likelihood of cloud cover is reduced, in order to have increased likelihood that the predetermined ground point will be visible when the reference image 300 is collected.
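Area-correlation registration as described above can be sketched with normalized cross-correlation evaluated at integer offsets. This is an illustrative sketch; the function names are assumptions, and production matchers add sub-pixel peak interpolation and robustness to radiometric differences between the images:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def register_area(control_patch, search_image):
    """Slide a control-image patch over a search (reference) image and
    return the integer (row, col) offset with the highest correlation,
    together with that correlation score."""
    ph, pw = control_patch.shape
    sh, sw = search_image.shape
    best_score, best_offset = -2.0, (0, 0)
    for r in range(sh - ph + 1):
        for c in range(sw - pw + 1):
            score = ncc(control_patch, search_image[r:r + ph, c:c + pw])
            if score > best_score:
                best_score, best_offset = score, (r, c)
    return best_offset, best_score
```

The difference between the returned offset and the predicted offset is the registration error used in computing the compensation factors.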
[0036] Referring again to FIG. 4, the predicted pixel locations of four predetermined ground points, illustrated as A, B, C, and D, are determined for the reference image 300. The locations of A, B, C, and D as illustrated in FIG. 4 are the predicted pixel locations of A, B, C, and D based on attitude, position, and distortion information for the imaging satellite, and surface location information such as elevation for the earth location. The actual pixel locations of the predetermined ground points, identified as A', B', C', and D', are known a priori to a high degree of accuracy. The difference between the predicted pixel locations and the actual pixel locations is then utilized to determine the compensation factor. In one embodiment, the compensation factor is a modified imaging system attitude. In another embodiment, the compensation factor is a modified imaging system attitude and a modified imaging system position. In yet another embodiment, the compensation factor is a modified imaging system attitude, a modified imaging system position, and modified distortion information. In other embodiments where more than one of the imaging system attitude, position and distortion are compensated, one factor receives more compensation relative to another factor. [0037] The compensation factor is determined, in one embodiment, by solving a set of equations having variables related to the position of the imaging system, the attitude of the imaging system, the distortion of the imaging system and the ground location of images acquired by the imaging system. In one embodiment, where imaging system attitude is compensated, the position of the imaging system determined at block 200 in FIG. 3 is assumed to be correct, the distortion of the imaging system determined at block 208 in FIG. 
3 is assumed to be correct, and the ground location of a pixel corresponding to a predetermined ground point from the reference image is set to be the known location of the predetermined ground point identified in the reference image. The equations are then solved to determine the compensated attitude of the imaging system. This compensated attitude is then used in determining the ground location of pixels within other images. [0038] In one embodiment, triangulation is used to compute the compensated imaging system attitude. Triangulation, in this embodiment, is performed using a state-space estimation approach. The state-space approach to the triangulation may utilize least squares, least squares utilizing a priori information, or stochastic or Bayesian estimation such as a Kalman filter. In an embodiment utilizing a basic least squares approach, it is assumed that the position is correct, the distortion is correct and that the ground location associated with a pixel in the reference image corresponding to a predetermined ground point is correct. The attitude is then solved for and utilized as the compensation factor. [0039] While the position parameters described above are assumed to be correct, or to have a small covariance, when determining compensated imaging system attitude information, other alternatives may also be used. In the above-described embodiment, imaging system attitude is selected because, in this embodiment, the imaging system attitude is the primary source of uncertainty. By reducing the primary source of uncertainty, the accuracy of the ground locations associated with other images that do not overlap ground control points is increased. In other embodiments, where imaging system attitude is not the primary source of uncertainty, other parameters may be compensated as appropriate. 
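The basic least-squares step can be sketched as a linear solve: stack the pixel residuals at the predetermined ground points into one vector and solve for the small attitude correction through the sensor-model Jacobian. This is a sketch under stated assumptions; the function name and the linearized model are illustrative, not the patent's specific equations:

```python
import numpy as np

def solve_attitude_correction(jacobian, residuals):
    """Basic least squares: `jacobian` maps a small (roll, pitch, yaw)
    perturbation to pixel displacements at the ground points, and
    `residuals` are the predicted-minus-actual pixel locations stacked
    into one vector.  Position and distortion are assumed correct, so
    only the attitude correction is solved for."""
    correction, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
    return correction
```

With two or more well-separated ground points the stacked system is overdetermined, and the least-squares solution averages down the per-point measurement noise.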
[0040] In another embodiment, a least squares approach utilizing a priori information is utilized to determine the compensation factor. In this embodiment, the imaging system position, attitude, distortion and pixel location of the predetermined ground point, along with a priori covariance information related to each of these factors, are utilized in calculating the compensation factor. In this embodiment, all of the factors may be compensated, with the amount of compensation to each parameter controlled by their respective covariances. Covariance is a measure of uncertainty, and may be represented by a covariance matrix. For example, a 3x3 covariance matrix may be used for the position of the imaging system, with elements in the matrix corresponding to the in-track, cross-track and radial distance position of the imaging system. The 3x3 matrix includes diagonal elements that are the variance of the position error for each axis of position information, and off-diagonal elements that are correlation factors between position errors for each element. Other covariance matrices may be generated for imaging system attitude information, distortion information, and the predetermined ground point location. [0041] Using least squares or a Kalman filter with a priori covariances, compensations are generated for each parameter. In addition, covariances associated with each parameter are also produced. Hence, the a posteriori covariance of the improved attitude, for example, is known using the covariance associated with the attitude corrections. [0042] As discussed previously, in one embodiment multiple reference images are collected from a particular orbit of the imaging system. In this embodiment, as illustrated in FIG. 5, various images are collected within a satellite ground access swath 400. Included in the collected images are a first reference image 404, and a second reference image 408. 
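Least squares with a priori covariances can be sketched in a generic linear-Gaussian form (illustrative names, not the patent's specific equations): parameters with a small prior covariance are barely adjusted, while loosely known parameters, such as attitude in the embodiment above, absorb most of the correction.

```python
import numpy as np

def apriori_least_squares(A, y, R, x0, P0):
    """Combine measurements y = A @ x + noise (covariance R) with a
    prior estimate x0 (covariance P0).  Returns the a posteriori
    estimate and its covariance, so the improved uncertainty is known
    alongside the correction itself."""
    Ri = np.linalg.inv(R)
    P0i = np.linalg.inv(P0)
    P = np.linalg.inv(A.T @ Ri @ A + P0i)   # a posteriori covariance
    x = P @ (A.T @ Ri @ y + P0i @ x0)       # a posteriori estimate
    return x, P
```

In the limit of a very loose prior this reduces to ordinary weighted least squares; in the limit of a very tight prior the parameter is effectively held at its a priori value.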
The reference images 404, 408 are collected from areas within the satellite ground access swath 400 that overlap predetermined ground points. The areas that contain actual predetermined ground points are indicated as cross-hatched control images 406, 410 in FIG. 5. In the example illustrated in FIG. 5, a third image 412 and a fourth image 416 are also acquired, neither of which overlaps any predetermined ground points. Images 412, 416 are target images. In this embodiment, the actual locations of predetermined ground points contained in the first reference image 404 are compared with predicted locations of predetermined ground points contained in the first reference image 404. A first compensation factor is determined based on the difference between the predicted predetermined ground point locations and the actual predetermined ground point locations. [0043] Similarly, the actual locations of the predetermined ground points contained in the second reference image 408 are compared with predicted locations of predetermined ground points contained in the second reference image 408. A second compensation factor is determined based on the difference between the predicted predetermined ground point locations and the actual predetermined ground point locations. A combination of the first and second compensation factors, as described above, may then be utilized to determine the ground locations for one or more pixels in each of the target images 412, 416. [0044] The imaging system of the satellite may be controlled to acquire the various images in any order. For example, the satellite may acquire the third and fourth images 412, 416, and then acquire the first and second reference images 404, 408. In one embodiment, the images are acquired in the following order: the first reference image 404 is acquired, followed by the third image 412, followed by the fourth image 416, and finally the second reference image 408 is acquired. 
In this example, the compensation factor for the third and fourth images 412, 416 is calculated according to a least squares fit of the first and second compensation factors. If the images were acquired in a different order, it would be straightforward, and well within the capabilities of one of ordinary skill in the art, to calculate compensation factors for the third and fourth images 412, 416 utilizing similar techniques. [0045] As described above, in one embodiment two or more reference images are collected and utilized to calculate the compensation factor. In this embodiment, triangulation (via the methods described above) is performed on each image independently to determine compensation factors for each. These compensation factors are then combined for use in determining ground locations associated with collected images whose ground locations are determined without using predetermined ground points. The compensation factors may be combined using methods including, but not limited to, interpolation, polynomial fit, simple averaging and covariance weighted averaging. Alternatively, a single triangulation (using the same methods described above) is performed on all the images together, resulting in a global compensation factor that applies to the entire span of orbit within the appropriate timeframe. This global compensation factor could be applied to any image without using predetermined ground points. [0046] As mentioned previously, the satellite transmits collected images to at least one ground station located on the earth. The ground station is situated such that the satellite may communicate with the ground station for a portion of an orbit. The images received at a ground station may be analyzed at the ground station to determine location information for the pixels in the images, with this information sent to a user or to a data center (hereinafter referred to as a receiver). 
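The covariance-weighted averaging named in paragraph [0045] can be sketched for a scalar factor (an illustrative simplification; real compensation factors are vectors with full covariance matrices): each reference image's factor is weighted by the inverse of its variance, so the better-determined factor dominates the combined estimate.

```python
def covariance_weighted_average(factors, variances):
    """Inverse-variance weighted combination of compensation factors
    estimated independently from several reference images."""
    weights = [1.0 / v for v in variances]
    return sum(w * f for w, f in zip(weights, factors)) / sum(weights)
```

A factor with a very large variance contributes almost nothing, which matches the behavior described earlier: a compensation factor loses influence as the satellite's state drifts away from the conditions under which it was measured.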
Alternatively, the raw data received from the satellite at the ground station may be sent from the ground station to a receiver directly, without any processing to determine ground location information associated with images. The raw data, which includes information related to the position, attitude and distortion of the imaging system, is continuously collected at predetermined intervals and encompasses the ground access swath 400. The raw data may then be analyzed to determine images containing predetermined ground points. Using the predetermined ground points in those images, along with other information as described above, the ground locations for pixels in other images may be calculated. In one embodiment, the image(s) are transmitted to the receiver by conveying the images over the Internet. Typically, an image is conveyed in a compressed format. Once received, the receiver is able to produce an image of the earth location along with ground location information associated with the image. It is also possible to convey the image(s) to the receiver in other ways. For instance, the image(s) can be recorded on a magnetic disk, CD, tape or other recording medium and mailed to the receiver. If needed, the recording medium can also include the satellite position, attitude, and distortion information. It is also possible to produce a hard copy of an image and then mail the hard copy to the receiver. The hard copy can also be faxed or otherwise electronically sent to the receiver. [0047] While the invention has been particularly shown and described with reference to a preferred embodiment thereof, it will be understood by those skilled in the art that various other changes in the form and details may be made without departing from the spirit and scope of the invention.

Claims (20)

1. A method for determining location information of an earth image, comprising: obtaining at least one reference image; obtaining at least one target image associated with an earth view, said at least one target image not overlapping said at least one reference image; and using known location information associated with said at least one reference image to determine location information associated with said at least one target image.
2. The method for determining location information of an earth image as claimed in claim 1, wherein said at least one reference image is associated with either at least one terrestrial feature or at least one celestial object.
3. The method for determining location information of an earth image as claimed in claim 1, wherein said at least one target image is associated with either at least one natural terrestrial feature, at least one artificial terrestrial feature or both.
4. The method for determining location information of an earth image as claimed in claim 1, wherein said determining location information associated with said at least one target image comprises: determining known location information of at least one celestial object or at least one terrestrial feature imaged by said at least one reference image; using said known location information and imaging system movement information to determine at least one compensation factor and at least one error measurement associated with said at least one compensation factor for said at least one target image; and using said at least one compensation factor to determine location information associated with said at least one target image when the magnitude of said at least one error measurement is less than a predetermined error limit.
5. The method for determining location information of an earth image as claimed in claim 4, wherein said predetermined error limit is associated with at least one of attitude, position and distortion measurement information associated with an imaging system.
6. A method for determining location information of an earth image acquired from an imaging system, comprising: obtaining at least one reference image; obtaining at least one target image associated with an earth view, said at least one target image not overlapping said at least one reference image; locating at least a first ground point in said at least one reference image having known earth location information; determining an expected location of said first ground point in said at least one reference image using at least one of position, attitude and distortion information associated with the imaging system; calculating at least one compensation factor based on a comparison between said expected location information and said known location information of said first ground point; and determining location information of said at least one target image based on said at least one compensation factor.
7. The method for determining location information of an earth image as claimed in claim 6, wherein said at least one reference image is associated with either at least one terrestrial feature or at least one celestial object.
8. The method for determining location information of an earth image as claimed in claim 6, wherein said at least one target image is associated with either at least one natural terrestrial feature, at least one artificial terrestrial feature or both.
9. The method for determining location information of an earth image as claimed in claim 6, wherein calculating said at least one compensation factor comprises: determining a first position information of the imaging system when said at least one reference image was acquired by the imaging system; determining a first attitude information of the imaging system when said at least one reference image was acquired by the imaging system; determining a first distortion information of the imaging system when said at least one reference image was acquired by the imaging system; and solving for said at least one compensation factor for at least one of said first position information, said first attitude information and said first distortion information based on a difference between the location of said first ground point in said at least one reference image and said expected location of said ground point.
10. The method for determining location information of an earth image as claimed in claim 9, wherein determining location information of said at least one target image comprises: determining a second position information of the imaging system when said at least one target image was acquired by the imaging system, said second position information modified by said at least one compensation factor; determining a second attitude information of the imaging system when said at least one target image was acquired by the imaging system, said second attitude information modified by said at least one compensation factor; determining a second distortion information of the imaging system when said at least one target image was acquired by the imaging system, said second distortion information modified by said at least one compensation factor; and determining location information for at least one location in said at least one target image.
11. The method for determining location information of an earth image as claimed in claim 6, wherein calculating said at least one compensation factor comprises: determining a first position information and associated covariance of the imaging system when said at least one reference image was acquired; determining a first attitude information and associated covariance of the imaging system when said at least one reference image was acquired; determining a first distortion information and associated covariance of the imaging system when said at least one reference image was acquired; solving for said at least one compensation factor for each of said first position information, said first attitude information and said first distortion information of said imaging system based on the difference between the location of said first ground point in said at least one reference image and said expected location of said ground point, wherein said compensation factors are weighted by their respective covariances.
12. The method for determining location information of an earth image as claimed in claim 11, wherein determining location information of said at least one target image comprises: determining a second position information of the imaging system when said at least one target image was acquired by the imaging system, said second position information modified by said at least one compensation factor; determining a second attitude information of the imaging system when said at least one target image was acquired by the imaging system, said second attitude information modified by said at least one compensation factor; determining a second distortion information of the imaging system when said at least one target image was acquired by the imaging system, said second distortion information modified by said at least one compensation factor; and determining location information for at least one location in said at least one target image.
13. An image of an earth view comprising a plurality of pixels and earth location coordinates of at least one of said pixels, said plurality of pixels and earth location coordinates obtained by: obtaining at least one reference image, said at least one reference image comprising a plurality of pixels; determining a first pixel location of at least a first pixel in said at least one reference image associated with a first ground point having a known earth location; calculating at least one compensation factor based on a comparison between an expected pixel location of said first ground point and said first pixel location; obtaining at least one target image of an earth view, said at least one target image comprising a plurality of pixels, and said at least one target image not overlapping said at least one reference image; and determining an earth location for at least one pixel of said at least one target image based on said at least one compensation factor.
14. The image as claimed in claim 13, wherein said earth location is locatable to sub-pixel precision.
15. The image as claimed in claim 13, wherein said ground point comprises a natural feature or an artificial feature.
16. The image as claimed in claim 13, wherein calculating said at least one compensation factor comprises: determining a first position information and associated covariance of an imaging system when said at least one reference image was acquired; determining a first attitude information and associated covariance of the imaging system when said at least one reference image was acquired; determining a first distortion information and associated covariance of the imaging system when said at least one reference image was acquired; calculating said expected pixel location of said first ground point based on said first position information, said first attitude information and said first distortion information; determining a difference between said expected pixel location and said first pixel location; and solving for at least one compensation factor for each of a position, attitude and distortion of said imaging system based on said difference, wherein said compensation factors are weighted by their respective covariances.
17. The image as claimed in claim 16, wherein determining an earth location comprises: determining a second position information of the imaging system when said at least one target image was acquired; determining a second attitude information of the imaging system when said at least one target image was acquired; determining a second distortion information of the imaging system when said at least one target image was acquired; applying said compensation factor to at least one of said second position information, said second attitude information and said second distortion information; and determining an earth location for at least one pixel in said at least one target image.
18. A method for transporting an image towards an interested entity over a communications network, comprising: conveying, over a portion of the communication network, a digital image of an earth view comprising a plurality of pixels, at least one of said pixels having associated ground location information derived based on at least one compensation factor that has been determined based on at least one ground point from at least one reference image, wherein said at least one reference image is different than said digital image and said at least one reference image does not overlap said digital image.
19. The method of claim 18, wherein said digital image is associated with either at least one natural terrestrial feature, at least one artificial terrestrial feature or both.
20. The method of claim 18, wherein said at least one reference image is associated with either at least one terrestrial feature or at least one celestial object.
AU2005333561A 2005-06-24 2005-12-23 Method and apparatus for determining a location associated with an image Abandoned AU2005333561A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
AUPCT/US2005/022961 2005-06-24
PCT/US2005/022961 WO2006078310A2 (en) 2004-06-25 2005-06-24 Method and apparatus for determining a location associated with an image
PCT/US2005/046749 WO2007001471A2 (en) 2005-06-24 2005-12-23 Method and apparatus for determining a location associated with an image

Publications (1)

Publication Number Publication Date
AU2005333561A1 true AU2005333561A1 (en) 2007-01-04

Family

ID=37595619

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2005333561A Abandoned AU2005333561A1 (en) 2005-06-24 2005-12-23 Method and apparatus for determining a location associated with an image

Country Status (5)

Country Link
EP (1) EP1899889A2 (en)
JP (1) JP2009509125A (en)
AU (1) AU2005333561A1 (en)
CA (1) CA2613252A1 (en)
WO (1) WO2007001471A2 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5567805B2 (en) * 2009-08-31 2014-08-06 ライトハウステクノロジー・アンド・コンサルティング株式会社 Flying object detection method, system, and program
CN102798381B (en) * 2012-07-20 2014-12-24 中国资源卫星应用中心 Scene division cataloging method based on geographical location of real image
JP6131568B2 (en) * 2012-10-30 2017-05-24 株式会社ニコン Microscope device and image forming method
US10048084B2 (en) * 2016-09-16 2018-08-14 The Charles Stark Draper Laboratory, Inc. Star tracker-aided airborne or spacecraft terrestrial landmark navigation system
US10935381B2 (en) 2016-09-16 2021-03-02 The Charles Stark Draper Laboratory, Inc. Star tracker-aided airborne or spacecraft terrestrial landmark navigation system

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
JPS59133667A (en) * 1983-01-20 1984-08-01 Hitachi Ltd Processing system of picture correction
US4688092A (en) * 1986-05-06 1987-08-18 Ford Aerospace & Communications Corporation Satellite camera image navigation
US6735348B2 (en) * 2001-05-01 2004-05-11 Space Imaging, Llc Apparatuses and methods for mapping image coordinates to ground coordinates
US6810153B2 (en) * 2002-03-20 2004-10-26 Hitachi Software Global Technology, Ltd. Method for orthocorrecting satellite-acquired image
KR100519054B1 (en) * 2002-12-18 2005-10-06 한국과학기술원 Method of precision correction for geometrically distorted satellite images

Also Published As

Publication number Publication date
WO2007001471A3 (en) 2007-02-08
JP2009509125A (en) 2009-03-05
WO2007001471A2 (en) 2007-01-04
EP1899889A2 (en) 2008-03-19
CA2613252A1 (en) 2007-01-04

Similar Documents

Publication Publication Date Title
US20080063270A1 (en) Method and Apparatus for Determining a Location Associated With an Image
He et al. An integrated GNSS/LiDAR-SLAM pose estimation framework for large-scale map building in partially GNSS-denied environments
Grodecki et al. IKONOS geometric accuracy
Nagai et al. UAV-borne 3-D mapping system by multisensor integration
US4313678A (en) Automated satellite mapping system (MAPSAT)
Grejner-Brzezinska Direct exterior orientation of airborne imagery with GPS/INS system: Performance analysis
Mostafa et al. Digital image georeferencing from a multiple camera system by GPS/INS
Muller et al. A program for direct georeferencing of airborne and spaceborne line scanner images
Burman Calibration and orientation of airborne image and laser scanner data using GPS and INS
Nagai et al. UAV borne mapping by multi sensor integration
Seiz et al. Cloud mapping from the ground: Use of photogrammetric methods
Fujisada et al. ASTER stereo system performance
Haala et al. Hybrid georeferencing of images and LiDAR data for UAV-based point cloud collection at millimetre accuracy
US20150211864A1 (en) Image navigation and registration (inr) transfer from exquisite systems to hosted space payloads
AU2005333561A1 (en) Method and apparatus for determining a location associated with an image
Mostafa et al. A fully digital system for airborne mapping
KR20080033287A (en) Method and apparatus for determining a location associated with an image
Storey et al. A geometric performance assessment of the EO-1 advanced land imager
Breuer et al. Geometric correction of airborne whiskbroom scanner imagery using hybrid auxiliary data
Hu et al. Scan planning optimization for 2-D beam scanning using a future geostationary microwave radiometer
Ishii et al. Autonomous UAV flight using the Total Station Navigation System in Non-GNSS Environments
Perry et al. Precision directly georeferenced unmanned aerial remote sensing system: Performance evaluation
Seiz et al. Cloud mapping using ground-based imagers
JPH01229910A (en) Navigating device
Haala et al. Geometric Processing of High Resolution Airborne Scanner Imagery Using GPS-INS and Ground Control Points

Legal Events

Date Code Title Description
MK1 Application lapsed section 142(2)(a) - no request for examination in relevant period