EP3574472B1 - Apparatus and method for registering recorded images - Google Patents

Apparatus and method for registering recorded images

Info

Publication number
EP3574472B1
Authority
EP
European Patent Office
Prior art keywords
image
base
transformations
images
modified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP18701525.0A
Other languages
German (de)
French (fr)
Other versions
EP3574472A1 (en)
Inventor
John Edward DAWSON
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
UK Secretary of State for Defence
Original Assignee
UK Secretary of State for Defence
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by UK Secretary of State for Defence
Publication of EP3574472A1
Application granted
Publication of EP3574472B1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/32Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/37Determination of transform parameters for the alignment of images, i.e. image registration using transform domain methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30184Infrastructure

Definitions

  • Figure 3 shows a flow diagram of a prior art approach to registering a base image 9 and secondary image 10. Both images are provided as digital images.
  • A digital elevation model 11 is provided in addition to a base sensor model 12 and a secondary sensor model 13.
  • The first step in transforming the secondary image is to convert the image coordinates of the base image to world coordinates. This step is achieved using the sensor model for the base image and the elevation information, and yields a three dimensional world coordinate 14 for every pixel of the base image defined by its dimensions 15.
  • The next step is to convert these world coordinates into image coordinates within the secondary image using the secondary image sensor model 13.
  • The result of this step is an image coordinate in the secondary image for each pixel of the base image 16.
  • This result can then be used to interpolate 17 the pixel values of the secondary image 10 onto a pixel grid that corresponds to the base image, thereby reprojecting 18 the pixels of the secondary image onto that grid.
  • This method of image registration is approximate as it does not account for errors in either of the sensor models or in the elevation data.
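  • As a summary of this flow, the following minimal Python sketch may help. It is illustrative only: base_model and sec_model are hypothetical objects exposing image_to_ground(lines, samps, dem) and ground_to_image(lat, lon, h) methods, and dem is a hypothetical elevation model object; none of these names come from the patent.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def reproject_secondary(base_shape, base_model, sec_model, dem, sec_image):
    """Approximate registration of Figure 3: map every base pixel to the
    ground and on into the secondary image, then resample the secondary
    image onto the base pixel grid."""
    lines, samps = np.meshgrid(np.arange(base_shape[0]),
                               np.arange(base_shape[1]), indexing="ij")
    # Base image coordinates -> world coordinates (14), using the base
    # sensor model (12) constrained by the digital elevation model (11).
    lat, lon, h = base_model.image_to_ground(lines, samps, dem)
    # World coordinates -> secondary image coordinates (16), using the
    # secondary sensor model (13).
    sec_lines, sec_samps = sec_model.ground_to_image(lat, lon, h)
    # Interpolate (17) the secondary pixel values onto the base grid,
    # yielding the reprojected secondary image (18).
    return map_coordinates(sec_image, [sec_lines, sec_samps], order=1)
```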
  • Figure 4 shows a flow diagram of the derivation of the relative offset error.
  • Figure 4 is applicable to the prior art design in Figure 3, other possible approaches such as that shown in Figure 5, and the embodiment of the invention shown in Figure 6.
  • This uses elevation information provided as a digital elevation model and ground to image transformations provided as a secondary image sensor model and a base image sensor model, as per Figure 3 (Prior Art).
  • The prior art approach in Figure 3 is followed to obtain a reprojected secondary image 18.
  • This image 18 and the base image 9 are then used as inputs to a digital image correlation calculation 19 (e.g. cross correlation or phase correlation), the output of which is a correlation score 20 and an offset 21 between the base image and the reprojected secondary image.
  • The offset 21 can be applied to shift either the base image or the reprojected image such that they align with each other. Alternatively it may be applied as a bias correction to the base image sensor model, such that reapplying the prior art process of Figure 3 yields a reprojected secondary image equivalent to the one calculated using the uncorrected sensor models and then shifted by the amount computed by the digital image correlation.
  • This offset can be applied as the relative offset error of the present invention.
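  • One way to realise the digital image correlation 19 is phase correlation. The sketch below is a standard construction rather than code from the patent; it returns the integer offset 21 and the correlation peak value as the score 20 (sub-pixel refinement is omitted).

```python
import numpy as np

def phase_correlate(base, reproj):
    """Return (offset, score) aligning reproj to base by phase correlation."""
    cross = np.fft.fft2(base) * np.conj(np.fft.fft2(reproj))
    cross /= np.abs(cross) + 1e-12           # discard magnitude, keep phase
    surface = np.real(np.fft.ifft2(cross))   # peak lies at the relative shift
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    # Shifts beyond half the image size wrap around; map them to negative offsets.
    offset = tuple(p - n if p > n // 2 else p for p, n in zip(peak, surface.shape))
    return offset, float(surface[peak])
```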
  • Figure 5 shows a flow diagram of an alternative approach which is described here to assist the reader in understanding the embodiment of the invention illustrated in Figure 6 .
  • This has image transformations provided as a base sensor model 12 and a secondary sensor model 13, and has elevation information 11 provided as a digital elevation model.
  • The base image 9 and secondary image 10 are provided as digital images.
  • A plurality of biases 23, 24 is applied to each of the base and secondary image sensor models to derive a plurality of modified sensor models for each image 25, 26.
  • For each pair of modified sensor models the reprojected secondary image 18 is computed as per Figure 3 (27).
  • A digital image correlation algorithm 19 is then applied to compute both a correlation score 20 and an offset 21 (not shown), from which an adjustment to the bias corrections can be computed (28).
  • The combination of base and secondary sensor model biases that yields the strongest correlation is deemed to be the correct value, and the optimally corrected sensor models are taken to be the nominally corrected sensor models that yielded this strongest correlation, adjusted if necessary by the corresponding adjustment values.
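  • As a sketch, this alternative approach is a doubly nested search over candidate bias pairs. The helper BiasedTransformation (a wrapper that adds a candidate line and sample offset to a sensor model, sketched later in this document) and the grids of candidate biases are illustrative assumptions; reproject_secondary and phase_correlate are the sketches above.

```python
def search_bias_pairs(base_img, sec_img, base_model, sec_model, dem,
                      base_biases, sec_biases):
    """Figure 5: try every combination of base bias (23) and secondary
    bias (24), reproject as per Figure 3 (27), and keep the combination
    with the strongest correlation score (20); the residual offset gives
    the adjustment to the bias corrections (28)."""
    best = None
    for sb in sec_biases:                          # modified secondary models (26)
        for bb in base_biases:                     # modified base models (25)
            reproj = reproject_secondary(base_img.shape,
                                         BiasedTransformation(base_model, *bb),
                                         BiasedTransformation(sec_model, *sb),
                                         dem, sec_img)
            offset, score = phase_correlate(base_img, reproj)   # 19
            if best is None or score > best[0]:
                best = (score, sb, bb, offset)
    return best
```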
  • Figure 6 shows a flow diagram of an embodiment of the invention.
  • This embodiment has image transformations (not shown) provided as a base sensor model 12 and a secondary sensor model 13, and has elevation information 11 provided as a digital elevation model.
  • The base image 9 and secondary image 10 are provided as digital images.
  • A plurality of biases 24 is applied to the secondary image sensor model to derive a plurality of nominally corrected sensor models 26.
  • For each secondary image bias a corresponding matched bias is calculated 29 for the base image sensor model (possibly using a precomputed relative offset 30) to derive a nominally corrected base image sensor model 31.
  • The reprojected secondary image is computed as per the prior art method shown in Figure 3 (27), using each nominally corrected secondary image sensor model 26 and the corresponding nominally corrected base image sensor model 31.
  • A digital image correlation algorithm 19 is then applied to compute both a correlation score 20 and an adjustment to the bias corrections 32.
  • The combination of base and secondary sensor model biases that yields the strongest correlation is deemed to be the correct value, and the optimally corrected sensor models are taken to be those given by the bias terms that yielded this strongest correlation.
  • By construction the adjustment value 32 should be zero for all candidate biases.
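  • A corresponding sketch for this embodiment searches over the secondary biases only, deriving each matched base bias 29 from the precomputed relative offset 30 and the Jacobians of the two sensor models (equation 1 of the Description, with the initial secondary correction taken as zero). The jacobian helper and the scene_centre argument are illustrative assumptions.

```python
import numpy as np

def search_matched_biases(base_img, sec_img, base_model, sec_model, dem,
                          sec_biases, relative_offset, scene_centre):
    """Figure 6: search only the secondary bias terms (24); derive the
    matched base bias (29) for each so that every reprojected image (27)
    is free of image alignment error, then keep the bias pair with the
    strongest correlation score (20)."""
    J_b = jacobian(base_model, *scene_centre)   # Jacobians with respect to
    J_s = jacobian(sec_model, *scene_centre)    # world coordinates (equation 1)
    best = None
    for sb in sec_biases:                       # nominally corrected models (26)
        bb = np.asarray(relative_offset) + J_b @ np.linalg.inv(J_s) @ np.asarray(sb)
        reproj = reproject_secondary(base_img.shape,
                                     BiasedTransformation(base_model, *bb),  # 31
                                     BiasedTransformation(sec_model, *sb),
                                     dem, sec_img)
        offset, score = phase_correlate(base_img, reproj)   # 19
        # offset is the adjustment (32); by construction it should be near zero.
        if best is None or score > best[0]:
            best = (score, sb, bb)
    return best
```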
  • Figure 7 shows two images, 1a and 2a, of a scene 3a, where the arrows 1b and 2b represent the ground to image relationship for images 1a and 2a respectively.
  • The diagram shows how knowledge of the ground to image relationships and elevation data can be used to relate points in one image to their corresponding location in another.
  • Figure 8 shows the collection geometry of two images, 1a and 2a, of a scene, 3a.
  • Points 1b and 2b in the respective images are pixels in each image that correspond to the feature on the ground located at 3b (represented by the star symbol *).
  • The raw ground to image transformations map the points 1b and 2b via 1c and 2c to the incorrect ground points 1d and 2d.
  • The true ground point 3b is projected to the incorrect image positions 1e and 2e via the raw ground to image transformations 1f and 2f. Therefore a mapping from image 1a to image 2a using the illustrated ground to image transformations and elevation data would not map point 1b to point 2b.
  • Figure 9 shows the collection geometry of two images, 1a and 2a, of a scene, 3a.
  • Points 1b and 2b in the respective images are pixels in each image that correspond to the feature on the ground located at 3b (represented by the star symbol *).
  • The raw ground to image transformations map the points 1b and 2b via 1c and 2c to the incorrect ground points 1d and 2d.
  • The true ground point 3b is projected to the incorrect image positions 1e and 2e via the raw ground to image transformations 1f and 2f.
  • 1g and 2g represent some correction to the ground to image transforms for images 1a and 2a, yielding corrected images 1h and 2h, such that with the correction applied the ground point 3b now projects to the updated locations of image points 1b and 2b in the corrected images 1h and 2h.
  • Figure 10 shows the collection geometry of the same two images 1a and 2a, of the scene, 3a as in Figure 9 .
  • Points 1b and 2b in the respective images are pixels in each image that correspond to the feature on the ground located at 3b (represented by the star symbol *).
  • The raw ground to image transformations map the points 1b and 2b via 1c and 2c to the incorrect ground points 1d and 2d.
  • 1g is a correction to the ground to image transformation for image 1a that matches the uncorrected ground to image transformation for image 2a. This yields the corrected image 1h.
  • This correction ensures that the image points 1b and 2b, which represent the same ground feature, project to the same ground coordinate as each other (2d), but because image 2a has not been corrected, this ground coordinate is not in the correct location (it should be at 3b). However, despite this, the correction 1g to image 1a enables the point 1b in image 1a to be mapped to the point 2b in image 2a via the ground point 2d.
  • The present invention provides an apparatus and method for registering two recorded digital images having different transformations (information relating to the view from which the image was, or appears to be, taken), so as to provide one or both images with a transformation adjusted to match that of the other, or to provide a correction of an estimated transformation.
  • The method involves translating or adjusting one or both transformations with respect to each other to maximise the degree to which they match, and then translating or adjusting the two transformations in tandem with respect to a 3D elevation model of the ground or object or environment, and determining the translation or adjustment which results in the best image match between the two images.
  • The invention is applicable to the fields of mapping, change detection, image stitching and comparison of images including those of terrain, objects and physical environments, and to land management, flood management, medical scans and the monitoring of historic or vulnerable buildings.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Description

  • The present invention relates to the field of sensing imagery, in particular to the registration of sensed images (images recorded by a sensor such as a camera).
  • There are a number of practical applications that use sensed imagery including geospatial mapping, surveillance, medical imaging and robotics. Within these applications there is a requirement to be able to compare different sensed images of the same location with each other (for instance to detect changes in a geographic location over time, or to monitor changes in medical imagery of a patient), and to fuse the information from multiple sensed images (for instance the fusion of sensed images taken in different wavebands in order to derive a better understanding of a physical object). A first key step in these processes is to geometrically align the sensed images into the same coordinate system, such that pixels in the images correspond to the same features of the location imaged. This process is known as image registration.
  • A sensed image is an image generated by a sensor observing a physical object, location or environment. The sensor itself has intrinsic properties and characteristics including, but not limited to, waveband of operation, number of sensing elements, field of view and lens variations. These intrinsic characteristics will influence the appearance of any resultant sensed image generated by the sensor. Generally these properties are known and sensed images can be corrected accordingly. However, a sensed image will also be influenced by the geometry of the sensor relative to the object or location being imaged (the viewing geometry). In particular for remote sensing, the viewing geometry can have dramatic effects on the representation of a three dimensional surface as a two dimensional image. For instance, a sensor such as a photographic camera observing the ground from on-board an aircraft may produce a sensed image of a location comprising a tall building that is directly beneath the aircraft (at the nadir), or may produce a sensed image of the location when the location is forwards and below the aircraft.
  • This is illustrated in Figure 1 and Figure 2. The different viewing geometries will result in sensed images with different perspective views of the location. At nadir the sensed image will only show the top of the tall building, whereas other viewing geometries will result in sensed images where the tall building appears to lean away from the centre of the image. Practically this makes direct analysis and comparison of sensed imagery difficult. It is therefore a requirement in remote sensing applications that sensed images of a location are first transformed into the same coordinate system, providing the same perspective view of the location in each image, and such that three dimensional objects in the location are represented in the same way.
  • In many remote sensing applications a given sensor's position and orientation (and therefore line of sight) relative to some coordinate system are known (albeit often only approximately), enabling the determination of the approximate pixel locations (line and sample) in the sensed image corresponding to real world coordinates (latitude, longitude, height). Alternatively, where sensor position and orientation are not available, a sensor model (e.g. rigorous or replacement) may be used to provide this ground to image transformation, or processing (such as that from the field of photogrammetry) can be applied to derive such a transformation. Common to all such transformations, a three dimensional world coordinate is mapped to a two dimensional image coordinate, and this mapping therefore has a unique solution. However, mapping image coordinates to World coordinates does not have a unique solution, owing to the need to map a two dimensional coordinate into a three dimensional space.
  • Therefore in order to convert image coordinates in a sensed image to World coordinates it is necessary to use elevation information for the location imaged, or to use some other means of constraining the World coordinates to two degrees of freedom. A person skilled in the art will be familiar with the provision of elevation information in different forms including by way of a Digital Elevation Model (DEM). For the purposes of this document the term ground to image transformation encapsulates any transformation from world to image coordinates and its inverse (with suitable constraint) from image to ground coordinates. Thus it becomes feasible to convert image coordinates of features in a first sensed image, to ground coordinates, and then to convert those ground coordinates to image coordinates in a second sensed image, thereby deriving a transformation between different image coordinates. An example of such an approach is provided by Ozcanli, Ozge C. et al in "Automatic Geo-location Correction of Satellite Imagery", 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, IEEE, 23 June 2014, pages 307-314. In reality however this only provides an approximate mapping between the two sets of image coordinates for reasons including:
    • Inaccuracy of the ground to image transformation/s
    • Inaccuracy of the elevation model (including random errors, lack of fidelity and interpolation errors and bias caused by the elevation data being misaligned in any dimension).
    Thus the error/s in the ground to image transformation/s must be derived and corrected in order to be truly confident that the coordinates and representation of features in sensed images are accurate when converting between coordinate systems. (A minimal code sketch of such a constrained transformation follows this list.)
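  • The sketch below fixes a minimal Python interface for such a ground to image transformation. The class and method names and the fixed-point inversion scheme are illustrative assumptions, not an API defined by the patent; the point is that the image to ground direction only becomes unique once the elevation information constrains the solution to the terrain surface.

```python
class GroundToImage:
    """Illustrative interface for a ground to image transformation."""

    def ground_to_image(self, lat, lon, h):
        # 3D world coordinate -> 2D image coordinate: always a unique solution.
        raise NotImplementedError   # e.g. a rigorous or replacement sensor model

    def invert_at_height(self, line, samp, h):
        # Inverse of ground_to_image on the horizontal plane at height h.
        raise NotImplementedError

    def image_to_ground(self, line, samp, dem, iters=10):
        """Constrain the inverse with a DEM: alternately invert the
        transformation at the current height estimate and re-read the height
        from the DEM, until the solution settles onto the terrain surface."""
        h = 0.0
        for _ in range(iters):
            lat, lon = self.invert_at_height(line, samp, h)
            h = dem.height(lat, lon)
        return lat, lon, h
```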
  • Therefore it is an aim of the invention to provide a method that overcomes these issues such that sensed images can be accurately registered with each other in a common coordinate system.
  • According to a first aspect of the invention there is provided a method as set out in claim 1.
  • According to a second aspect of the invention there is provided an apparatus as set out in claim 12.
  • Image registration is the process of taking two images and identifying a transformation that describes how the image coordinates in one image map to corresponding locations (objects or terrains) in the other image. A registered image is an image that has had such a transformation applied to it, such that it aligns with another image, often referred to as the base image. The image to which the transformation is applied may be referred to as the secondary image. Often the base image is an image that was recorded earlier than the secondary image, but optionally the images may be images that were recorded simultaneously from different sensors.
  • The term "performing an image comparison of the transformed image with its corresponding image or modified version thereof' covers various possibilities, including that the base image is transformed for comparison to the secondary, the secondary is transformed for comparison to the base, or that both are transformed - for example into an orthorectified view - and compared to each other Additionally it is possible that the image comparison is completed without transforming either image, and that the transformation is used directly within the image comparison calculation applied to the raw images.
  • A ground to image transformation is used to relate real World coordinates (for instance latitude, longitude, and height) to their approximate locations in the image (image coordinates comprising line and sample). This may be provided in or derived from sensor metadata (for instance base image metadata and secondary image metadata for the sensors having obtained the respective base and secondary images) or from photogrammetric processing. A person skilled in the art will be familiar with the requirement for elevation information of the location being imaged (in particular the use of Digital Elevation Models), such that any conversion of image coordinates to World coordinates has a unique solution. The availability of a ground to image transformation means that in theory, features of a location having coordinates (pixels) in a secondary image, can have their corresponding coordinates (pixels) in a base image determined, thereby deriving a transformation that registers the two images. The geometric relationship between a pair of images and the scene is illustrated in Figure 7, which shows how the ground to image transformation for each image and knowledge of the in scene elevation data can be used to geometrically relate one image to the other.
  • A registered image is a secondary image that has undergone this process i.e. it has been converted to the coordinate system of the base image. This approach is considered an approximate registration of the images as the accuracy will be limited by errors in the ground to image transforms and elevation information which cause inaccuracies in world coordinates corresponding to positions in one image, which are compounded when they are also inaccurately projected into the other image. This is illustrated in Figure 8 which shows that with an uncorrected ground to image transformation a feature at a given ground coordinate does not get projected to the true location in the image of that feature, and similarly a feature at a given location in an image does not get projected to the correct location of that feature on the ground.
  • An improvement to the approximate registration approach is to generate modified ground to image transformations that more accurately represent the mapping between coordinate systems. If the transformations can be corrected such that they become substantially error-free, then the accuracy of mapping locations in the world to their corresponding features in the images (i.e. mapping World to image coordinates and vice versa) will be entirely dependent upon the accuracy of the elevation information. This is illustrated in Figure 9 which shows the corrected ground to image transformations for a pair of images yielding an accurate image to image transformation. Practically however the errors in the ground to image transformations are not known and so cannot be directly corrected for and must be derived through some other means.
  • Grodecki, J. and Dial, G. (2003), "Block Adjustment of High-Resolution Satellite Images Described by Rational Polynomials", Photogrammetric Engineering & Remote Sensing, Vol. 69, No. 1, pp. 59-68, present a method by which a number of geometric parameters of a sensor model can be replaced by a single adjustment parameter in each image dimension. This is particularly relevant to remote sensing applications where the distance between the location and the sensor is large relative to the field of view of the sensor, and the sensor metadata is relatively accurate.
  • This is deemed to be the case and a pair of individual bias parameters is used to correct the ground to image transformations. The biases applied to the transformations are offsets in image line and sample. The term corrected transformation refers to the transformation including these offset parameters and the uncorrected transformation refers to the transformation before the offsets are calculated and applied. A nominally corrected transformation has an offset applied to it but it is not known whether this is the true offset, and therefore whether it is indeed the corrected transformation.
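  • In code, a nominally corrected transformation is simply the uncorrected transformation with the candidate offsets added to its output; the wrapper below (the BiasedTransformation assumed in the earlier figure sketches) is an illustrative rendering of this.

```python
class BiasedTransformation:
    """An uncorrected ground to image transformation plus a candidate bias:
    offsets in image line and sample. Whether the candidate equals the true
    offset, making this the corrected transformation, is exactly what the
    search over candidate biases must establish."""

    def __init__(self, transform, bias_line, bias_samp):
        self.transform = transform
        self.bias_line, self.bias_samp = bias_line, bias_samp

    def ground_to_image(self, lat, lon, h):
        line, samp = self.transform.ground_to_image(lat, lon, h)
        return line + self.bias_line, samp + self.bias_samp

    def image_to_ground(self, line, samp, dem, iters=10):
        # Undo the bias before inverting the underlying transformation.
        return self.transform.image_to_ground(
            line - self.bias_line, samp - self.bias_samp, dem, iters)
```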
  • Following approximate registration of images using the uncorrected ground to image transforms and an elevation model the misalignment between the base image and the registered image will be largely due to four sources of error:
    • Image alignment error. This is caused by uncorrelated errors in the transformation models causing the same terrain or feature in each image to project onto the elevation data in different locations. This is typically the major component of the misalignment and manifests itself largely as a linear offset between the images.
    • Terrain perspective error. This is caused by one or both images being projected onto the elevation model at the wrong location and resulting in the differences in terrain perspective between the two images being incorrectly accounted for.
    • Bias errors in the elevation data. With perfect ground to image transformations for all images, the registered image will only be accurately aligned with the base image if the elevation data is also perfectly aligned with real world coordinates. However, usually there are errors both horizontally and vertically in elevation data, so this will be a source of image misalignment.
    • Random errors in the elevation data. For example, lack of resolution or other localised flaws in the elevation data will also cause misalignment of the registered images.
  • Through use of an appropriate image comparison operation (for example cross correlation or phase correlation), the image alignment error between the images output by the approximate registration approach discussed previously can be recovered independently of any terrain perspective errors or errors in the elevation data. Such an offset can be considered the relative correction to the base image ground to image transformation. Figure 10 shows how this relative correction can enhance the image to image relationship for some points within the images but does not necessarily align the images with the elevation data. The result of applying this relative correction is that the larger flatter areas of the images will be well aligned but at areas of changing gradient on the ground the images will appear misaligned.
  • Therefore, by repeating the approximate registration approach, but with a relatively corrected base transformation, the resultant registered image will not have an image alignment error, but may still present terrain perspective errors and errors due to inaccuracy of the elevation data. This is owing to the fact that the necessary correction to the secondary image has still not been derived. Deriving the corrections for both images is essential if the sensed images are to project onto the elevation information such that features in the images align with the same features in the elevation information and terrain perspective errors are minimised. Note that this minimisation occurs when the corrected transformation models align the images with the elevation data despite any bias in its alignment with real world coordinates, and not necessarily when the corrected models are perfectly aligned with the real world.
  • Therefore a plurality of nominal corrections are applied to the secondary image's ground to image transformation and an equivalent plurality of nominal corrections are applied to the base image ground to image transformation to generate a plurality of sets of nominally corrected ground to image transformations. The secondary image is transformed using each of the sets of nominally corrected ground to image transformations, and the elevation information, thereby generating a plurality of transformed images. Each of the transformed images is then compared (through an image comparison operation) to the base image to determine a value for a measure of image similarity. The set of corrections which maximises this measure of image similarity will yield registered images with minimised image alignment and terrain perspective errors. A set in this context comprises corrections to the base and secondary image ground to image transformations.
  • The image comparison operation may be a correlation operation (a cross correlation or phase correlation). The measure of image similarity for this embodiment of the invention may be a correlation score. The inventor has determined that a particularly well suited correlation operation is phase correlation owing to its tolerance of intrinsic differences between the images including but not limited to sensor noise, illumination conditions, shadows, man-made changes, as well as differences in excess of linear shift, for instance slight differences in rotation, scale and also terrain distortion.
  • Where an image correlation (for example cross correlation or phase correlation) is used as the comparison operator, a person skilled in the art will recognise that computing the image correlation between the base image and a registered secondary image (calculated using a nominal correction to the secondary image ground to image transformation), will simultaneously compute the correlation score for each possible base image correction. The maximum score will correspond to the base image correction that matches that nominal secondary image correction.
  • This has the advantage that it reduces the dimensionality of the optimisation problem to the number of degrees of freedom of the correction model for one image rather than both images. The problem becomes one of finding the optimal nominal correction to the secondary image and calculating the matched correction for the base image transformation as well as the image similarity value via correlation. The optimal correction is that which maximises image similarity.
  • However, computing the image similarity in this manner has the disadvantage that a bias is introduced. The method favours nominal secondary image transformations which correspond to matched base image corrections with small magnitude. This is due to the degree of overlap between the base image and the nominally registered image decreasing as larger corrections are applied to the base image. This reduced overlap reduces the image correlation as it applies to an increasingly small sub-image. Additionally the varying levels of overlap between the base image and successive nominally registered images may cause the image similarity measure to fluctuate significantly with small changes in secondary image correction parameters, which is undesirable for an optimisation algorithm that seeks the optimum value using gradient information. This is particularly apparent when using phase correlation as the image comparison metric and is particularly detrimental when the robustness of phase correlation to differences in appearance and content between the images is required in order to successfully identify the optimum alignment.
  • This can be mitigated by computing the matched base image correction for each candidate secondary image correction as above and then re-computing a plurality of registered images using the plurality of nominal secondary image corrections along with the precomputed matched base image correction. Recomputing the image similarity score for each candidate nominal secondary image correction using the precomputed matched corrections for the base image will yield a set of image similarity scores that have been computed with full overlap between the base image and nominally registered secondary image. This results in a set of scores that can be compared to each other without bias. Moreover recomputing the image similarity in this manner enables the similarity measure to be used as a stable cost function for an optimisation algorithm.
  • In preferred embodiments of the invention the transformations from ground to image coordinates are provided as mathematical functions from world coordinates to image coordinates. In this case, following the calculation of the matched correction for an initial estimate of the secondary image correction terms, the matched corrections for subsequent estimates can be computed directly. One such method is shown in equation 1:

    $$\begin{pmatrix} b_{line} \\ b_{samp} \end{pmatrix} = \begin{pmatrix} b_{line}^{0} \\ b_{samp}^{0} \end{pmatrix} + J_b J_s^{-1} \begin{pmatrix} s_{line} - s_{line}^{0} \\ s_{samp} - s_{samp}^{0} \end{pmatrix} \tag{1}$$
  • $s_{line}^{0}$ and $s_{samp}^{0}$ are the initial estimates of the correction to the secondary image (typically both are equal to zero) and $b_{line}^{0}$ and $b_{samp}^{0}$ are the matched correction to the base image corresponding to $s_{line}^{0}$ and $s_{samp}^{0}$ (computed for example via image correlation of the nominally registered images). $J_b$ and $J_s$ are the Jacobians of the base and secondary transformation functions with respect to world coordinates. $s_{line}$ and $s_{samp}$ are another nominal correction to the secondary image transformation and $b_{line}$ and $b_{samp}$ (computed as per equation 1) are the matched correction to the base image transformation that corresponds to $s_{line}$ and $s_{samp}$.
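  • A direct rendering of equation 1 in numpy is sketched below, with the 2x2 Jacobians approximated by central differences over the horizontal world coordinates; the step size and function names are illustrative assumptions.

```python
import numpy as np

def jacobian(transform, lat, lon, h, d=1e-5):
    """2x2 Jacobian of (line, samp) with respect to (lat, lon) at fixed
    height, estimated by central differences."""
    f = lambda la, lo: np.asarray(transform.ground_to_image(la, lo, h))
    return np.column_stack([(f(lat + d, lon) - f(lat - d, lon)) / (2 * d),
                            (f(lat, lon + d) - f(lat, lon - d)) / (2 * d)])

def matched_correction(s, s0, b0, J_b, J_s):
    """Equation 1: the matched base correction for a nominal secondary
    correction s, given the initial estimate s0 (typically zero), its
    matched base correction b0 (for example from phase correlation of the
    nominally registered images), and the Jacobians J_b and J_s."""
    return np.asarray(b0) + J_b @ np.linalg.inv(J_s) @ (np.asarray(s) - np.asarray(s0))
```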
  • Pre-calculating the matched correction for each nominal secondary image correction results in nominally registered images with no image alignment error relative to the base image. This has three major advantages. Firstly, it results in a stable image comparison function. Secondly, it reduces the search space to the correction terms of the secondary image (and not the terms of both images). Thirdly, in cases where a correlation based image similarity function is used, its computation is simplified as it only needs to be computed for the case of zero offset, which has the further benefit of rendering the image similarity function differentiable with respect to the correction parameters of the secondary image (it removes the need to search the correlation output for its maximum - a non-analytical step). This is beneficial for the application of advanced optimisation techniques.
  • The value from the plurality of values for the measure of image similarity corresponding to greatest image similarity may be the maximum value. This value will occur when the corrections applied to the base and secondary transformations result in a registered image that is most similar to the base image. This must occur when the registered image and base image optimally overlay (the method ensures this is the case for all registered images), but importantly also when the features in the transformed image are represented in a manner most similar to the base image. Such a scenario can only occur when the secondary image has been accurately converted to World coordinates and then back to image coordinates i.e. any distortion effects owing to viewing geometry and three dimensional terrain are accurately corrected for. The value corresponding to the greatest measure of image similarity will thus represent a transformed image truly registered to the base image and will also represent the true bias corrections to the ground to image transformations.
  • In some embodiments of the invention a change detection operation may be applied to the registered image and base image. The change detection operation may identify features of the location that appear differently in the registered image and the base image, highlighting said differences on either of the registered image and base image. The change detection operation may simply output the image coordinates of said differences as an array of values. Other methods of outputting differences detected between the two images will be apparent to a person skilled in the art.
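  • As an illustration of the simplest such output, the sketch below thresholds the pixelwise difference between the aligned images and returns the image coordinates of the changes as an array; the threshold and any radiometric normalisation are application dependent assumptions.

```python
import numpy as np

def change_coordinates(base, registered, threshold):
    """Return an N x 2 array of (line, sample) coordinates at which the base
    and registered images differ by more than threshold."""
    diff = np.abs(base.astype(float) - registered.astype(float))
    return np.argwhere(diff > threshold)
```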
  • In some embodiments of the invention an image fusion operation may be applied to the registered image and base image. The image fusion operation may output an image comprising the registered image and base image overlaid with predetermined levels of transparency.
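  • A minimal sketch of such an overlay by alpha blending, assuming the two inputs are co-registered single band arrays of equal size:

```python
def fuse(base, registered, alpha=0.5):
    """Overlay the base and registered images with a predetermined level of
    transparency: alpha weights the base image, (1 - alpha) the registered."""
    return alpha * base.astype(float) + (1.0 - alpha) * registered.astype(float)
```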
  • Some embodiments of the invention may comprise the additional step of identifying the corrected ground to image transformations for the base and secondary images, and using said transformations and the elevation information to convert the base and secondary images to orthorectified images. Because the registered image will only be generated when correct biases have been determined for the sensor metadata, the corresponding corrected ground to image transformations can be used to accurately convert image coordinates of features in the base and secondary images to world coordinates. The resultant images will have distortion effects removed such that the images are orthorectified. Orthorectified images are particularly useful for mapping and other applications where spatial information needs to be measured (for example, distances between objects and features of the location). A resampling sketch follows.
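  • A hedged orthorectification sketch in Python; the helper names world_to_image (the corrected ground to image transformation) and dem (an elevation lookup) are hypothetical placeholders, and bilinear resampling via scipy is one possible choice among several.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def orthorectify(image, world_to_image, dem, grid_east, grid_north):
    """Resample `image` onto a regular map grid. For each (east, north)
    cell, look up terrain height in `dem`, project the 3D world point into
    image (line, sample) coordinates with the corrected transformation,
    and bilinearly interpolate the pixel value at that position."""
    east, north = np.meshgrid(grid_east, grid_north)
    height = dem(east, north)                       # hypothetical DEM lookup
    line, samp = world_to_image(east, north, height)
    return map_coordinates(image, [line, samp], order=1, mode="constant")
```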
  • Some embodiments of the invention may comprise the additional step of directing at least one sensor having a line of sight, such that the line of sight intersects the location, and having the at least one sensor produce the first and second sensed images. A sensor or multiple sensors may be directed to image a location, said sensors comprising satellites, sensors on aircraft, or others. The sensors may be directed to image the location at the same time (for instance in different wavebands) or at different times (for instance if the requirement is to measure change). The sensors may have the same or different viewing geometries. The first and second sensed images may be stored on board the sensor or may be transmitted to a different location upon acquisition, or at some other time. The sensed images may be stored as digital images or as physical images (for instance photographic film) which can later be provided as digital images through other means (scanning, for instance).
  • Brief Description of the Drawings
  • A preferred embodiment of the invention will now be described by way of example only and with reference to the accompanying drawings, in which:
    • Figure 1 shows a sensor observing a three dimensional location from nadir and corresponding image;
    • Figure 2 shows a sensor observing a three dimensional location from off nadir and corresponding image;
    • Figure 3 shows a flow diagram of a prior art approximate approach to registering two sensed images via reprojection;
    • Figure 4 shows a flow diagram of the derivation of the relative offset error of the invention;
    • Figure 5 shows a flow diagram of an alternative approach to aid understanding;
    • Figure 6 shows a flow diagram of an embodiment of the invention;
    • Figure 7 shows a cross section of a terrain as viewed by two sensors;
    • Figure 8 shows an illustration of the errors present in typical ground to image transformations and their effect on the conversion between image and world coordinates;
    • Figure 9 shows an illustration of corrections to the ground to image transformations and their effect on the image to image relationship; and
    • Figure 10 shows an illustration of the relative correction of one image within a pair of images.
    Detailed Description
  • Figure 1 shows a viewing geometry 1 of a sensor 2 observing a location. The location is three dimensional (not shown) and comprises a tall building 3. The viewing geometry 1 is such that the sensor 2 is observing the tall building 3 at the nadir. The field of view of the sensor is indicated by the two arrows originating at the sensor 2 and intercepting a physical object at points annotated A and B in the figure. The sensor 2 obtains a sensed image 4 which may be provided as a base digital image. The sensed image 4 shows a location comprising the top of a building 5 in the centre of the image. In contrast, Figure 2 shows a different viewing geometry 6 where a sensor 2 (which may be the same or a different sensor as used in Figure 1) is observing the location comprising the same tall building 3, but away from the nadir position. The field of view of the sensor is indicated by the arrows, and the points of intercept of the extremities of the field of view with a physical object are shown by the points annotated A and B. It is clear that the sensor 2 is now able to observe the side and top of the building 3. Again the sensor 2 obtains a sensed image 7, which may be provided as a secondary digital image. The tall building 8 is represented differently in this image when compared to the previous image 4. It is difficult to tell from the image 7 alone whether the feature 8 is a tall building or a wide building.
  • Figure 3 shows a flow diagram of a prior art approach to registering a base image 9 and secondary image 10. Both images are provided as digital images. A digital elevation model 11 is provided in addition to a base sensor model 12 and a secondary sensor model 13. The first step in transforming the secondary image is to convert the image coordinates of the base image into world coordinates. This first step is achieved using the sensor model for the base image and the elevation information, and yields a three dimensional world coordinate 14 for every pixel of the base image defined by its dimensions 15. The next step is to convert these world coordinates into image coordinates within the secondary image using the secondary image sensor model 13. The result of this step is an image coordinate in the secondary image for each pixel of the base image 16. This result can then be used to interpolate 17 the pixel values of the secondary image 10 onto a pixel grid that corresponds to the base image, thereby reprojecting 18 the pixels of the secondary image onto a grid that corresponds to the base image. This method of image registration is approximate as it does not account for errors in either of the sensor models or in the elevation data. A code sketch of this pipeline is given below.
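  • For concreteness, a minimal Python sketch of this reprojection pipeline; the helper names base_image_to_world and secondary_world_to_image stand in for the sensor models (with elevation handling folded into the former) and are hypothetical, as is the choice of bilinear interpolation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def reproject_secondary(base_shape, secondary_image,
                        base_image_to_world, secondary_world_to_image):
    """Prior-art reprojection (Figure 3): for every base-image pixel, go to
    3D world coordinates via the base sensor model and the DEM, then into
    secondary-image coordinates via the secondary sensor model, and finally
    interpolate the secondary image at those positions."""
    lines, samps = np.meshgrid(np.arange(base_shape[0]),
                               np.arange(base_shape[1]), indexing="ij")
    # Base sensor model + elevation: (line, samp) -> (east, north, height).
    east, north, height = base_image_to_world(lines, samps)
    # Secondary sensor model: world -> (line, samp) in the secondary image.
    sec_line, sec_samp = secondary_world_to_image(east, north, height)
    # Resample the secondary image onto the base pixel grid.
    return map_coordinates(secondary_image, [sec_line, sec_samp],
                           order=1, mode="constant")
```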
  • Figure 4 shows a flow diagram of the derivation of the relative offset error. Figure 4 is applicable to the prior art design in Figure 3, to other possible approaches such as that shown in Figure 5, and to the embodiment of the invention shown in Figure 6. In particular this uses elevation information provided as a digital elevation model and ground to image transformations provided as a secondary image sensor model and a base image sensor model as per Figure 3 (Prior Art). The prior art approach in Figure 3 is followed to obtain a reprojected secondary image 18. This image 18 and the base image 9 are then used as inputs to a digital image correlation calculation 19 (e.g. cross correlation or phase correlation), the output of which is a correlation score 20 and an offset 21 between the base image and the reprojected secondary image. The offset 21 can be applied to shift either the base image or the reprojected image such that they align with each other. Alternatively, it may be applied as a bias correction to the base image sensor model, such that reapplying the process of Figure 3 yields a reprojected secondary image equivalent to the one calculated with the uncorrected sensor models and then shifted by the amount computed by the digital image correlation. This offset can be applied as the relative offset error of the present invention. A phase correlation sketch follows.
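  • To illustrate one way the offset 21 might be computed, a standard phase correlation sketch in Python; phase correlation is a generic technique and not necessarily the exact correlation calculation used here.

```python
import numpy as np

def phase_correlation_offset(base, reprojected):
    """Estimate the (row, col) shift between two equally sized images via
    phase correlation, returning the offset and the correlation peak value."""
    F1 = np.fft.fft2(base)
    F2 = np.fft.fft2(reprojected)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12   # normalise magnitudes
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak positions past the midpoint to negative shifts.
    shift = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shift), corr.max()
```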
  • Figure 5 shows a flow diagram of an alternative approach which is described here to assist the reader in understanding the embodiment of the invention illustrated in Figure 6. In particular this has image transformations provided as a base sensor model 12 and a secondary sensor model 13, and has elevation information 11 provided as a digital elevation model. The base image 9 and secondary image 10 are provided as digital images. A plurality of biases 23, 24 is applied to each of the base and secondary image sensor models to derive a plurality of modified sensor models for each image 25, 26. For each pair of modified base and secondary image sensor models, the reprojected secondary image 18 is computed as per Figure 3, 27. A digital image correlation algorithm 19 is then applied to compute both a correlation score 20 and an offset 21 (not shown), from which an adjustment to the bias corrections can be computed 28. The combination of base and secondary sensor model bias that yields the strongest correlation is deemed to be the correct value, and the optimally corrected sensor models are taken to be the nominally corrected sensor models that yielded this strongest correlation, adjusted if necessary by the corresponding adjustment values.
  • Figure 6 shows a flow diagram of an embodiment of the invention. In particular this embodiment has image transformations (not shown) provided as a base sensor model 12 and a secondary sensor model 13, and has elevation information 11 provided as a digital elevation model. The base image 9 and secondary image 10 are provided as digital images. A plurality of biases 24 is applied to the secondary image sensor model to derive a plurality of nominally corrected sensor models 26. For each secondary image bias a corresponding matched bias is calculated 29 for the base image sensor model (possibly using a precomputed relative offset 30) to derive a nominally corrected base image sensor model 31. The reprojected secondary image is then computed as per the prior art method shown in Figure 3, 27, using each nominally corrected secondary image sensor model 26 and the corresponding nominally corrected base image sensor model 31. A digital image correlation algorithm 19 is then applied to compute both a correlation score 20 and an adjustment to the bias corrections 32. The combination of base and secondary sensor model bias that yields the strongest correlation is deemed to be the correct value, and the optimally corrected sensor models are taken to be those corresponding to the bias terms that yielded this strongest correlation. Because the base image bias is matched to the secondary image bias, the adjustment value 32 should be zero for all candidate biases. An end-to-end sketch of this search follows.
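  • Bringing the pieces together, a hedged end-to-end sketch of this embodiment in Python. It reuses the illustrative helpers sketched above (matched_base_correction, reproject_secondary, zero_offset_ncc), and the model factories make_base_model and make_secondary_model are hypothetical stand-ins for applying a bias to a sensor model.

```python
def register(base_image, secondary_image, candidate_biases,
             s0, b0, J_b, J_s, make_base_model, make_secondary_model):
    """For each candidate secondary-image bias, compute the matched base
    bias (Equation 1), reproject the secondary image with the nominally
    corrected models, and score the zero-offset similarity; the candidate
    with the highest score identifies the optimally corrected models."""
    best = None
    for s in candidate_biases:
        b = matched_base_correction(s, s0, b0, J_b, J_s)
        # Hypothetical factories returning biased sensor-model functions.
        base_model = make_base_model(bias=b)
        secondary_model = make_secondary_model(bias=s)
        registered = reproject_secondary(base_image.shape, secondary_image,
                                         base_model, secondary_model)
        score = zero_offset_ncc(base_image, registered)
        if best is None or score > best[0]:
            best = (score, s, b, registered)
    return best  # (score, secondary bias, matched base bias, registered image)
```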
  • Figure 7 shows two images, 1a and 2a, of a scene 3a, where the arrows 1b and 2b represent the ground to image relationship for images 1a and 2a respectively. As a whole the diagram shows how knowledge of the ground to image relationships and elevation data can be used to relate points in one image to their corresponding location in another.
  • Figure 8 shows the collection geometry of two images, 1a and 2a, of a scene, 3a. Points 1b and 2b in the respective images are pixels in each image that correspond to the feature on the ground located at 3b (represented by the star symbol ★). The raw ground to image transformations map the points 1b and 2b via 1c and 2c to the incorrect ground points 1d and 2d. The true ground point, 3b, is projected to the incorrect image positions 1e and 2e via the raw ground to image transformations 1f and 2f. Therefore a mapping from image 1a to image 2a using the illustrated ground to image transformations and elevation data would not map point 1b to point 2b.
  • Figure 9 shows the collection geometry of two images, 1a and 2a, of a scene, 3a. Points 1b and 2b in the respective images are pixels in each image that correspond to the feature on the ground located at 3b (represented by the star symbol ★). The raw ground to image transformations map the points 1b and 2b via 1c and 2c to the incorrect ground points 1d and 2d. The true ground point, 3b, is projected to the incorrect image positions 1e and 2e via the raw ground to image transformations 1f and 2f. 1g and 2g represent corrections to the ground to image transforms for images 1a and 2a, yielding corrected images 1h and 2h, such that with the correction applied the ground point 3b now projects to the updated locations of image points 1b and 2b in the corrected images 1h and 2h.
  • Figure 10 shows the collection geometry of the same two images, 1a and 2a, of the scene, 3a, as in Figure 9. Points 1b and 2b in the respective images are pixels in each image that correspond to the feature on the ground located at 3b (represented by the star symbol ★). The raw ground to image transformations map the points 1b and 2b via 1c and 2c to the incorrect ground points 1d and 2d. 1g is a correction to the ground to image transformation for image 1a that matches the uncorrected ground to image transformation for image 2a. This yields the corrected image 1h. This correction ensures that the image points 1b and 2b, which represent the same ground feature, project to the same ground coordinate as each other (2d), but because image 2a has not been corrected, this ground coordinate is not in the correct location (it should be at 3b). However, despite this, the correction 1g to image 1a enables the point 1b in image 1a to be mapped to the point 2b in image 2a via the ground point 2d.
  • In more general terms, the present invention provides an apparatus and method for registering two recorded digital images having different transformations (information relating to the view from which the image was, or appears to have been, taken), so as to provide one or both images with a transformation adjusted to match that of the other, or to provide a correction of an estimated transformation.
  • The method involves translating or adjusting one or both transformations with respect to each other to maximise the degree to which they match, and then translating or adjusting the two transformations in tandem with respect to a 3D elevation model of the ground, object or environment, and determining the translation or adjustment which results in the best image match between the two images.
  • The invention is applicable to the fields of mapping, change detection, and image stitching and comparison of images including those of terrain, objects and physical environments, and to land management, flood management, medical scans and monitoring of historic or vulnerable buildings.

Claims (12)

  1. A computer implemented remote sensing method of registering a base recorded image (9) of an object or terrain with a secondary recorded image (10) of the object or terrain comprising the steps:
    Providing the base (9) and secondary (10) recorded images with corresponding base (12) and secondary (13) image transformations,
    Wherein the base (12) and secondary (13) image transformations each comprise information sufficient to enable a position in world coordinates to be translated into a location in the respective image;
    Providing a 3D model (11) of the object or terrain;
    Identifying a relative offset error between the base (12) and secondary (13) image transformations, the relative offset error being an initial estimate of the offset of the base image (9) to the secondary image (10) when the images are transformed into the same image coordinate system using the base (12) and secondary (13) image transformations;
    Identifying true absolute offset errors of the image transformations (12, 13), to identify true transformations of the recorded images (9, 10), by:
    Generating a plurality of pairs of modified transformations by:
    Applying a plurality of absolute offset errors (24) to the secondary image transformation (13), to generate in each case modified secondary image transformations (26); and
    Applying for each of the absolute offset errors (24), a corresponding matched-bias (29) to the base image transformation (12) to generate a respective modified-base image transformation (31), the matched-bias (29) in each case comprising the relative offset error and a further offset that aligns the base image (9) to the secondary image (10) when the images are transformed into the same coordinate system using the respective base image transformation (12) and modified secondary image transformation (26);
    Identifying which pair of modified transformations is most accurate by
    For each of the pairs, transforming at least one of the base (9) and secondary (10) images to generate a transformed image, using:
    The modified base image transformation (31) of that pair;
    The modified secondary image transformation (26) of that pair, and;
    The 3D model (11);
    such as to provide two nominally aligned images, and performing an image comparison (19) of the nominally aligned images, to generate a value for a measure of image similarity (20); and
    Identifying the pair of modified transformations (26, 31) corresponding to the greatest image similarity (20) as being true image transformations, and the corresponding absolute offset error (24) and matched bias (29) as the true absolute offset errors; and
    Outputting as an image registration, either one of the pair of true image transformations and/or a transformed image corresponding to one of the true pair of image transformations.
  2. A computer implemented method according to claim 1 wherein the image comparison operation (19) is a correlation operation.
  3. A computer implemented method according to claim 2 wherein the measure of image similarity (20) is a correlation score.
  4. A computer implemented method according to any one of claims 1-3 wherein the base image transformation (12) and the secondary image transformation (13) are provided by base and secondary sensor models.
  5. A computer implemented method according to claim 4 wherein the absolute offset errors (24) and matched biases (29) are applied to the secondary (13) and base (12) sensor models, thereby generating a plurality of sets of modified-secondary and modified-base sensor models.
  6. A computer implemented method according to claim 5 wherein the matched biases (29) applied to the base sensor model (12) are calculated from the absolute offset errors (24) applied to the secondary sensor model (13).
  7. A computer implemented method according to any preceding claim further comprising the step of applying a change detection operation to the registered secondary image and base image.
  8. A computer implemented method according to any one of claims 1-6 further comprising the step of performing an image fusion operation to the registered secondary image and base image.
  9. A computer implemented method according to any one of the preceding claims, wherein the matched biases (29) are calculated according to:

    $$\begin{pmatrix} b_{line} \\ b_{samp} \end{pmatrix} = \begin{pmatrix} b_{line}^{0} \\ b_{samp}^{0} \end{pmatrix} + J_b J_s^{-1} \begin{pmatrix} s_{line} - s_{line}^{0} \\ s_{samp} - s_{samp}^{0} \end{pmatrix}$$

    where $J_b$ and $J_s$ are the Jacobians of the base (12) and secondary (13) transformations respectively, $b_{line}^{0}$, $b_{samp}^{0}$, $s_{line}^{0}$ and $s_{samp}^{0}$ are the initial estimates for the matched bias (29) and absolute offset error in line and sample, $s_{line}$ and $s_{samp}$ are the next pair of absolute offset errors (24) in line and sample for the secondary image (10), and $b_{line}$ and $b_{samp}$ are the corresponding matched biases (29) for the base image (9).
  10. A computer implemented method according to any preceding claim further comprising the step of identifying the modified-base (31) and modified-secondary (26) sensor transformations corresponding to the maximum value for the measure of image similarity (20), and using said transformations and the elevation information (11) to convert the base (9) and secondary (10) images to orthorectified images.
  11. A computer implemented method according to any preceding claim further comprising the additional step of directing at least one sensor having a line of sight, such that the line of sight intersects the object or terrain and having the at least one sensor produce the base (9) and secondary (10) recorded images.
  12. Remote sensing apparatus for registering a base recorded image (9) of an object or terrain with a secondary recorded image (10) of the object or terrain, the apparatus comprising a computer processor and computer data storage jointly configured to:
    Provide the base (9) and secondary (10) recorded images with corresponding base (12) and secondary (13) image transformations,
    The base (12) and secondary (13) image transformations each comprise information sufficient to enable a position in world coordinates to be translated into a location in the respective image;
    Provide a 3D model (11) of the object or terrain;
    Identify a relative offset error between the base (12) and secondary (13) image transformations, the relative offset error being an initial estimate of the offset of the base image (9) to the secondary image (10) when the images are transformed into the same image coordinate system using the base (12) and secondary (13) image transformations;
    Identify true absolute offset errors of the image transformations (12, 13), to identify true transformations of the recorded images (9, 10), by:
    Generating a plurality of pairs of modified transformations by:
    Applying a plurality of absolute offset errors (24) to the secondary image transformation (13), to generate in each case modified secondary image transformations (26); and
    Applying for each of the absolute offset errors (24), a corresponding matched-bias (29) to the base image transformation (12) to generate a respective modified-base image transformation (31), the matched-bias (29) in each case comprising the relative offset error and a further offset that aligns the base image (9) to the secondary image (10) when the images are transformed into the same coordinate system using the respective base image transformation (12) and modified secondary image transformation (26);
    Identifying which pair of modified transformations (26, 31) is most accurate by
    For each of the pairs, transforming at least one of the base (9) and secondary (10) images to generate a transformed image, using:
    The modified base image transformation (31) of that pair;
    The modified secondary image transformation (26) of that pair, and;
    The 3D model (11);
    such as to provide two nominally aligned images, and performing an image comparison (19) of the nominally aligned images, to generate a value for a measure of image similarity (20); and
    Identifying the pair of modified transformations (26, 31) corresponding to the greatest image similarity (20), as being true image transformations, and the corresponding absolute offset error (24) and matched bias (29) as the true absolute offset errors; and
    Output as an image registration, either one of the pair of true image transformations and/or a transformed image corresponding to one of the true pair of image transformations.
EP18701525.0A 2017-01-27 2018-01-16 Apparatus and method for registering recorded images Active EP3574472B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB1701363.2A GB201701363D0 (en) 2017-01-27 2017-01-27 Apparatus and method for registering recorded images
PCT/GB2018/000006 WO2018138470A1 (en) 2017-01-27 2018-01-16 Apparatus and method for registering recorded images

Publications (2)

Publication Number Publication Date
EP3574472A1 EP3574472A1 (en) 2019-12-04
EP3574472B1 true EP3574472B1 (en) 2021-09-29

Family

ID=58462820

Family Applications (1)

Application Number Title Priority Date Filing Date
EP18701525.0A Active EP3574472B1 (en) 2017-01-27 2018-01-16 Apparatus and method for registering recorded images

Country Status (6)

Country Link
US (1) US11120563B2 (en)
EP (1) EP3574472B1 (en)
AU (1) AU2018213100B2 (en)
CA (1) CA3049306A1 (en)
GB (2) GB201701363D0 (en)
WO (1) WO2018138470A1 (en)

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5872630A (en) * 1995-09-20 1999-02-16 Johs; Blaine D. Regression calibrated spectroscopic rotating compensator ellipsometer system with photo array detector
US6219462B1 (en) * 1997-05-09 2001-04-17 Sarnoff Corporation Method and apparatus for performing global image alignment using any local match measure
US8160364B2 (en) 2007-02-16 2012-04-17 Raytheon Company System and method for image registration based on variable region of interest
US20090232388A1 (en) * 2008-03-12 2009-09-17 Harris Corporation Registration of 3d point cloud data by creation of filtered density images
US8170840B2 (en) * 2008-10-31 2012-05-01 Eagle View Technologies, Inc. Pitch determination systems and methods for aerial roof estimation
CN103026381B (en) * 2010-04-18 2016-01-20 图象欧洲公司 Dual-stack projects
KR20120055102A (en) * 2010-11-23 2012-05-31 삼성전자주식회사 Image processing apparatus and image processing method
US9977978B2 (en) 2011-11-14 2018-05-22 San Diego State University Research Foundation Image station matching, preprocessing, spatial registration and change detection with multi-temporal remotely-sensed imagery
US20160321838A1 (en) * 2015-04-29 2016-11-03 Stmicroelectronics S.R.L. System for processing a three-dimensional (3d) image and related methods using an icp algorithm
CN106682912B (en) * 2015-11-10 2021-06-15 艾普维真股份有限公司 Authentication method of 3D structure
CN109715071B (en) * 2016-09-16 2022-08-02 皇家飞利浦有限公司 Device and method for detecting an interventional tool
US10212410B2 (en) * 2016-12-21 2019-02-19 Mitsubishi Electric Research Laboratories, Inc. Systems and methods of fusing multi-angle view HD images based on epipolar geometry and matrix completion
US10379545B2 (en) * 2017-07-03 2019-08-13 Skydio, Inc. Detecting optical discrepancies in captured images
US10970815B2 (en) * 2018-07-10 2021-04-06 Raytheon Company Multi-source image fusion
US10992873B2 (en) * 2019-01-18 2021-04-27 Qualcomm Incorporated Systems and methods for color matching for realistic flash images

Also Published As

Publication number Publication date
GB2560243A (en) 2018-09-05
EP3574472A1 (en) 2019-12-04
WO2018138470A1 (en) 2018-08-02
GB2560243B (en) 2019-06-12
GB201701363D0 (en) 2017-03-15
CA3049306A1 (en) 2018-08-02
AU2018213100A1 (en) 2019-07-18
US20190333235A1 (en) 2019-10-31
US11120563B2 (en) 2021-09-14
AU2018213100B2 (en) 2022-04-07
GB201800752D0 (en) 2018-02-28

Similar Documents

Publication Publication Date Title
US8428344B2 (en) System and method for providing mobile range sensing
De Franchis et al. An automatic and modular stereo pipeline for pushbroom images
US5581638A (en) Method for autonomous image registration
KR102249769B1 (en) Estimation method of 3D coordinate value for each pixel of 2D image and autonomous driving information estimation method using the same
US8107722B2 (en) System and method for automatic stereo measurement of a point of interest in a scene
US7313252B2 (en) Method and system for improving video metadata through the use of frame-to-frame correspondences
CN108510551B (en) Method and system for calibrating camera parameters under long-distance large-field-of-view condition
US20060215935A1 (en) System and architecture for automatic image registration
US20050220363A1 (en) Processing architecture for automatic image registration
US8406511B2 (en) Apparatus for evaluating images from a multi camera system, multi camera system and process for evaluating
KR20150112362A (en) Imaging processing method and apparatus for calibrating depth of depth sensor
US7149346B2 (en) Three-dimensional database generating system and method for generating three-dimensional database
EP3332387B1 (en) Method for calibration of a stereo camera
Tong et al. Detection and estimation of along-track attitude jitter from Ziyuan-3 three-line-array images based on back-projection residuals
Yang et al. Relative geometric refinement of patch images without use of ground control points for the geostationary optical satellite GaoFen4
KR102195040B1 (en) Method for collecting road signs information using MMS and mono camera
EP3574472B1 (en) Apparatus and method for registering recorded images
WO2002012830A1 (en) Height measurement apparatus
KR20150119770A (en) Method for measuring 3-dimensional cordinates with a camera and apparatus thereof
Zhang et al. Registration of CBERS-02B satellite imagery in quick GIS updating
Radhadevi et al. Performance assessment and geometric calibration of resourcesat-2
Guérin Effect of the DTM quality on the bundle block adjustment and orthorectification process without GCP: Exemple on a steep area
Ban et al. Relative Geometric Correction of Multiple Satellite Images by Rigorous Block Adjustment
Ragia et al. Homography and Image Processing Techniques for Cadastre Object Extraction.
Petrova et al. BINDING AERIAL IMAGES OBTAINED BY AN UNMANNED AIRCRAFT VEHICLE TO OTHER IMAGES AND MAPS.

Legal Events

Code  Title and description
STAA  Status of the EP application/patent: UNKNOWN
STAA  Status of the EP application/patent: THE INTERNATIONAL PUBLICATION HAS BEEN MADE
PUAI  Public reference made under Article 153(3) EPC to a published international application that has entered the European phase (original code: 0009012)
STAA  Status of the EP application/patent: REQUEST FOR EXAMINATION WAS MADE
17P   Request for examination filed; effective date: 20190711
AK    Designated contracting states (kind code of ref document: A1): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
AX    Request for extension of the European patent; extension states: BA ME
DAV   Request for validation of the European patent (deleted)
DAX   Request for extension of the European patent (deleted)
GRAP  Despatch of communication of intention to grant a patent (original code: EPIDOSNIGR1)
STAA  Status of the EP application/patent: GRANT OF PATENT IS INTENDED
INTG  Intention to grant announced; effective date: 20210423
GRAS  Grant fee paid (original code: EPIDOSNIGR3)
GRAA  (Expected) grant (original code: 0009210)
STAA  Status of the EP application/patent: THE PATENT HAS BEEN GRANTED
AK    Designated contracting states (kind code of ref document: B1): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
REG   National code GB: FG4D
REG   National code CH: EP; national code AT: REF (ref document number 1434876, kind code T); effective date: 20211015
REG   National code DE: R096 (ref document number 602018024230)
REG   National code IE: FG4D
REG   National code LT: MG9D
PG25  Lapsed in a contracting state, failure to submit a translation of the description or to pay the fee within the prescribed time limit: SE, RS, FI, HR, LT effective 20210929; NO, BG effective 20211229
REG   National code NL: MP; effective date: 20210929
REG   National code AT: MK05 (ref document number 1434876, kind code T); effective date: 20210929
PG25  Lapsed (translation/fee): LV, AT effective 20210929; GR effective 20211230
PG25  Lapsed (translation/fee): SK, RO, PL, NL, ES, EE, CZ, AL effective 20210929; IS effective 20220129; PT effective 20220131
REG   National code DE: R097 (ref document number 602018024230)
PG25  Lapsed (translation/fee): DK effective 20210929
PLBE  No opposition filed within time limit (original code: 0009261)
STAA  Status of the EP application/patent: NO OPPOSITION FILED WITHIN TIME LIMIT
PG25  Lapsed (translation/fee): MC effective 20210929
REG   National code CH: PL
26N   No opposition filed; effective date: 20220630
REG   National code BE: MM; effective date: 20220131
PG25  Lapsed (non-payment of due fees): LU effective 20220116
PG25  Lapsed (translation/fee): SI effective 20210929; lapsed (non-payment of due fees): BE effective 20220131
PG25  Lapsed (non-payment of due fees): LI, CH effective 20220131
PG25  Lapsed (translation/fee): IT effective 20210929; lapsed (non-payment of due fees): IE effective 20220116
P01   Opt-out of the competence of the Unified Patent Court (UPC) registered; effective date: 20230510
PG25  Lapsed (translation/fee): SM, MK, CY effective 20210929
PGFP  Annual fee paid to national office: DE, payment date 20240119, year of fee payment 7; GB, payment date 20240123, year of fee payment 7
PG25  Lapsed (translation/fee; invalid ab initio): HU effective 20180116
PGFP  Annual fee paid to national office: FR, payment date 20240122, year of fee payment 7