WO2015140484A1 - A method, apparatus, system, and computer readable medium for enhancing differences between two images of a structure - Google Patents

Info

Publication number
WO2015140484A1
Authority
WO
WIPO (PCT)
Prior art keywords
images
registering
image
photometric
result
Application number
PCT/GB2014/050857
Other languages
French (fr)
Inventor
Riccardo Gherardi
Bjorn Stenger
Oliver Woodford
Frank Perbet
Pablo ALCANTARILLA
Sam Johnson
Minh-Tri Pham
Roberto Cipolla
Akihito Seki
Ryuzo Okada
Original Assignee
Kabushiki Kaisha Toshiba
Toshiba Research Europe Limited
Application filed by Kabushiki Kaisha Toshiba, Toshiba Research Europe Limited filed Critical Kabushiki Kaisha Toshiba
Priority to PCT/GB2014/050857 priority Critical patent/WO2015140484A1/en
Publication of WO2015140484A1 publication Critical patent/WO2015140484A1/en

Classifications

    • G06T3/14
    • G06T5/92
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30221Sports video; Sports image
    • G06T2207/30224Ball; Puck

Definitions

  • In some examples, each of the registering steps comprises a step of regularizing a respective intermediate computational result to create a respective modified intermediate computational result and basing the result of that registering step upon the respective modified intermediate computational result.
  • In other examples, only one of the registering steps comprises a step of regularizing a respective intermediate computational result to create a respective modified intermediate computational result and basing the result of that one of the registering steps upon the respective modified intermediate computational result.
  • Also provided is a computer readable medium, which may be a non-transitory computer readable medium, carrying computer readable instructions arranged for execution upon a processor so as to cause the processor to carry out any or all of the methods described herein.
  • the term computer readable medium as used herein refers to any medium that stores data and/or instructions for causing a processor to operate in a specific manner.
  • Such a storage medium may comprise non-volatile media and/or volatile media.
  • Non-volatile media may include, for example, optical or magnetic disks.
  • Volatile media may include dynamic memory.
  • Exemplary forms of storage medium include a floppy disk, a flexible disk, a hard disk, a solid state drive, a magnetic tape, any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with one or more patterns of holes or protrusions, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, and any other memory chip or cartridge.

Abstract

A method of enhancing differences between two images comprises geometrically and photometrically registering the two images to correct for geometric and photometric differences therebetween and adjusting a portion of one of the images based on the results of the geometric and photometric registrations.

Description

A method, apparatus, system, and computer readable medium for enhancing differences between two images of a structure
Field

This disclosure relates to image difference enhancement. In particular, but without limitation, this disclosure relates to the enhancement of differences in images of a physical structure that are acquired at different times so as to facilitate the detection of temporal changes in the structure.
Background

Physical structures such as tunnels, bridges, dams, roads, and buildings can change over time. Some changes, such as a changing of colour due to a watermark on a pipe, are not of concern to engineers. However, some changes, such as the appearance of a crack or a leak in a tunnel, are of great concern to engineers and so structures may need to be regularly monitored in order to identify changes thereto. Visual inspection of a structure is a good way of identifying changes in that structure but can be highly labour intensive and can be susceptible to observer inconsistency.
An approach to reduce the labour intensive nature of manual inspection is to pass one or more image capture devices, such as a camera, along the structure so as to record the state of the structure during an initial time period. Images of the structure that are subsequently acquired can then be compared with the image data acquired during the initial time period.
Summary
Aspects and features of the invention are set out in the claims.

Brief description of the drawings
Examples of the present disclosure will now be described with reference to the accompanying drawings in which:
Figure 1 shows a cross-section through a tunnel lining in which an image capture device is positioned;
Figure 2 shows an exemplary block diagram of the macro components of the computer;
Figure 3 shows a flow diagram illustrating the steps of a method according to the present disclosure;
Figure 4 shows an image of a structure that has been acquired at a first time point;
Figure 5 shows an image of the structure of Figure 4 that has been acquired at a second time point;
Figure 6 shows a visual representation of the x-component of the deformation field produced following geometric registration of the images of Figures 4 and 5;
Figure 7 shows a visual representation of the y-component of the deformation field produced following geometric registration of the images of Figures 4 and 5;
Figure 8 shows a gradient map that has been constructed by thresholding the x and y gradients of the deformation fields shown in Figures 6 and 7;
Figure 9 shows a binary image obtained by searching for the largest group of connected pixels in Figure 8 and masking out the other pixels;
Figure 10 shows a series of dots representing the locations of control points in a control point array that has been created based on the binary image of Figure 9;
Figure 11 shows the results of fitting a 2D spline to the representation of Figure 6 using the control point array specified in Figure 10;
Figure 12 shows the results of fitting a 2D spline to the representation of Figure 7 using the control point array specified in Figure 10;
Figure 13 shows an image that has been created by warping the image of Figure 5 by the deformation fields of Figures 11 and 12;
Figure 14 shows a brightness difference map between the image of Figure 4 and the image of Figure 13;
Figure 15 shows the results of fitting a 2D spline array to the photometric map of Figure 14, using the control point array specified in Figure 10; and
Figure 16 shows a subtraction image produced by adjusting the warped image of Figure 13 by the spline fitting of Figure 15 and subtracting that image from the image of Figure 4.

Detailed description
In the present disclosure, differences between two images, for example images of a structure that have been taken at different time points, are enhanced. This is achieved by performing a geometric registration of the two images to correct for alignment differences therebetween and also by performing a photometric registration of the two images to correct for illumination differences therebetween. When performing each of the geometric and photometric registrations, an initial registration result (an intermediate computational result) - for example a deformation field in the case of the geometric registration, or a brightness difference (or photometric) map in the case of the photometric registration - is processed to produce a modified intermediate computational result which is subsequently used to represent the result of the respective registration. The processing of the intermediate computational result causes the production of modified intermediate computational results that do not feature components that are due to changes in the structure. This enables a superior geometric and photometric correction to be applied to the images so as to enhance the differences therebetween that are due, not to alignment or illumination differences between the two images, but instead to changes in the structure.
Figure 1 shows a cross-section through a tunnel lining 110 in which an image capture device 112 is positioned. The image capture device 112 comprises a plurality of cameras 114 that are mounted to a body 116 of the image capture device 112 and which are arranged so as to capture overlapping images of the tunnel lining 110 when the image capture device 112 is present within the tunnel lining. The image capture device 112 further comprises a flat-bed trolley 118 upon which the image capture device 112 may ride so as to move longitudinally along the tunnel lining 110, thereby enabling the capture of images that overlap in both radial and longitudinal directions. The image capture device 112 further comprises a memory and communication module 120 that is arranged to record the captured images and subsequently communicate them wirelessly to a computer 122.
Figure 2 shows an exemplary block diagram of the macro components of the computer 122. The computer 122 comprises a micro-processor 210 arranged to execute computer readable instructions as may be provided to the computer 122 via one or more of: a network interface 212 arranged to enable the micro-processor 210 to communicate with an external network - for example the internet; a wireless interface 214; a plurality of input interfaces 216 including a keyboard, a mouse, a disk drive and a USB connection; and a memory 218 that is arranged to be able to retrieve and provide to the micro-processor 210 both instructions and data that have been stored in the memory 218. Further, the micro-processor 210 is coupled to a monitor 220 upon which a user interface may be displayed and further upon which the results of processing operations may be presented.
During operation, the image capture device 112 is traversed along the tunnel lining 110 whilst images are acquired by the plurality of cameras 114 and stored in the memory and communication module 120. Subsequently, the images recorded on the capture device are transmitted to the computer 122 and stored in the memory 218 thereof. Following such an initial scan of the tunnel lining 110, at a subsequent time point, for example when it is deemed to be time to again inspect the tunnel lining, the image capture device 112 is again positioned within the tunnel lining 110 and one or more further images are acquired. The further images are transmitted to the computer 122 so that they can be compared with the initially acquired images in order to identify whether any changes to the tunnel lining 110 have occurred.
Differences between initially acquired and subsequently acquired images may, in addition to being due to an underlying change in the structure, also be due to a number of other factors such as misalignment between the images (caused, for example, by the images having been taken from different positions) and differences in the direction and strength of the illuminant used during image capture - as may occur when different lighting rigs are employed, when a flash bulb fades during its lifetime, or when different flash bulbs produce different amounts of light - which can result in different shading being present in different images. An approach described herein is directed to the reduction of image differences that are not due to underlying changes in the structure. In particular, it is expected that image differences caused by image misalignment or due to differences in the direction and strength of the illuminant in areas unaffected by change will be describable with geometric and photometric maps that are continuous, smooth and cover most of the images. Globally applying such maps will enable the enhancement and detection of differences that are due to underlying changes in the structure by correcting for misalignments between the images and differences in the direction and strength of the illuminant.
Figure 3 shows a flow diagram illustrating the steps of a method according to the present disclosure. At step S310 the image capture device 112 is traversed along the structure (in this case the tunnel lining 110) during which a plurality of initial images are captured by the plurality of cameras 114. These initial images are captured during an initial time period and represent a recordal of the state of the tunnel lining 110 during that time period. At step S312, the plurality of initial images are used to create a colour 3D point cloud that provides an estimation of the spatial origin of the pixels of each of the recorded initial images and further provides an estimation of a number of camera acquisition parameters including: the position, orientation, and focal length of the camera that acquired each image, and the relative positions of the cameras. In this example, the approach that is used to provide the 3D point cloud is a Structure from Motion (SfM) processing approach. SfM is a method of 3-dimensional reconstruction: a camera position and the 3D shape of the scene around the camera are reconstructed from a plurality of images acquired at different viewpoints.
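By way of illustration, the following is a minimal two-view sketch of this kind of reconstruction, assuming OpenCV is available and that the camera intrinsics K are known; the pipeline described above works over many overlapping images and also estimates the camera acquisition parameters, so this is a sketch of the principle rather than the full SfM system.

```python
# Minimal two-view SfM sketch (hypothetical helper): match features, recover
# the relative camera pose from the essential matrix, and triangulate a
# sparse 3D point cloud.
import cv2
import numpy as np

def two_view_point_cloud(img1, img2, K):
    """Recover relative pose (R, t) and a sparse point cloud from two views."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep the strongest correspondences.
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    matches = sorted(matches, key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then relative rotation R and translation t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate the inlier correspondences into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inliers = mask.ravel().astype(bool)
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    return R, t, (pts4d[:3] / pts4d[3]).T  # N x 3 point cloud
```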
At step S314, a geometric model of the structure, in this case a geometric representation of the tunnel lining 110, is fitted to the 3D point cloud. At step S316, once the geometric model has been fitted to the 3D point cloud, a knowledge of the position and orientation of the camera that acquired each initial image is used to map that image onto the geometric model so as to create a textured surface wherein image pixels are mapped onto the surface.
At a subsequent time point, the image capture device 112 is again traversed along the tunnel lining 110 and one or more subsequent images are recorded at step S318. At step S320, a feature-based registration is performed between the textured surface and the subsequent image so as to identify which of the images that forms the textured surface best corresponds to the subsequent image and to provide an estimation of the transformation required to align the identified and subsequent images (the image that is identified being hereinafter referred to as the identified image). However, the feature-based registration may not provide subpixel registration accuracy, which can result in significant artifact creation in situations where a difference (or more sophisticated) operation is subsequently performed in order to identify changes in the structure.
At step S322, a geometric registration of the identified and subsequent images is performed taking into account the transformation determined by the feature-based registration. In this example the geometric registration approach is a non-rigid PatchMatch approach that computes a Nearest Neighbour Field (NNF) by breaking one of the images to be registered into a plurality of overlapping patches (in this case, one for each pixel) and then, for each patch, seeking to identify a local transformation that would optimize a similarity measure between the pixels of that patch and the image against which that patch is being registered. In this example the similarity measure that is used is a cross-correlation based approach. The result of the non-rigid registration is a deformation field that describes how locations in one of the registered images correspond to locations in the other of the registered images. The deformation field represents an intermediate computational result in the geometric registration process and may take the form of one or more mappings, for example a first mapping indicating by how much each pixel needs to be displaced in the x-direction in order to achieve alignment and a second mapping indicating by how much each pixel needs to be displaced in the y-direction in order to achieve alignment. The deformation field may be a combined mapping, or in the form of a matrix or a set of control point locations for application with mathematical splines, which are piecewise defined polynomials having a high degree of smoothness at the points where polynomial pieces connect. Low degree splines, for example cubic splines, can be easily implemented and can yield results similar to high-order polynomial interpolation while avoiding instability.
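The following sketch illustrates the objective of such a nearest neighbour field computation. For clarity it uses an exhaustive local search per patch rather than PatchMatch's much faster random-search-and-propagation scheme, and the patch size, search radius, and normalised cross-correlation score are illustrative assumptions.

```python
# Illustrative NNF computation: for each pixel-centred patch in `src`, search
# a small window in `dst` for the translation maximising normalised
# cross-correlation. PatchMatch reaches the same kind of field far faster;
# this brute-force version is a sketch of the objective only.
import numpy as np

def nnf_brute_force(src, dst, patch=7, radius=10):
    h, w = src.shape
    r, p = radius, patch // 2
    field = np.zeros((h, w, 2), np.float32)  # (dy, dx) per pixel
    for y in range(p, h - p):
        for x in range(p, w - p):
            a = src[y-p:y+p+1, x-p:x+p+1]
            a = (a - a.mean()) / (a.std() + 1e-8)
            best, best_off = -np.inf, (0, 0)
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if p <= yy < h - p and p <= xx < w - p:
                        b = dst[yy-p:yy+p+1, xx-p:xx+p+1]
                        b = (b - b.mean()) / (b.std() + 1e-8)
                        ncc = (a * b).mean()  # similarity of the two patches
                        if ncc > best:
                            best, best_off = ncc, (dy, dx)
            field[y, x] = best_off
    return field  # the deformation field: per-pixel x/y displacements
```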
At step S324, the deformation field is regularized so as to remove components associated with change therefrom, as such components are likely to have been caused by a change in the structure as opposed to misalignment between the identified and subsequent images. In this example the NNF approach produces a deformation field that is characterised by a first map that is the same size as the images and for which the intensity of any pixel is indicative of the 'x' direction displacement required in order to align a patch centered on that pixel with the image to which it is registered, and a second map which correspondingly indicates the 'y' direction displacement. A gradient map is produced based on the deformation field, in this case by creating x- and y-direction gradient sub-maps for each of the first and second maps and summing, on a pixel-by-pixel basis, each of the four gradient sub-maps. The gradient map is then thresholded to produce a binary map that represents with a first value areas of high gradient (which may, for example, be associated with changes in the structure or differences in image overlap) and represents with a second value areas of low gradient (which are likely to correspond to image differences due to geometric misalignment). A search algorithm is then run over all of the pixels in the binary map that have the second (low gradient) value to identify a sub-portion of one of the images corresponding to the largest connected group of such pixels and mask out other pixels. A regular grid of control points is then set out over the area covered by that group (the sub-portion) so as to define a 2D thin plate spline array. The 2D thin plate spline array is then fitted to each of the first and second maps of the deformation field, thereby ignoring components thereof which are likely to be due to underlying changes in the structure. The smoothed (or adjusted) deformation field is then a modified intermediate computational result.
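A sketch of this regularization step is given below, taking the two displacement maps as input. SciPy's RBFInterpolator with a thin-plate-spline kernel stands in for the 2D spline array described above, and the gradient threshold and control point spacing are assumed parameters.

```python
# Sketch of step S324: gradient map -> threshold -> largest connected
# low-gradient region -> coarse control grid -> smoothing thin-plate spline.
import numpy as np
from scipy import ndimage
from scipy.interpolate import RBFInterpolator

def regularize_field(dx, dy, grad_thresh=0.5, spacing=16):
    # Sum of absolute x/y gradients of both displacement maps.
    g = sum(np.abs(np.gradient(m, axis=a)) for m in (dx, dy) for a in (0, 1))
    low = g < grad_thresh  # low-gradient pixels: misalignment-only areas

    # Keep only the largest connected low-gradient region.
    labels, n = ndimage.label(low)
    if n:
        largest = 1 + np.argmax(ndimage.sum(low, labels, range(1, n + 1)))
        low = labels == largest

    # Control points on a regular grid restricted to that region.
    ys, xs = np.mgrid[0:dx.shape[0]:spacing, 0:dx.shape[1]:spacing]
    keep = low[ys, xs]
    ctrl = np.column_stack([ys[keep], xs[keep]])

    # Fit a smoothing thin-plate spline to each map, evaluate everywhere.
    grid = np.column_stack([m.ravel()
                            for m in np.mgrid[0:dx.shape[0], 0:dx.shape[1]]])
    out = []
    for m in (dx, dy):
        tps = RBFInterpolator(ctrl, m[ctrl[:, 0], ctrl[:, 1]],
                              kernel='thin_plate_spline', smoothing=1.0)
        out.append(tps(grid).reshape(dx.shape).astype(np.float32))
    return out  # the adjusted (dx, dy): a modified intermediate result
```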
At step S326, the identified image is warped (transformed) by the adjusted deformation field so as to correct the identified image for geometric differences between the identified and subsequent images.

At step S328, a photometric registration is performed to produce a map of brightness differences (a photometric map, which is an intermediate computational result) between the warped identified image and the subsequent image. Image brightness is determined on a pixel-by-pixel basis either by converting to grayscale or by recovering the brightness channel (B or L of an HS{B,L} representation, or Y of a YUV representation).
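As a sketch, the photometric map of this step might be computed as follows, using the grayscale-conversion option mentioned above (OpenCV assumed, BGR input):

```python
# Sketch of the photometric map: a per-pixel brightness difference between
# the geometrically corrected (warped) image and the subsequent image.
import cv2
import numpy as np

def brightness_difference(warped_bgr, subsequent_bgr):
    g1 = cv2.cvtColor(warped_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    g2 = cv2.cvtColor(subsequent_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    return g1 - g2  # the photometric map (intermediate computational result)
```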
The term 'photometric registration' is used herein to describe the identification of differences between two images that are due to photometric aspects of those images, such as differences in illumination intensity and direction when the images were acquired.
The purpose of the photometric registration is to undo the effect of ambient lighting discrepancies in the images. In particular, luminance gradients due to point light sources (flashes) and surface geometry need to be estimated and accounted for. Other apparent illumination differences, such as those caused by acquiring the images using two different cameras having slightly different light response characteristics, may also be corrected by photometric registration.
At step S330, the photometric map is regularized so as to remove components associated with change therefrom. In this example, a two-dimensional regular grid of spline points is defined and thin plate splines defined by low order polynomials are fitted to the photometric map. An adjusted photometric map (a modified intermediate computational result) is then created from the fitted splines, which effectively smoothes, or filters, the photometric map so as to remove components associated with change therefrom.
At step S332, the brightness of the warped image is adjusted using the adjusted photometric map so as to remove illumination variation therefrom. When using an HSL format to represent the images, the brightness is altered by adjusting the L-channel whilst leaving the other channels unchanged.
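A sketch of this brightness adjustment, assuming OpenCV's HLS representation (where L is channel 1) and a photometric map expressed on the same 0-255 scale:

```python
# Sketch of step S332: subtract the regularised photometric map from the
# L channel only, leaving hue and saturation unchanged.
import cv2
import numpy as np

def adjust_brightness(warped_bgr, adjusted_photometric_map):
    hls = cv2.cvtColor(warped_bgr, cv2.COLOR_BGR2HLS).astype(np.float32)
    hls[..., 1] = np.clip(hls[..., 1] - adjusted_photometric_map, 0, 255)
    return cv2.cvtColor(hls.astype(np.uint8), cv2.COLOR_HLS2BGR)
```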
At step S334, the adjusted warped image is then compared with the subsequent image so that differences between the identified and subsequent images that are due neither to geometric nor photometric mis-registration can be identified. In this example, the adjusted warped image is subtracted from the subsequent image so as to provide a difference map highlighting changes to the structure.
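A sketch of this final comparison (an absolute difference is taken here so that the result displays directly as an image):

```python
# Sketch of step S334: difference between the adjusted warped image and the
# subsequent image; what remains should be due to change in the structure.
import cv2

def change_map(adjusted_warped_bgr, subsequent_bgr):
    diff = cv2.absdiff(subsequent_bgr, adjusted_warped_bgr)
    return cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)  # difference map
```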
Figure 4 shows an image of a structure that has been acquired at a first time point and Figure 5 shows an image of the structure of Figure 4 that has been acquired at a second time point. Figure 5 shows roughly the same portion of the structure as Figure 4, but following the application of an amount of water (circled by hoop 510) to the structure to induce discolouration that was not present when the image of Figure 4 was acquired.

Figure 6 shows a visual representation of the x-component of a deformation field produced by the geometric registration of the images of Figures 4 and 5. Figure 7 shows a corresponding representation of the y-component of the deformation field produced by the geometric registration. In both of Figures 6 and 7, there is an underlying and slowly varying component to the deformation field, along with a wedge-shaped portion 610, 710 where the two images do not overlap, and a number of sharp blots (circled by hoops 612 and 712) that correspond to the locations at which water was applied to the structure.
Figure 8 shows a gradient map that has been constructed based on the deformation field shown in Figures 6 and 7, and Figure 9 shows the subsequently produced binary image following running of the search algorithm to identify the largest group of connected low gradient pixels and mask out other pixels. Figure 10 shows a series of dots representing the locations of control points in a control point array that has been created based on the binary image of Figure 9.
Figure 11 shows the results of fitting the 2D spline array having the control point spacing of Figure 10 to the representation of Figure 6 to remove components associated with change therefrom, and Figure 12 shows the results of fitting the 2D spline array having the control point spacing of Figure 10 to the representation of Figure 7 to remove components associated with change therefrom. As can be seen, in both Figures 11 and 12, the sharp blots that correspond to the locations at which water was applied have been removed, thereby adjusting the deformation fields so as to more faithfully represent the geometric misalignment between the two images.
Once adjusted deformation fields have been produced, one of the images, in this case the image of Figure 5, is warped by the adjusted deformation field. As the deformation field effectively relates to a transformation to be applied to the images, warping comprises applying that transformation to the image, with interpolation used where necessary when the transformation does not result in a pixel's location being transformed exactly onto the location of another pixel or where there are gaps in the mapping.
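A sketch of such a warp, using cv2.remap to perform the interpolation; the field (dx, dy) is assumed to give, for each output pixel, the displacement of the source location to sample (a backward mapping):

```python
# Sketch of warping by a dense deformation field with bilinear interpolation.
import cv2
import numpy as np

def warp_by_field(img, dx, dy):
    h, w = dx.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    return cv2.remap(img, xs + dx, ys + dy,
                     interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)  # gaps filled with zeros
```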
Figure 13 shows an image that has been created by warping the image of Figure 5 by the deformation fields of Figures 11 and 12, and Figure 14 shows a photometric map (in this case a brightness difference map) created between the image of Figure 4 and the image of Figure 13. As can be seen, in addition to the relatively sharp blots (circled by hoop 1310) in the middle of the images, there remains a substantial amount of image inhomogeneity that corresponds to the photometric mis-registration between the images of Figures 4 and 5 - probably due to differences in illumination intensity and direction when those images were acquired.

Figure 15 shows the results of fitting a 2D spline array to the photometric map of Figure 14 to remove components associated with change therefrom. As can be seen, the blots where water was applied have been filtered from the image of Figure 15.
Figure 16 shows a subtraction image produced by adjusting the warped image of Figure 13 by the spline fitting of Figure 15 and subtracting that image from the image of Figure 4. It can be seen that differences due to geometric and photometric mis-registration have been removed to leave differences between the images of Figures 4 and 5 that are due to underlying changes in the imaged structure (the application of water thereto).
Although the above has been described with respect to the use of a patch matching (NNF) approach for performing geometric registration, a number of other techniques could equally be employed - either alternatively, jointly, or sequentially in order to geometrically register the two images.
Approaches that could be used to perform the geometric registration include: PatchMatch; 3D PatchMatch; optical flow; regularized optical flow; dense stereo matching; 3D warp approaches, which reduce the effects in images that are due to shape and perspective using 3D geometry such as can be obtained directly from sensors (for example, consumer 3D cameras like the MS Kinect, Asus Xtion, and Intel RealSense, laser scanners, the MIT photon camera, light field cameras, etc.); frequency domain (FFT or DFT) registration; and phase correlation registration. Additionally or alternatively, in conjunction with known camera matrices, 3D structure can be computed from an initial dense registration such as optical flow, regularized optical flow or dense stereo matching, and surface priors and model fitting can be used by fitting a known model (for example, a cylinder for a tunnel) to the 3D data before applying a 3D warp. Other registration approaches, including non-rigid registration approaches and feature matching approaches, could equally be employed to perform the geometric registration.
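As an illustration of one of the listed alternatives, the following sketch implements phase correlation registration, which recovers a global integer translation between two images from the normalised cross-power spectrum:

```python
# Sketch of phase correlation: the peak of the inverse FFT of the normalised
# cross-power spectrum gives the translation between two same-sized images.
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (dy, dx) translating grayscale image `b` onto `a`."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts beyond half the image size to negative displacements.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```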
The photometric map employed during the photometric registration may be formed in a number of different manners, including: based on the brightness (or luminance) difference between the two images; based on the grayscale difference between the two images; based on the quotient of the pixel brightnesses of the two images; and based on the quotient of the pixel grayscales of the two images.

Although thin plate splines may be employed to regularize deformation fields and photometric maps, other types of spline or polynomial may be fitted to the deformation fields and photometric maps so as to smooth them, thereby removing outliers that are likely to represent structural changes. As one possibility, the regularization of a deformation field or a photometric map to remove components associated with change may act to remove information therefrom that is at a specific spatial frequency, or range thereof - for example so as to remove high frequency components. The verb 'to regularize' and its various conjugations as employed herein is used to express the action of making its subject more regular without completely removing information therefrom.

Other approaches that could be used to form the photometric map and/or correct for photometric variations between the two images include: colour invariant transforms; colour normalization; texture blending; multi-band blending; low pass component removal; homomorphic filtering; histogram equalization; the quotient image; the retinex algorithm; and illumination and transport modelling.

The above has been described in relation to images that have been acquired using a camera. As such, the images may have been acquired in the human visible spectrum, and/or they may include light that was acquired beyond the range of human visibility, for example infrared or thermal images (possibly with a compensation applied for the expected temperature of the structure at the time of acquisition). As one possibility, the images could have been obtained using one or more gamma cameras or Geiger counters. In situations where there is not sufficient ambient light for the camera(s) to acquire images on their own, the cameras may be provided with one or more light sources, for example a permanent light or a timed flash.
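As an illustration of one of the photometric-correction options listed above, the following sketch implements homomorphic filtering: taking logarithms makes illumination approximately multiplicative-to-additive, a frequency-domain filter then attenuates the low-frequency (illumination) component, and exponentiation maps back. The cutoff and gain values are assumed parameters.

```python
# Sketch of homomorphic filtering on a grayscale image.
import numpy as np

def homomorphic_filter(gray, sigma=30.0, low_gain=0.5, high_gain=1.5):
    log_img = np.log1p(gray.astype(np.float64))
    F = np.fft.fftshift(np.fft.fft2(log_img))

    # Gaussian transfer function: attenuate low frequencies (illumination),
    # boost high frequencies (reflectance detail).
    h, w = gray.shape
    y, x = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    gauss = np.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
    H = low_gain * gauss + high_gain * (1.0 - gauss)

    out = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    return np.expm1(out)
```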
Although the above has been described in relation to the geometric registration being performed first and the photometric registration being subsequently performed, the geometric and photometric registration algorithms could be combined so that geometric and photometric registration occurs at the same time. The adjustment of at least one of the two images based on the results of the geometric and photometric registrations may be performed: sequentially by first performing an adjustment based on the results of the geometric registration before then performing an adjustment based on the results of the photometric registration; or in a single, combined, step based upon the results of the geometric and photometric registrations. Further, although all of the at least one of the two images may be adjusted, in some cases only a portion of the at least one of the two images is adjusted. Such an approach can be useful in cases where images are very large, and/or where change is expected in only a region of one of the images.
The above has described, by way of example, the fitting of splines to the deformation field and/or the photometric map (the intermediate computational results) in order to produce spline representations of the respective deformation field and photometric map (the adjusted intermediate computational results); however, the adjusted intermediate computational results could also be determined using other approaches. Approaches that could be employed in order to arrive at the adjusted intermediate computational results include: polynomial (including spline, thin-plate spline, and b-spline) representation of an intermediate computational result; using a collection of overlapping homographies to represent an intermediate computational result; and representing a deformation field as the deformation induced by the projection to a 3D model of the structure along with a radial distortion compensation element. Although in the above it has been the initially captured image that was warped subsequent to the geometric registration, the approaches described herein could be applied also in circumstances where it is the subsequently captured image that is warped. Likewise, although in the above it has been the initially captured image, as warped, that was adjusted based on the results of the photometric registration, the approaches described herein could be applied also in circumstances where it is the subsequently captured image that is adjusted. Further, although mention has been made of the initially captured image and the subsequently captured image, the approaches described herein could also be applied simply to a pair of two images.
Although the above is set out in terms of using a gradient map, a binary map, and a search algorithm to define an image subspace upon which control points are set out for subsequent fitting of the 2D thin plate spline array to each of the first and second maps of the deformation field, as another possibility, a set of control points set out across another portion, or all, of one of the first and second maps could be used for the subsequent fitting of the 2D thin plate spline array to each of the first and second maps of the deformation field.
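The following sketch illustrates, in simplified form, the gradient-map route to such an image subspace: a gradient map of the deformation field is thresholded into a binary map, and the largest connected region is retained as the area over which control points are placed. The threshold value and the function name control_point_region are assumptions made for the example.

```python
import numpy as np
from scipy import ndimage

def control_point_region(deform_x, deform_y, thresh=0.5):
    """Boolean mask of the largest connected low-gradient region of the
    deformation field, suitable for placing spline control points."""
    gy1, gx1 = np.gradient(deform_x)
    gy2, gx2 = np.gradient(deform_y)
    grad = np.hypot(gx1, gy1) + np.hypot(gx2, gy2)  # gradient map
    smooth_enough = grad < thresh                    # binary (thresholded) map
    labels, n = ndimage.label(smooth_enough)         # connected components
    if n == 0:
        # Fall back to the whole image if nothing passes the threshold.
        return np.ones_like(grad, dtype=bool)
    sizes = ndimage.sum(smooth_enough, labels, range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)
```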
Although the above has been described by way of example with reference to a tunnel lining, the approaches described herein could also be applied to other structures, including, but not limited to, bridges, dams, roads, and buildings.
Although the above has been described with reference to Figure 1 in which an image capture device comprises a plurality of cameras mounted on a flatbed trolley, the approaches described herein could be applied to images acquired using other image capture and/or creation devices. Further, although the image capture device of Figure 1 has been described as being arranged to record the captured images and subsequently communicate them wirelessly to a computer, communication to the computer could be by other means, for example by way of a cable transfer and/or the physical transfer of a computer readable medium.
As one possibility, prior to performing any geometric or photometric registration, one or both of the images that are to be registered are corrected to remove image components that are present due to parallax caused by the two images being acquired from different positions. Such image components are to be expected at image locations adjacent to projecting or recessed surface features. Such removal can be achieved by acquiring information about the surface of the structure. Surface information could be acquired, for example, using a laser scanner to obtain a 3D model of the surface, or using a 3D camera such as an MS Kinect active illumination camera and processor. Knowledge of the 3D surface, along with the location and orientation of the camera that produced an image, can be used to work out where in the image parallax is to be expected, and so information can be removed or masked at the corresponding pixel locations. As one possibility, in cases where a textured surface is available, the subsequently acquired image may be geometrically registered to the textured surface or to an image created therefrom - which may contain information that originated from a plurality of initially captured images.
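By way of illustration only, and under the simplifying assumption of rectified cameras with horizontal baseline b and focal length f (neither of which is specified above), the following sketch marks pixels adjacent to depth discontinuities - where parallax components are to be expected - so that the corresponding pixel locations can be removed or masked.

```python
import numpy as np
from scipy import ndimage

def parallax_mask(depth, f, b, jump=2.0, window=5):
    """Boolean mask of pixels where parallax artefacts are expected,
    given a per-pixel depth map of the structure's surface."""
    # Expected per-pixel image shift between the two viewpoints.
    disparity = f * b / np.maximum(depth, 1e-6)
    # A large local range of disparity marks a projecting or recessed
    # feature, i.e. a depth discontinuity.
    local_range = (ndimage.maximum_filter(disparity, window) -
                   ndimage.minimum_filter(disparity, window))
    return local_range > jump
```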
As one possibility, the approaches described herein could be employed for the same structure at a plurality of time points. The progression of change in the structure could then be monitored. As one example, in addition or alternatively to showing single result images (such as the subtraction image of Figure 15), a movie of the change (or lack of change) across time could be played, or associated images could be presented next to a slider by which a user could move through a temporal sequence of images - such as subtraction images like those of Figure 15.
As one possibility, initial images could be acquired using an image capture device such as that described with reference to Figure 1, and a subsequent image could be acquired with a different image capture device, such as a camera on a tablet computer. This would enable a detailed initial set of images to be acquired and any heavy processing to be performed offline. The results could then be loaded onto the tablet computer, which could conveniently be carried by a user who would then benefit from the previous image acquisition (and any processing) without the need to carry heavy imaging/processing equipment. By implementing the approaches described herein in real time, a user carrying such a tablet would be able to take images with the tablet's camera and identify changes as they arise.
In the approach described above, each of the registering steps comprises a step of regularizing a respective intermediate computational result to create a respective modified intermediate computational result and basing the result of that registering step upon the respective modified intermediate computational result. As one possibility, only one of the registering steps comprises a step of regularizing a respective intermediate computational result to create a respective modified intermediate computational result and basing the result of that one of the registering steps upon the respective modified intermediate computational result.
There is described herein a method of enhancing differences between two images, the method comprising geometrically and photometrically registering the two images to correct for geometric and photometric differences therebetween, and adjusting a portion of one of the images based on the results of the geometric and photometric registrations.
The approaches described herein may be embodied on a computer readable medium, which may be a non-transitory computer readable medium, the computer readable medium carrying computer readable instructions arranged for execution upon a processor so as to make the processor carry out any or all of the methods described herein. The term computer readable medium as used herein refers to any medium that stores data and/or instructions for causing a processor to operate in a specific manner. Such a storage medium may comprise non-volatile media and/or volatile media. Non-volatile media may include, for example, optical or magnetic disks. Volatile media may include dynamic memory. Exemplary forms of storage medium include a floppy disk, a flexible disk, a hard disk, a solid state drive, a magnetic tape, any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with one or more patterns of holes or protrusions, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, and any other memory chip or cartridge.

Claims
1. A method of enhancing differences between two images of a structure, the method comprising the steps of: geometrically registering the two images to correct for geometric differences therebetween; photometrically registering the two images to correct for photometric differences therebetween; and adjusting a portion of at least one of the two images based on the results of the geometric and photometric registrations, wherein one of the registering steps comprises a step of regularizing a respective intermediate computational result to create a respective modified intermediate computational result and basing the result of the one of the registering steps upon the respective modified intermediate computational result.
2. The method of claim 1, wherein the other of the registering steps comprises a step of regularizing a respective intermediate computational result to create a respective modified intermediate computational result and basing the result of the other of the registering steps upon the respective modified intermediate computational result.
3. The method of claim 1 or 2, the method further comprising comparing the two images, as adjusted, to check for one or more changes in the structure.
4. The method of claim 1, 2, or 3, the method further comprising, prior to performing the registering steps: receiving surface information about the surface of the structure; and using the surface information to identify image components in one or both of the two images that are present due to parallax caused by the two images being acquired from different positions.
5. The method of claim 4, the method further comprising, prior to performing the registering steps, removing the identified image components from one or both of the two images.
6. The method of any preceding claim, wherein the step of geometrically registering the two images comprises non-rigidly registering the two images, optionally using a Nearest Neighbour Field, NNF, approach, further optionally using a patch matching approach.
7. The method of any preceding claim, wherein the intermediate computational result of the geometrically registering step is a deformation field.
8. The method of claim 7, wherein the step of regularizing the intermediate computational result of the geometrically registering step comprises fitting a first plurality of polynomials to the deformation field and using the fitted first plurality of polynomials to define, as the result of the geometrically registering step, an adjusted deformation field.
9. The method of claim 8, wherein the first plurality of polynomials are splines.
10. The method of claim 8 or 9, wherein the step of adjusting a portion of at least one of the two images comprises warping a portion of one of the two images based upon the adjusted deformation field.
11. The method of claim 10, wherein warping one of the two images comprises one of: replacing the pixel values of that one of the two images with corrected pixel values; and creating a deformation corrected image and subsequently using the deformation corrected image in place of that one of the two images.
12. The method of any of claims 8 to 11, wherein the first plurality of polynomials is defined by a plurality of control points and the control points are arranged over a sub-portion of one of the two images.
13. The method of claim 12, further comprising creating a gradient map based on the deformation field, thresholding the gradient map to create a thresholded gradient map, and defining the sub-portion based on the thresholded gradient map.
14. The method of claim 13, further comprising determining a group of connected pixels in the thresholded gradient map and using the group of connected pixels as the sub-portion.
15. The method of any preceding claim, wherein the intermediate computational result of the photometrically registering step is a photometric map representing one of: the pixel brightness difference between the two images; the grayscale difference between the two images; the quotient of the pixel brightnesses of the two images; and the quotient of the pixel grayscales of the two images.
16. The method of any preceding claim, wherein the step of regularizing the intermediate computational result of the photometrically registering step comprises fitting a second plurality of polynomials to the photometric map and using the fitted second plurality of polynomials to define, as the result of the photometrically registering step, an adjusted photometric map.
17. The method of claim 16, wherein the second plurality of polynomials are splines.
18. The method of claim 16 or 17, wherein the step of adjusting a portion of at least one of the two images comprises using the adjusted photometric map to normalise the pixel brightnesses of a portion of one of the two images.
19. The method of any preceding claim, wherein the step of adjusting a portion of at least one of the two images comprises, following the geometrically registering step but before the photometrically registering step, adjusting a portion of at least one of the two images based on the result of the geometrically registering step and then using the two images, as adjusted, in the photometrically registering step.
20. The method of claim 19, wherein the step of adjusting a portion of at least one of the two images comprises, following the photometrically registering step, further adjusting a portion of at least one of the two images, as adjusted, based on the result of the photometrically registering step.
21. A computer readable medium carrying machine readable instructions arranged, when executed by a processor, to cause the processor to carry out the method of any of claims 1 to 20.
22. An apparatus configured to perform the method of any of claims 1 to 20.
23. A system comprising the apparatus of claim 22 and an image capture device arranged to capture at least one of the two images and to provide the captured image to the apparatus.
24. A method, apparatus, system, or computer readable medium substantially as described herein and with reference to the accompanying drawings.