GB2507558A - Image processing with similarity measure of two image patches - Google Patents


Info

Publication number
GB2507558A
GB2507558A GB1219844.6A GB201219844A
Authority
GB
United Kingdom
Prior art keywords
image
image patch
patch
value associated
intensity value
Prior art date
Legal status
Withdrawn
Application number
GB1219844.6A
Other versions
GB201219844D0 (en)
Inventor
Atsuto Maki
Riccardo Gherardi
Oliver Woodford
Frank Perbet
Minh-Tri Pham
Bjorn Stenger
Sam Johnson
Roberto Cipolla
Current Assignee
Toshiba Europe Ltd
Original Assignee
Toshiba Research Europe Ltd
Priority date
Filing date
Publication date
Application filed by Toshiba Research Europe Ltd filed Critical Toshiba Research Europe Ltd
Priority to GB1219844.6A priority Critical patent/GB2507558A/en
Publication of GB201219844D0 publication Critical patent/GB201219844D0/en
Priority to JP2013227733A priority patent/JP5752770B2/en
Priority to US14/072,427 priority patent/US20160148393A2/en
Publication of GB2507558A publication Critical patent/GB2507558A/en
Withdrawn legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • G06T7/0014Biomedical image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/97Determining parameters from multiple pictures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/755Deformable models or variational models, e.g. snakes or active contours
    • G06V10/7557Deformable models or variational models, e.g. snakes or active contours based on appearance, e.g. active appearance models [AAM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Abstract

Method, apparatus or system for calculating a measure of similarity between a first image patch 112 and a second image patch 114. The first and second image patches each comprise a plurality of intensity values, each associated with an element such as a pixel Xi, Yi or a voxel; the first patch and the second patch have a corresponding size and shape such that each element of the first patch corresponds to an element of the second patch. The method determines a set of sub regions on the second patch, each sub region being the elements of the second patch which correspond to elements of the first patch having first intensity values within a range of first intensity values defined for that sub region; for each sub region of the set of sub regions, calculates a variance, over all of the elements of that sub region, of a function, such as a difference or ratio, of the first and second intensity values associated with respective elements; and calculates the similarity measure as the sum over all sub regions of the calculated variances. The images may be two dimensional or three dimensional and of the same size and shape, and may be transformed from images of different shapes and sizes. The method may further derive a depth image by calculating disparities between pixels of the images, where the second image patches are centred on pixels on an epipolar line. Preferably the images are taken from different cameras of an underwater scene, or from different medical modalities (Figures 13, 14).

Description

Image processing methods and apparatus
FIELD
Embodiments described herein relate generally to image processing methods which include the calculation of a similarity measure of two image patches.
BACKGROUND
The calculation of a similarity measure between regions of different images plays a fundamental role in many image analysis applications. These applications include stereo matching, multimodal image comparison and registration, motion estimation, and tracking.
Matching and registration techniques in general need to be robust to a wide range of transformations that can arise from non-linear illumination changes caused by anisotropic radiance distribution functions, occlusions or different acquisition processes. Examples of different acquisition processes are visible and infrared, and different medical image acquisition techniques such as X-ray, magnetic resonance imaging and ultrasound.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following, embodiments will be described with reference to the drawings in which: Figure 1 shows an image processing system according to an embodiment; Figure 2 shows a first image patch and a second image patch; Figure 3 shows a method of calculating a similarity measure between two image patches according to an embodiment; Figure 4 shows an example of a joint histogram for two image patches; Figure 5 shows the effects of quantisation and displacement on a joint histogram; Figure 6 shows a comparison of results of the sum of conditional variances method and the sum of conditional variance of differences method; Figure 7 shows the results of comparing the performance of different similarity measures on a synthetic registration task using a gradient descent search; Figure 8 shows an example of the use of the sum of conditional variance of differences method in tracking an object over frames of a video sequence; Figure 9 shows a method of calculating a measure of similarity between image patches according to an embodiment; Figure 10 shows an image processing apparatus according to an embodiment; Figure 11 shows the calculation of depth from disparity, or the shift between a left image and a right image of a stereo image pair; Figure 12 shows a method of generating a depth image from a stereo image pair according to an embodiment; Figure 13 shows two medical image capture devices; Figure 14 shows an image processing system according to an embodiment; Figure 15 shows a method of registering multimodal images according to an embodiment.
DETAILED DESCRIPTION
In an embodiment a method of calculating a measure of similarity between a first image patch and a second image patch, the first image patch comprising a plurality of first intensity values each associated with an element of the first image patch, the second image patch comprising a plurality of second intensity values each associated with an element of the second image patch, the first image patch and the second image patch having a corresponding size and shape such that each element of the first image patch corresponds to an element on the second image patch, comprises determining a set of sub regions on the second image patch, each sub region being determined as the set of elements of the second image patch which correspond to elements of the first image patch having first intensity values within a range of first intensity values defined for that sub region; for each sub region of the set of sub regions, calculating the variance, over all of the elements of that sub region, of a function of the second intensity value associated with that element and the first intensity value associated with the corresponding element of the first image patch; and calculating the similarity measure as the sum over all sub regions of the calculated variances.
In an embodiment the function of the second intensity value associated with an element and the first intensity value associated with the corresponding element of the first image patch is the difference between the second intensity value associated with the element and the first intensity value associated with the corresponding element of the first image patch.
In an embodiment the function of the second intensity value associated with an element and the first intensity value associated with the corresponding element of the first image patch is the ratio of the second intensity value associated with the element and the first intensity value associated with the corresponding element of the first image patch.
In an embodiment the first image patch and the second image patch are two dimensional image patches and the elements of the first image patch and the second image patch are pixels.
In an embodiment the first image patch and the second image patch are three dimensional image patches and the elements of the first image patch and the second image patch are voxels.
In an embodiment a method of deriving a depth image from a first image and a second image comprises calculating a plurality of disparities between pixels of the first image and the second image by, for each of a plurality of pixels of the first image, defining a first patch centred on a target pixel of the first image; defining a plurality of second image patches centred on pixels of the second image; calculating a measure of similarity between the first image patch and each second image patch of the plurality of second image patches using a method of calculating a measure of similarity between a first image patch and a second image patch according to an embodiment; selecting the second image patch having the best similarity measure as a match for the first image patch centred on the target pixel; and determining the disparity between the target pixel and the pixel of the second image in the centre of the second image patch selected as the match; and calculating a depth image from the plurality of disparities.
In an embodiment the plurality of second image patches are selected as patches centred on pixels on an epipolar line.
In an embodiment an image registration method of determining a transform between a first image and a second image, comprises calculating a measure of similarity between a first image patch of the first image and a second image patch of the second image.
In an embodiment the first image and the second image are obtained from different image capture modalities.
In an embodiment an image processing apparatus comprises a memory configured to store data indicative of a first image patch and a second image patch, the first image patch comprising a plurality of first intensity values each associated with an element of the first image patch, the second image patch comprising a plurality of second intensity values each associated with an element of the second image patch, the first image patch and the second image patch having a corresponding size and shape such that each element of the first image patch corresponds to an element on the second image patch; and a processor configured to determine a set of sub regions on the second image patch, each sub region being determined as the set of elements of the second image patch which correspond to elements of the first image patch having first intensity values within a range of first intensity values defined for that sub region; for each sub region of the set of sub regions, calculate the variance, over all of the elements of that sub region, of a function of the second intensity value associated with that element and the first intensity value associated with the corresponding element of the first image patch; and calculate a similarity measure between the first image patch and the second image patch as the sum over all sub regions of the calculated variances.
In an embodiment the function of the second intensity value associated with an element and the first intensity value associated with the corresponding element of the first image patch is the difference between the second intensity value associated with the element and the first intensity value associated with the corresponding element of the first image patch.
In an embodiment the function of the second intensity value associated with an element and the first intensity value associated with the corresponding element of the first image patch is the ratio of the second intensity value associated with the element and the first intensity value associated with the corresponding element of the first image patch.
In an embodiment the first image patch and the second image patch are two dimensional images patches and the elements of the first image patch and the second image patch are pixels.
In an embodiment the first image patch and the second image patch are three dimensional images patches and the elements of the first image patch and the second image patch are voxels.
In an embodiment an imaging system comprises: a first camera configured to capture a first image of a scene; a second camera configured to capture a second image of the scene; and a processing module configured to calculate a plurality of disparities between pixels of the first image and the second image by, for each of a plurality of pixels of the first image, defining a first patch centred on a target pixel of the first image; defining a plurality of second image patches centred on pixels of the second image; calculating a measure of similarity between the first image patch and each second image patch of the plurality of second image patches; selecting the second image patch having the best similarity measure as a match for the first image patch centred on the target pixel; and determining the disparity between the target pixel and the pixel of the second image in the centre of the second image patch selected as the match; and calculating a depth image of the scene from the plurality of disparities.
In an embodiment the processor is further configured to select the plurality of second image patches as patches centred on pixels on an epipolar line.
In an embodiment the imaging system is an underwater imaging system. In an embodiment the processor is further configured to determine a transform between a first image and a second image by calculating a measure of similarity between a first image patch of the first image and a second image patch of the second image.
In an embodiment the apparatus further comprises an input module configured to receive the first image and the second image from different image capture modalities.
In an embodiment a computer readable medium carries processor executable instructions which when executed on a processor cause the processor to carry out a method of calculating a measure of similarity between a first image patch and a second image patch.
Embodiments of the present invention can be implemented either in hardware or in software on a general purpose computer. Further embodiments of the present invention can be implemented in a combination of hardware and software. Embodiments of the present invention can also be implemented by a single processing apparatus or a distributed network of processing apparatus.
Since the embodiments of the present invention can be implemented by software, embodiments of the present invention encompass computer code provided to a general purpose computer on any suitable carrier medium. The carrier medium can comprise any storage medium such as a floppy disk, a CD ROM, a magnetic device or a programmable memory device, or any transient medium such as any signal e.g. an electrical, optical or microwave signal.
Figure 1 shows an image processing system according to an embodiment. The image processing system 100 comprises a memory 110 and a processor 120. The memory 110 stores a first image patch 112 and a second image patch 114. The processor 120 is programmed to carry out an image processing method to generate a measure of similarity between the first image patch 112 and the second image patch 114.
The image processing system 100 has an input for receiving image signals. The image signals comprise image data. The input may receive data from an image capture device. In an embodiment, the input may receive data from a network connection. In an embodiment, the data may comprise images from different image capture modalities.
Figure 2 shows the first image patch 112 and the second image patch 114. The first image patch has a plurality of pixels. In Figure 2, the ith pixel of the first image patch is labelled as Xi. The second image patch 114 also has a plurality of pixels. The first image patch 112 and the second image patch 114 both have the same number of pixels. Each pixel in the first image patch 112 corresponds to a pixel of the second image patch 114. Figure 2 shows the ith pixel of the second image patch 114 as Yi.
The pixel Xi of the first image patch corresponds to the pixel Yi of the second image patch. An intensity value is associated with each pixel.
While the image patches described above have the same shape and size, they may have been transformed or rectified from images of different sizes or shapes.
Figure 3 is a flowchart showing a method of calculating a similarity measure between a first image patch and a second image patch according to an embodiment. The method shown in Figure 3 may be implemented by the processor 120 shown in Figure 1 to calculate a measure of similarity between the first image patch 112 and the second image patch 114 shown in Figure 2.
In step S302, the second image patch is segmented into a plurality of subregions. The second image patch is segmented by defining regions according to the intensity of the pixels of the first image patch. On the first image patch each subregion is defined as the set of pixels having intensities within a range of values. The subregions on the second image patch are defined as the sets of pixels of the second image patch which have locations corresponding to pixels within a given subregion on the first image patch.
In step S304 for each region on the second image patch the difference in intensity between the pixels of the second image patch and the corresponding pixels of the first image patch is calculated.
In step S306, the variance of the difference in intensity over each subregion is calculated.
In step S308, the sum of the variances over all subregions is calculated and taken as a measure of similarity between the first image patch and the second image patch.
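By way of illustration, steps S302 to S308 can be sketched in a few lines of Python. This is a minimal sketch, assuming 8-bit grayscale patches of equal shape stored as numpy arrays; the function name scvd, the bin count and all other names are illustrative choices, not taken from the patent.

    import numpy as np

    def scvd(patch_x, patch_y, n_bins=32):
        """Sum of Conditional Variance of Differences (steps S302-S308)."""
        x = patch_x.astype(np.float64).ravel()
        y = patch_y.astype(np.float64).ravel()
        # S302: partition the reference intensities into n_bins equal ranges;
        # each range induces a subregion of corresponding elements on patch Y.
        bin_idx = np.minimum((x / 256.0 * n_bins).astype(int), n_bins - 1)
        diff = y - x  # S304: per-element intensity differences
        total = 0.0
        for j in range(n_bins):
            members = diff[bin_idx == j]
            if members.size > 1:
                total += members.var()  # S306: variance within the subregion
        return total  # S308: sum of the variances over all subregions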
The method described above in relation to Figure 3 may be considered to be the calculation of the Sum of Conditional Variance of Differences (SCVD). The SCVD method is a variant of the Sum of Conditional Variances method (SCV).
The SCV method and the SCVD method will now be described in more detail. Given a pair of images X and Y, the sum of conditional variances (SCV) matching measure prescribes to partition the pixels of Y into n_b disjoint bins Y(j) with j = 1, ..., n_b, corresponding to bracketed intensity regions X(j) of X (called the reference image, which is analogous to the first image described above).
The value of the matching measure is then obtained by summing the variances of the intensities within each bin Y(j).
S_SCV(X, Y) = Σ_{j=1}^{n_b} E[ (Y_i − E(Y_i))² | X_i ∈ X(j) ]

where X_i and Y_i with i = 1, ..., N_p indicate the pixel intensities of X and Y respectively, N_p being the total number of pixels. The conditions that appear in the sum are obtained by uniformly partitioning the intensity range of X. Figure 4 shows an example of a joint histogram for images X and Y. The behaviour of SCV can be characterised by the joint histogram. As shown in Figure 4, the joint histogram can be interpreted as a non-injective relation that maps the range of the first image to the second one.
A joint histogram H_XY can be interpreted as a non-injective relation that maps the ranges of two images. Figure 4a shows the resulting joint histogram after linearly reducing the contrast of the reference image. Figure 4b shows the joint histogram for a non-linear intensity map. Hotter (brighter) colours correspond to more frequently occurring values.
The set of pixels that contributed to the non-zero entries of each column (row) corresponds to one of the regions selected by the j-th condition. The number of discretisation levels n_b is problem specific; for images quantised at byte precision, a typical choice is usually n_b = 32 or 64. Larger intervals can help in achieving a wider convergence radius and offer more resilience to noise. The matching measure will not change as long as the pixels do not cross the current bin boundaries. On the other hand, narrow ranges will boost the matching accuracy and reduce the information that is lost during the quantisation step.
According to the SCV algorithm, the reference image is used solely to determine the subregions in which the variances in the equation above for S_SCV(X, Y) should be computed.
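For contrast, a sketch of the plain SCV measure under the same binning as the scvd sketch above; only the quantity whose variance is taken changes (the second patch's intensities rather than the per-element differences). The names are again illustrative.

    def scv(patch_x, patch_y, n_bins=32):
        """Plain SCV: variance of Y's intensities within each bin of X."""
        x = patch_x.astype(np.float64).ravel()
        y = patch_y.astype(np.float64).ravel()
        bin_idx = np.minimum((x / 256.0 * n_bins).astype(int), n_bins - 1)
        return sum(y[bin_idx == j].var()
                   for j in range(n_bins)
                   if np.count_nonzero(bin_idx == j) > 1)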
In embodiments described herein a similarity measure based on the conditional variance of differences is used. Thus all the information present in both images is used, leading to a more discriminative matching measure.
First, the variance of differences (VD) is defined as the second moment of the intensity differences between two templates:

VD(X, Y) = Var[ {Y_i − X_i}_{i=1...N_p} ]

The variance of differences is minimal when the differences are uniform across the patch, i.e. constant. It is bias invariant, scale sensitive and proportional to the zero-mean sum of squared differences.
The fact that it is proportional to the zero-mean sum of squared differences can be verified by expanding the variance:

VD(X, Y) = E[ ((Y_i − X_i) − E(Y − X))² ]

where the mean of an image is understood to indicate its element-wise mean.
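This proportionality is easy to confirm numerically. The following check is an illustration (not from the patent) that the variance of a difference image equals its zero-mean sum of squared differences divided by the number of pixels:

    rng = np.random.default_rng(0)
    d = rng.random((50, 50)) - rng.random((50, 50))  # a difference image
    zssd = np.sum((d - d.mean()) ** 2)               # zero-mean SSD
    assert np.isclose(d.var(), zssd / d.size)        # VD = ZSSD / N_p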
Given two images X and Y, we define the sum of the conditional variance of differences (SCVD) as the sum of the variances over a partition of their difference. As before, the subsets are selected by bracketing the range of the reference image to produce a set of bins X(j). In symbols:

S_SCVD(X, Y) = Σ_{j=1}^{n_b} VD(X_i, Y_i | X_i ∈ X(j))

In order for the difference to be meaningful, the two signals should be in direct relation; since the matching measure needs to be insensitive to changes in scale and bias, we maximise direct relation by adjusting the sign of one of them in accordance with the equation below:

φ = Γ[ Σ_j Γ( E(Y | X_i ∈ X(j)) − E(Y | X_i ∈ X(j−1)) ) ]

where Γ indicates the step function mapping ℝ to {−1, 1}. φ encodes a cumulative result of comparisons between pairs of E(Y) in adjacent histogram bins, so that the sign is properly adjusted. Hence, the requirement on the mapping between X and Y is that it be weakly order preserving. That is, the function should be monotonic but is not required to be injective. This restriction, not present in the original SCV formulation, makes it possible to make better use of the available information and is largely valid, e.g. between signals captured for the same target with different modes.
Uniformly partitioning the intensity range of X into equally sized bins X(j) can lead to subpar performance when the intensity distribution is uneven: poorly sampled intensity ranges are noisy and their variance unreliable. Overly sampled regions of the spectrum conversely lead to compressing many pixels into a single bin, discarding a large amount of useful information in the process. The procedure is also inherently asymmetric, producing in general different results when swapping the images involved.
In embodiments the method can be modified in two non-mutually exclusive ways to address the issues discussed above. Each one of the modifications provides an independent performance boost to the baseline approach described.
Figure 5 shows the effects of quantisation and displacement. Figure 5a shows the histogram H_XY for a pair of aligned images; in this case, the joint histogram between an image and its gray scale inverse is shown.
Figure 5b shows the histogram for the same pair of images with a 5 pixel displacement to one of the images.
Figure 5c shows a histogram H_XY for the aligned images, where the intensity range of the image has been equalised.
Figure 5d shows a histogram H_XY for the displaced images, where the intensity range of the image has been equalised.
As can be seen in Figures 5a and 5b, the bins corresponding to the low and high ends of the intensity spectrum do not receive any votes, compressing the image information into a smaller number of regions.
To achieve a uniform bin utilisation, histogram equalisation is performed on the reference image X. Figure 5c shows an H_XY generated by replacing the input reference image X with its histogram equalised version, achieving full utilisation of the entire dynamic range.
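A minimal sketch of such an equalisation step, assuming an 8-bit reference patch; the mapping is built from the normalised cumulative histogram, and np.interp pushes each intensity through it.

    def equalise(patch):
        """Histogram-equalise an 8-bit patch via its cumulative histogram."""
        hist, _ = np.histogram(patch.ravel(), bins=256, range=(0, 256))
        cdf = hist.cumsum().astype(np.float64)
        cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
        return np.interp(patch.ravel(), np.arange(256),
                         cdf * 255.0).reshape(patch.shape)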
As can be seen from Figure 5, equalising the reference image results in spreading the vote over a larger area, affecting the variance computation and resulting in a more discriminative measure.
Both SCV and SCVD are structurally asymmetrical since only one of the images is used to define the partitions in which to compute the variance.
Generally, S_{SCV,SCVD}(X, Y) ≠ S_{SCV,SCVD}(Y, X), because the two quantities are computed over different subregions which depend on the reference image. As far as the task of image matching is concerned, no particular reason exists for choosing one image over the other as the reference; the process of quantisation can thus be symmetrised by computing S_{SCV,SCVD} bi-directionally:

S'_{SCV,SCVD} = (S_{SCV,SCVD}(X, Y) + S_{SCV,SCVD}(Y, X)) / 2

Given the characteristics of SCVD (SCV), in the presence of uneven quantisations one direction is usually much more discriminative than the other. The above formula is capable of successfully disambiguating such situations.
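In code, the symmetrised variant is a direct average of the two directions, reusing the scvd sketch given earlier:

    def scvd_symmetric(patch_x, patch_y, n_bins=32):
        """Average the measure over both choices of reference image."""
        return 0.5 * (scvd(patch_x, patch_y, n_bins)
                      + scvd(patch_y, patch_x, n_bins))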
Figure 6 shows a comparison of the SCV approach, the SCVD approach and the modifications discussed above.
An image location, a direction and a displacement were selected all at random, and the measure between the selected reference window and the template was computed after applying the translation.
Notice that the template is negated in order to simulate multi-modal inputs. The size of the region was fixed to 50x50 pixels while the maximum distance was set to be half of its edge length, i.e. 25 pixels.
Figure 6 was produced by averaging 20,000 iterations of this procedure, to remove the effects of noise (each single trial is roughly monotonic). As can be seen, all SCVD versions are better at discriminating the minimum. Histogram equalised and symmetric variants obtain steeper gradients for both SCV and SCVD. When utilising both improvements, SCVD shows a nearly constant slope, a crucial property in order to use optimisation algorithms based on implicit derivatives.
Figure 7 shows the results of comparing the performance of different similarity measures on a synthetic registration task using a gradient descent search; given a random location and displacement as before, a cost function following the direction of the steepest gradient was optimised. The procedure terminates when reaching a local minimum or the maximum number of allowed iterations. The maximum number of iterations was set to 50 in this case. Figure 7 was obtained by averaging 4000 different trials; as can be seen, each SCVD version beats the equivalent SCV measure using the same set of variants, which provide a non-negligible performance boost.
Figure 8 shows an example of the use of SCVD in tracking an object over frames of a video sequence. Figure 8a shows one frame of a video sequence and its reference template. The subsequent frame has both photometric and geometric deformations.
Figure 8b shows the registration results for the SCVD method, showing both the best matching quadrilateral on the frame and the regions back-warped to the reference.
Figure 9 shows a method of calculating a measure of similarity between image patches according to an embodiment. In the methods discussed above, the conditional variance of differences is calculated. In the method shown in Figure 9, the conditional variance of ratios of intensities is calculated.
The method shown in Figure 9 may be implemented by the processor 120 shown in Figure 1 to calculate a measure of similarity between the first image patch 112 and the second image patch 114 shown in Figure 2.
In step S902, the second image patch is segmented into a plurality of subregions. The second image patch is segmented by defining regions according to the intensity of the pixels of the first image patch. On the first image patch each subregion is defined as the set of pixels having intensities within a range of values. The subregions on the second image patch are defined as the sets of pixels of the second image patch which have locations corresponding to pixels within a given subregion on the first image patch.
In step S904 for each region on the second image patch the ratio of the intensity of the pixels of the second image patch and the intensity of corresponding pixels of the first image patch is calculated.
In step S906, the variance of the ratio of the intensity over each subregion is calculated.
In step S908, the sum of the variances over all subregions is calculated and taken as a measure of similarity between the first image patch and the second image patch.
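A sketch of this ratio variant, in the same style as the difference version above; it is identical except that each element contributes the ratio Y/X rather than the difference Y − X. The small epsilon guarding against division by zero is an assumption of this sketch, not something specified by the method.

    def scvr(patch_x, patch_y, n_bins=32, eps=1e-6):
        """Sum of conditional variances of intensity ratios (S902-S908)."""
        x = patch_x.astype(np.float64).ravel()
        y = patch_y.astype(np.float64).ravel()
        bin_idx = np.minimum((x / 256.0 * n_bins).astype(int), n_bins - 1)
        ratio = y / (x + eps)  # S904: per-element intensity ratios
        total = 0.0
        for j in range(n_bins):
            members = ratio[bin_idx == j]
            if members.size > 1:
                total += members.var()  # S906: variance per subregion
        return total  # S908: sum over all subregions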
Figure 10 shows an image processing apparatus according to an embodiment. The apparatus 1000 uses the methods described above to determine a depth image from two images. The apparatus 1000 comprises a left camera 1020 and a right camera 1040. The left camera 1020 and the right camera 1040 are arranged to capture images of approximately the same scene from different locations.
The image processing apparatus 1000 comprises an image processing system 1060. The image processing system 1060 has a memory 1062 and a processor 1068. The memory stores a left image 1064 and a right image 1066. The processor carries out a method to determine a depth image from the left image 1064 and the right image 1066.
Figure 11 shows how the depth z can be calculated from disparity, or the shift between the left image 1064 and the right image 1066.
The left camera 1020 has an image plane 1022 and a central axis 1024. The right camera has an image plane 1042 and a central axis 1044. The central axis 1024 of the left camera is separated from the central axis 1044 of the right camera by a distance s. The left camera 1020 and the right camera 1040 each have a focal length of f. The cameras may comprise a charge coupled device or other device for detecting photons and converting the photons into electrical signals.
A point 1010 with coordinates (x, y, z) will be projected onto the image plane 1022 of the left camera at a point 1026 which is separated from the central axis 1024 of the left camera by a distance x'_l. The point will be projected onto the image plane 1042 of the right camera at a point 1046 which is separated from the central axis 1044 of the right camera by a distance x'_r.
The depth z can be calculated as follows:

x'_l / f = x / z

The above equation comes from comparing the similar triangles formed by the line running from the left hand camera to the point at co-ordinates (x, y, z).
Similarly, considering the line running from the right camera to the point at co-ordinates (x, y, z), the following equation can be derived:

(x − s) / z = x'_r / f
Combining the two equations gives:

z = s f / (x'_l − x'_r)
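For illustration, this relation reduces to a one-line function (the name is illustrative; s and f must be in consistent units):

    def depth_from_disparity(x_l, x_r, s, f):
        """z = s * f / (x'_l - x'_r); s is the baseline, f the focal length."""
        return s * f / (x_l - x_r)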
Thus, the depth can be obtained from the disparity x'_l − x'_r.

Figure 12 shows a method of generating a depth image from a stereo image pair according to an embodiment.
In step S1202, a search for pixels in the right hand image that correspond to pixels in the left hand image is carried out. For a plurality of pixels in the left hand image a search is carried out for a corresponding pixel in the right hand image. This search is carried out by forming a first image patch centred on a pixel in the left hand image.
Then, a search is carried out over the second image for a second image patch having the highest similarity measure. The similarity measure is calculated as described above. Once the image patch having the highest similarity measure is found, the pixel in the centre of that image patch is taken as the projection of the point onto the right hand image.
In step S1204, the disparity between the two pixels is calculated as the distance between them.
Once disparities have been calculated for a plurality of pixels in the left hand image, a depth image is derived from the disparities in step S1206.
The search carried out in step S1202 may be limited to pixels in the right hand image that are in the same plane as the pixel in the left hand image. If the two cameras are aligned this may involve only searching for pixels with the same y coordinate. The plane passing through the camera centres and a given feature point is called the epipolar plane. The intersection of the epipolar plane with the image plane defines the epipolar line. If the epipolar lines of the two cameras are aligned, then every feature in one image will lie on the same row in the second image.
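A sketch of this rectified, row-wise search, assuming aligned cameras so that matches lie on the same row; the window half-size and disparity range are illustrative choices, scvd is the sketch given earlier, and lower SCVD scores are treated as better matches in this sketch.

    def best_disparity(left, right, row, col, half=25, max_disp=64):
        """Return the disparity minimising SCVD along one rectified scanline."""
        ref = left[row - half:row + half + 1, col - half:col + half + 1]
        best_d, best_score = 0, np.inf
        for d in range(max_disp):
            c = col - d  # the matching point shifts left in the right image
            if c - half < 0:
                break
            cand = right[row - half:row + half + 1, c - half:c + half + 1]
            score = scvd(ref, cand)
            if score < best_score:
                best_d, best_score = d, score
        return best_d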
If the two cameras are not aligned the search may be carried out along an oblique epipolar line. The position of the oblique epipolar line may be determined using information on the relative positioning of the cameras. This information may be determined using a calibration board and determining the extent to which the images from one camera are rotated with respect to the other.
Alternatively, if the two cameras are not aligned, the image from one of the cameras may be transformed using the calibration information described above.
Because the methods of calculating similarity measures between image patches of embodiments have a high tolerance to noise in images it is anticipated that the depth calculation described above would be particularly suitable for noisy environments such as underwater environments.
Underwater imaging environments present a number of challenges. While travelling through water, light rays are absorbed and scattered when photons encounter particles in the water or water molecules. This effect depends on the wavelength and therefore has an impact on the colours finally measured by the image sensors and can lead to reduced contrast. Further, refraction when the light enters a camera housing from water into glass and then into air leads to distortion of images.
Because of the effects discussed above, in order to perform stereo image matching and generate a depth image, a similarity measure with a high robustness to noise is required such as that provided by embodiments described herein.
In an embodiment, the size of the image patches may be varied depending on local variations in intensity and the disparity. The image patch size may be varied for each pixel and the image patch size that minimises the uncertainty in the disparity may be selected.
Figure 13 shows two medical image capture devices. A first image capture device 1310 is configured to capture a first image 1320 of a patient 1350 using a first image capture modality. A second image capture device 1330 is configured to capture a second image 1340 of a patient using a second image capture modality.
For example, the first image capture modality may be x-ray and the second image capture modality may be magnetic resonance imaging.
Figure 14 shows an image processing system according to an embodiment. The image processing system 1400 is configured to register images obtained with different sensor modalities. For example, as shown in Figure 13, both the first and the second image capture devices capture images of the patient's leg.
The image processing system 1400 has a memory 1410 which stores a first image 1320 and a second image 1340. The image processing apparatus 1400 has a processor 1420 which carries out a method of registering the first image with the second image.
Figure 15 shows a method which is executed by the system 1400 to register the multimodal images.
In step S1502, a region of the first image is selected as a first image patch. In step S1504, a second image patch is derived from the second image. The second image patch may be derived by transforming or warping parts of the second image. In step S1506 a similarity measure between the first image patch and the second image patch is calculated using one of the methods described above. Steps S1504 and S1506 are repeated until, in step S1508, a second image patch having a best similarity measure is determined.
In step S1510 a registration between the images is determined.
The registration between the images may be determined as a transform matrix. The registration between the images may be stored as metadata according to a standard such as the Digital Imaging and Communications in Medicine (DICOM) standard.
While the example described above relates to registration of images from multimodal sensors, the method may also be adapted to the following applications. Atlas mapping: an image of a patient may be mapped to a stored medical atlas, for example a set of anatomical features of the brain. Images of a patient obtained over a period of time may be mapped to one another. Multiple images of a patient may be stitched together.
While the description above relates to two dimensional images, those of skill in the art will appreciate that the methods and systems described could also be applied to three dimensional images, in which patches comprising a number of voxels would be compared to determine a similarity measure.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms of modifications as would fall within the scope and spirit of the inventions.

Claims (20)

  1. A method of calculating a measure of similarity between a first image patch and a second image patch, the first image patch comprising a plurality of first intensity values each associated with an element of the first image patch, the second image patch comprising a plurality of second intensity values each associated with an element of the second image patch, the first image patch and the second image patch having a corresponding size and shape such that each element of the first image patch corresponds to an element on the second image patch, the method comprising determining a set of sub regions on the second image patch, each sub region being determined as the set of elements of the second image patch which correspond to elements of the first image patch having first intensity values within a range of first intensity values defined for that sub region; for each sub region of the set of sub regions, calculating the variance, over all of the elements of that sub region, of a function of the second intensity value associated with that element and the first intensity value associated with the corresponding element of the first image patch; and calculating the similarity measure as the sum over all sub regions of the calculated variances.
  2. The method of claim 1 wherein the function of the second intensity value associated with an element and the first intensity value associated with the corresponding element of the first image patch is the difference between the second intensity value associated with the element and the first intensity value associated with the corresponding element of the first image patch.
  3. The method of claim 1 wherein the function of the second intensity value associated with an element and the first intensity value associated with the corresponding element of the first image patch is the ratio of the second intensity value associated with the element and the first intensity value associated with the corresponding element of the first image patch.
  4. The method of claim 1 wherein the first image patch and the second image patch are two dimensional image patches and the elements of the first image patch and the second image patch are pixels.
  5. The method of claim 1 wherein the first image patch and the second image patch are three dimensional image patches and the elements of the first image patch and the second image patch are voxels.
  6. A method of deriving a depth image from a first image and a second image, the method comprising calculating a plurality of disparities between pixels of the first image and the second image by, for each of a plurality of pixels of the first image, defining a first patch centred on a target pixel of the first image; defining a plurality of second image patches centred on pixels of the second image; calculating a measure of similarity between the first image patch and each second image patch of the plurality of second image patches using the method of claim 1; selecting the second image patch having the best similarity measure as a match for the first image patch centred on the target pixel; and determining the disparity between the target pixel and the pixel of the second image in the centre of the second image patch selected as the match; and calculating a depth image from the plurality of disparities.
  7. The method of claim 6 wherein the plurality of second image patches are selected as patches centred on pixels on an epipolar line.
  8. An image registration method of determining a transform between a first image and a second image, comprising calculating a measure of similarity between a first image patch of the first image and a second image patch of the second image according to the method of claim 1.
  9. An image registration method according to claim 8 wherein the first image and the second image are obtained from different image capture modalities.
  10. An image processing apparatus comprising a memory configured to store data indicative of a first image patch and a second image patch, the first image patch comprising a plurality of first intensity values each associated with an element of the first image patch, the second image patch comprising a plurality of second intensity values each associated with an element of the second image patch, the first image patch and the second image patch having a corresponding size and shape such that each element of the first image patch corresponds to an element on the second image patch; and a processor configured to determine a set of sub regions on the second image patch, each sub region being determined as the set of elements of the second image patch which correspond to elements of the first image patch having first intensity values within a range of first intensity values defined for that sub region; for each sub region of the set of sub regions, calculate the variance, over all of the elements of that sub region, of a function of the second intensity value associated with that element and the first intensity value associated with the corresponding element of the first image patch; and calculate a similarity measure between the first image patch and the second image patch as the sum over all sub regions of the calculated variances.
  11. The apparatus of claim 10 wherein the function of the second intensity value associated with an element and the first intensity value associated with the corresponding element of the first image patch is the difference between the second intensity value associated with the element and the first intensity value associated with the corresponding element of the first image patch.
  12. The apparatus of claim 10 wherein the function of the second intensity value associated with an element and the first intensity value associated with the corresponding element of the first image patch is the ratio of the second intensity value associated with the element and the first intensity value associated with the corresponding element of the first image patch.
  13. The apparatus of claim 10 wherein the first image patch and the second image patch are two dimensional image patches and the elements of the first image patch and the second image patch are pixels.
  14. The apparatus of claim 10 wherein the first image patch and the second image patch are three dimensional image patches and the elements of the first image patch and the second image patch are voxels.
  15. An imaging system comprising: a first camera configured to capture a first image of a scene; a second camera configured to capture a second image of the scene; and a processing module configured to calculate a plurality of disparities between pixels of the first image and the second image by, for each of a plurality of pixels of the first image, defining a first patch centred on a target pixel of the first image; defining a plurality of second image patches centred on pixels of the second image; calculating a measure of similarity between the first image patch and each second image patch of the plurality of second image patches using the method of claim 1; selecting the second image patch having the best similarity measure as a match for the first image patch centred on the target pixel; and determining the disparity between the target pixel and the pixel of the second image in the centre of the second image patch selected as the match; and calculating a depth image of the scene from the plurality of disparities.
  16. An imaging system according to claim 15 wherein the processor is further configured to select the plurality of second image patches as patches centred on pixels on an epipolar line.
  17. An underwater imaging system comprising the imaging system of claim 15.
  18. The apparatus of claim 10, wherein the processor is further configured to determine a transform between a first image and a second image, by calculating a measure of similarity between a first image patch of the first image and a second image patch of the second image.
  19. The apparatus of claim 18 further comprising an input module configured to receive the first image and the second image from different image capture modalities.
  20. A computer readable medium carrying processor executable instructions which when executed on a processor cause the processor to carry out a method according to claim 1.

Amended claims have been filed as follows:

1. A method of calculating a measure of similarity between a first image patch and a second image patch, the first image patch comprising a plurality of first intensity values each associated with an element of the first image patch, the second image patch comprising a plurality of second intensity values each associated with an element of the second image patch, the first image patch and the second image patch having a corresponding size and shape such that each element of the first image patch corresponds to an element on the second image patch, the method comprising determining a set of sub regions on the second image patch, each sub region being determined as the set of elements of the second image patch which correspond to elements of the first image patch having first intensity values within a range of first intensity values defined for that sub region; for each sub region of the set of sub regions, calculating a variance, over all of the elements of that sub region, of a function of the second intensity value associated with that element and the first intensity value associated with the corresponding element of the first image patch; and calculating the similarity measure as the sum over all sub regions of the calculated variances.

2. The method of claim 1 wherein the function of the second intensity value associated with an element and the first intensity value associated with the corresponding element of the first image patch is a difference between the second intensity value associated with the element and the first intensity value associated with the corresponding element of the first image patch.

3. The method of claim 1 wherein the function of the second intensity value associated with an element and the first intensity value associated with the corresponding element of the first image patch is a ratio of the second intensity value associated with the element and the first intensity value associated with the corresponding element of the first image patch.

4. The method of claim 1 wherein the first image patch and the second image patch are two dimensional image patches and the elements of the first image patch and the second image patch are pixels.

5. The method of claim 1 wherein the first image patch and the second image patch are three dimensional image patches and the elements of the first image patch and the second image patch are voxels.

6. A method of deriving a depth image from a first image and a second image, the method comprising calculating a plurality of disparities between pixels of the first image and the second image by, for each of a plurality of pixels of the first image, defining a first patch centred on a target pixel of the first image; defining a plurality of second image patches centred on pixels of the second image; calculating a measure of similarity between the first image patch and each second image patch of the plurality of second image patches using the method of claim 1; selecting the second image patch having the best similarity measure as a match for the first image patch centred on the target pixel; and determining the disparity between the target pixel and the pixel of the second image in the centre of the second image patch selected as the match; and calculating a depth image from the plurality of disparities.

7. The method of claim 6 wherein the plurality of second image patches are selected as patches centred on pixels on an epipolar line.

8. An image registration method of determining a transform between a first image and a second image, comprising calculating a measure of similarity between a first image patch of the first image and a second image patch of the second image according to the method of claim 1.

9. An image registration method according to claim 8 wherein the first image and the second image are obtained from different image capture modalities.

10. An image processing apparatus comprising a memory configured to store data indicative of a first image patch and a second image patch, the first image patch comprising a plurality of first intensity values each associated with an element of the first image patch, the second image patch comprising a plurality of second intensity values each associated with an element of the second image patch, the first image patch and the second image patch having a corresponding size and shape such that each element of the first image patch corresponds to an element on the second image patch; and a processor configured to determine a set of sub regions on the second image patch, each sub region being determined as the set of elements of the second image patch which correspond to elements of the first image patch having first intensity values within a range of first intensity values defined for that sub region; for each sub region of the set of sub regions, calculate a variance, over all of the elements of that sub region, of a function of the second intensity value associated with that element and the first intensity value associated with the corresponding element of the first image patch; and calculate a similarity measure between the first image patch and the second image patch as the sum over all sub regions of the calculated variances.

11. The apparatus of claim 10 wherein the function of the second intensity value associated with an element and the first intensity value associated with the corresponding element of the first image patch is a difference between the second intensity value associated with the element and the first intensity value associated with the corresponding element of the first image patch.

12. The apparatus of claim 10 wherein the function of the second intensity value associated with an element and the first intensity value associated with the corresponding element of the first image patch is a ratio of the second intensity value associated with the element and the first intensity value associated with the corresponding element of the first image patch.

13. The apparatus of claim 10 wherein the first image patch and the second image patch are two dimensional image patches and the elements of the first image patch and the second image patch are pixels.

14. The apparatus of claim 10 wherein the first image patch and the second image patch are three dimensional image patches and the elements of the first image patch and the second image patch are voxels.

15. An imaging system comprising: a first camera configured to capture a first image of a scene; a second camera configured to capture a second image of the scene; and a processing module configured to calculate a plurality of disparities between pixels of the first image and the second image by, for each of a plurality of pixels of the first image, defining a first patch centred on a target pixel of the first image; defining a plurality of second image patches centred on pixels of the second image; calculating a measure of similarity between the first image patch and each second image patch of the plurality of second image patches using the method of claim 1; selecting the second image patch having the best similarity measure as a match for the first image patch centred on the target pixel; and determining the disparity between the target pixel and the pixel of the second image in the centre of the second image patch selected as the match; and calculating a depth image of the scene from the plurality of disparities.

16. An imaging system according to claim 15 wherein the processor is further configured to select the plurality of second image patches as patches centred on pixels on an epipolar line.

17. An underwater imaging system comprising the imaging system of claim 15.

18. The apparatus of claim 10, wherein the processor is further configured to determine a transform between a first image and a second image, by calculating a measure of similarity between a first image patch of the first image and a second image patch of the second image.

19. The apparatus of claim 18 further comprising an input module configured to receive the first image and the second image from different image capture modalities.

20. A computer readable medium carrying processor executable instructions which when executed on a processor cause the processor to carry out a method according to claim 1.
GB1219844.6A 2012-11-05 2012-11-05 Image processing with similarity measure of two image patches Withdrawn GB2507558A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1219844.6A GB2507558A (en) 2012-11-05 2012-11-05 Image processing with similarity measure of two image patches
JP2013227733A JP5752770B2 (en) 2012-11-05 2013-10-31 Image processing method and apparatus
US14/072,427 US20160148393A2 (en) 2012-11-05 2013-11-05 Image processing method and apparatus for calculating a measure of similarity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1219844.6A GB2507558A (en) 2012-11-05 2012-11-05 Image processing with similarity measure of two image patches

Publications (2)

Publication Number Publication Date
GB201219844D0 GB201219844D0 (en) 2012-12-19
GB2507558A true GB2507558A (en) 2014-05-07

Family

ID=47429143

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1219844.6A Withdrawn GB2507558A (en) 2012-11-05 2012-11-05 Image processing with similarity measure of two image patches

Country Status (3)

Country Link
US (1) US20160148393A2 (en)
JP (1) JP5752770B2 (en)
GB (1) GB2507558A (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10057593B2 (en) * 2014-07-08 2018-08-21 Brain Corporation Apparatus and methods for distance estimation using stereo imagery
US10032280B2 (en) 2014-09-19 2018-07-24 Brain Corporation Apparatus and methods for tracking salient features
US10282623B1 (en) * 2015-09-25 2019-05-07 Apple Inc. Depth perception sensor data processing
US10977811B2 (en) * 2017-12-20 2021-04-13 AI Analysis, Inc. Methods and systems that normalize images, generate quantitative enhancement maps, and generate synthetically enhanced images
CN109191496B (en) * 2018-08-02 2020-10-02 阿依瓦(北京)技术有限公司 Motion prediction method based on shape matching

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05157528A (en) * 1991-12-03 1993-06-22 Nippon Steel Corp Three-dimensional analyzing method for shape of corrosion
JPH11167634A (en) * 1997-12-03 1999-06-22 Omron Corp Image area dividing method, image area dividing device, recording medium storing image area dividing program, image retrieving method, image retrieving device and recording medium storing image retrieval program.
GB0125774D0 (en) * 2001-10-26 2001-12-19 Cableform Ltd Method and apparatus for image matching
JP4556437B2 (en) * 2004-02-03 2010-10-06 ソニー株式会社 Video classification device, video classification method, video classification method program, and recording medium recording video classification method program
US7724944B2 (en) * 2004-08-19 2010-05-25 Mitsubishi Electric Corporation Image retrieval method and image retrieval device
US20060098897A1 (en) * 2004-11-10 2006-05-11 Agfa-Gevaert Method of superimposing images
US9366774B2 (en) * 2008-07-05 2016-06-14 Westerngeco L.L.C. Using cameras in connection with a marine seismic survey
JP5358856B2 (en) * 2009-04-24 2013-12-04 公立大学法人首都大学東京 Medical image processing apparatus and method
US8121400B2 (en) * 2009-09-24 2012-02-21 Huper Laboratories Co., Ltd. Method of comparing similarity of 3D visual objects
US20110075935A1 (en) * 2009-09-25 2011-03-31 Sony Corporation Method to measure local image similarity based on the l1 distance measure
US20110080466A1 (en) * 2009-10-07 2011-04-07 Spatial View Inc. Automated processing of aligned and non-aligned images for creating two-view and multi-view stereoscopic 3d images
US20110164108A1 (en) * 2009-12-30 2011-07-07 Fivefocal Llc System With Selective Narrow FOV and 360 Degree FOV, And Associated Methods
WO2013078404A1 (en) * 2011-11-22 2013-05-30 The Trustees Of Dartmouth College Perceptual rating of digital image retouching

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7333651B1 (en) * 1998-04-13 2008-02-19 Korea Institute Of Science And Technology Method and apparatus for measuring similarity using matching pixel count
EP2386998A1 (en) * 2010-05-14 2011-11-16 Honda Research Institute Europe GmbH A Two-Stage Correlation Method for Correspondence Search
WO2013029675A1 (en) * 2011-08-31 2013-03-07 Metaio Gmbh Method for estimating a camera motion and for determining a three-dimensional model of a real environment

Also Published As

Publication number Publication date
JP2014112362A (en) 2014-06-19
JP5752770B2 (en) 2015-07-22
GB201219844D0 (en) 2012-12-19
US20160148393A2 (en) 2016-05-26
US20140125773A1 (en) 2014-05-08

Similar Documents

Publication Publication Date Title
US8768046B2 (en) Determining model parameters based on transforming a model of an object
KR102415501B1 (en) Method for assuming parameter of 3d display device and 3d display device thereof
Zheng et al. Single-image vignetting correction
Heo et al. Joint depth map and color consistency estimation for stereo images with different illuminations and cameras
US20100315490A1 (en) Apparatus and method for generating depth information
CA3012721A1 (en) Systems and methods for automated camera calibration
US20210082086A1 (en) Depth-based image stitching for handling parallax
Müller et al. Illumination-robust dense optical flow using census signatures
US10853960B2 (en) Stereo matching method and apparatus
US9767383B2 (en) Method and apparatus for detecting incorrect associations between keypoints of a first image and keypoints of a second image
US20160148393A2 (en) Image processing method and apparatus for calculating a measure of similarity
Bastanlar et al. Multi-view structure-from-motion for hybrid camera scenarios
US10319105B2 (en) Method and system for calibrating an image acquisition device and corresponding computer program product
Jung et al. Object detection and tracking-based camera calibration for normalized human height estimation
CN105335959B (en) Imaging device quick focusing method and its equipment
US9721348B2 (en) Apparatus and method for raw-cost calculation using adaptive window mask
Chang et al. Robust stereo matching with trinary cross color census and triple image-based refinements
US10217233B2 (en) Method of estimating image depth using birefringent medium and apparatus thereof
Lee et al. Vehicle counting based on a stereo vision depth maps for parking management
KR101705333B1 Estimation Method of Depth Variation around SIFT Features using a Stereo Camera
Lee et al. Simultaneous object tracking and depth estimation using color shifting property of a multiple color-filter aperture camera
Maki et al. Conditional variance of differences: A robust similarity measure for matching and registration
Lhuillier et al. Synchronization and self-calibration for helmet-held consumer cameras, applications to immersive 3d modeling and 360 video
Xiao et al. Accurate feature extraction and control point correction for camera calibration with a mono-plane target
Kadmin et al. Local Stereo Matching Algorithm Using Modified Dynamic Cost Computation [J]

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)