AU2009251208A1 - Estimation of image feature orientation - Google Patents

Estimation of image feature orientation

Info

Publication number
AU2009251208A1
Authority
AU
Australia
Prior art keywords
orientation
image
computer program
gradient
double angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2009251208A
Inventor
Nagita Mehrseresht
Alan Valev Tonisson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to AU2009251208A
Publication of AU2009251208A1
Legal status: Abandoned

Abstract

ESTIMATION OF IMAGE FEATURE ORIENTATION

A method for estimating an orientation of an image feature located in an image region is disclosed. The method comprises the steps of: determining a plurality of gradient values from image pixels in the image region, wherein each gradient value is determined from a pair of differences located about a point, each difference being the difference between diagonally adjacent image pixels (410); calculating a double angle orientation representation for each of the plurality of gradient values (450); processing the plurality of double angle orientation representations to generate smoothed double angle orientation representations (460); calculating at least one second order gradient value based on differences of the first order gradient values (420); and generating an estimate of an orientation of the image feature based on the at least one second order gradient value and the smoothed double angle orientation representations (470, 480, 490).

[Fig. 4: flow diagram of the method — input image (400) → calculate first order gradient vector field (410); then apply high pass filter (430), convert to double angle representation (450) followed by averaging (460), and calculate second order gradient vector field (420); combine (440) → generate energy operator (470) → correct rotation (480) → apply averaging filter (485) → orientation vector field (490).]

Description

S&F Ref: 921372

AUSTRALIA — PATENTS ACT 1990 — COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan
Actual Inventor(s): Alan Valev Tonisson, Nagita Mehrseresht
Address for Service: Spruson & Ferguson, St Martins Tower, Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)
Invention Title: Estimation of image feature orientation

The following statement is a full description of this invention, including the best method of performing it known to me/us:

ESTIMATION OF IMAGE FEATURE ORIENTATION

TECHNICAL FIELD

The present invention relates generally to image processing, and more particularly to automatic estimation of the orientation of features in digitised images.

BACKGROUND

Images contain a number of types of features that are of interest in image processing applications. Orientation is an important property of image features such as lines and edges. Estimation of the orientation of image features is thus an important aspect of several image processing applications, including:

• edge detection,
• pattern analysis,
• texture analysis, and
• image upsampling.

Image features such as lines and edges have constant intensity in a given direction; hence, the orientation of such features may be represented by a vector that is perpendicular to the isophotes (i.e., lines of constant intensity) of the image feature. The direction of the vector, representing the angle between the positive x-axis and the vector, may be taken to be between zero and 180 degrees, as a line or edge orientated at x degrees is indistinguishable from the same edge or line orientated at (x + 180) degrees. Alternatively, to avoid this ambiguity, the orientation of an image feature may be represented using a double angle representation, in which the angle between the x-axis and the orientation vector is doubled.

The orientation of an image feature at a point of interest may be estimated by applying an orientation filter to pixels in a neighbourhood of the point of interest. The size of the pixel neighbourhood or image region surrounding the point of interest determines the frequencies detectable by the orientation filter. A small neighbourhood enables detection of high frequency features, such as fine lines, using a small kernel, while a larger region and a larger kernel are required to detect low frequencies, such as slowly varying textures.

It is not always possible to assign an orientation to all points in an image. Regions that do not have a well defined orientation are those that have constant intensity or contain isophotes of rapidly varying orientation. These may be classified as intrinsically zero-dimensional (i0D) and intrinsically two-dimensional (i2D) regions, respectively. An orientation may be determined for regions which contain isophotes of near constant orientation. Such a region may be classified as an intrinsically one-dimensional (i1D) region.

An image that includes examples of regions with different intrinsic dimensions is shown in Figure 1. As region 100 (i1D) contains a single vertical line, all isophotes in the region have the same orientation; hence the orientation of the region may be represented by a vector perpendicular to the isophote orientation (i.e., by a vector parallel to the horizontal axis).
Conversely, an orientation cannot be determined for regions 110 (i0D) and 120 (i2D), as region 110 has constant intensity, and consequently there is no single isophote orientation, while region 120 contains isophotes of rapidly varying orientation.

Numerous methods for estimating the orientation of lines and edges in images are based on the gradient operator. A continuous (monochromatic) image may be treated as a continuous, differentiable real valued function I(x, y) of two variables. The function I(x, y) defines the intensity of the image at any given point (x, y). Such an intensity function is assumed to be differentiable, as this will always be true for real world images: there is some degree of blurring in any optical system, which is equivalent to smoothing the image intensity.

Isophotes (e.g., edges) are perpendicular to the gradient, which points in the direction of maximum increase in intensity. The gradient operator therefore provides a means of estimating the orientation of image features. The gradient operator may be approximated, for a sampled image, using a pair of finite impulse response filters (e.g., a Sobel operator).

A serious deficiency of orientation estimation methods based on the gradient operator occurs at the peaks and valleys of the image (i.e., at points corresponding to local maxima and minima of the intensity), where the gradient has zero magnitude and hence an orientation cannot be determined. In particular, the gradient approaches zero and reverses direction at the centres of line features, which are typically important for image analysis. This deficiency may be avoided by the use of a smoothed gradient square tensor (GST). However, since the smoothed GST is still based on the gradient operator, orientation estimates obtained from the smoothed GST can be unreliable for some features, where the response of the smoothed GST may vary in amplitude.

A more sophisticated complex valued orientation energy operator has been proposed for the demodulation of fringe patterns, and has been shown to be more accurate and more sensitive to fine lines than the smoothed GST. The orientation energy operator has the advantage over the GST that it has a phase invariant response to narrow band signals, which provides a stable response across a wide variety of image features. The orientation energy operator is a continuous operator that is defined for continuous images; however, it is unclear how best to apply the operator to discrete images. Accordingly, there remains a need for a reliable and efficient method for estimating the orientation of features in digitised images.

SUMMARY

An aspect of the present invention provides a method for estimating an orientation of an image feature located in an image region.
The method comprises the steps of: determining a plurality of gradient values from image pixels in the image region, wherein each gradient value is determined from a pair of differences located about a point, each difference being the difference between diagonally adjacent image pixels; calculating a double angle orientation representation for each of the plurality of gradient values; processing the plurality of double angle orientation representations by applying a smoothing filter to generate smoothed double angle orientation representations; calculating at least one second order gradient value based on differences of the gradient values; and generating an estimate of an orientation of the image feature based on the at least one second order gradient value and the smoothed double angle orientation representations.

Further aspects of the present invention provide a computer system and a computer program product for performing the foregoing method for estimating an orientation of an image feature located in an image region.

BRIEF DESCRIPTION OF THE DRAWINGS

One or more embodiments of the present invention will be described hereinafter with reference to the following drawings, in which:

Figure 1 is an illustration of an image comprising intrinsically zero, one and two dimensional regions;
Figures 2A and 2B are schematic block diagrams of a computer system with which embodiments of the present invention may be practised;
Figure 3 is a schematic block diagram of an architecture for performing an orientation adaptive image upscaling method;
Figure 4 is a flow diagram of a method for estimating the orientation of an image feature in accordance with an embodiment of the present invention;
Figure 5 is a flow diagram of a method for estimating consistency of orientation over an image region in accordance with an embodiment of the present invention; and
Figure 6 is an illustration of the spatial relationship between the pixel grids of an input image, a first order gradient vector field and a second order gradient vector field, as used in embodiments of the present invention.

DETAILED DESCRIPTION

As mentioned hereinbefore, the orientation of an image feature such as a line or edge may be used by an image upsampling method or algorithm which employs orientation-adaptive interpolation. However, as not all regions of an image may have an inherent orientation, an image upsampling method or algorithm that involves orientation-adaptive filtering should also support upsampling of intrinsically zero-dimensional and intrinsically two-dimensional regions. In order to do so, the upsampling method must be capable of distinguishing between an intrinsically one-dimensional region, which may be upsampled by an orientation-adaptive method, and an intrinsically zero-dimensional or intrinsically two-dimensional region, which may be upsampled by an orientation-independent method. Such regions can be distinguished by measuring the consistency of orientation vectors over the region. For example, for a region in which all the orientation vectors are parallel, the consistency of orientation is high and the region is intrinsically one-dimensional. Conversely, for a region in which the directions of the orientation vectors vary widely, the consistency of orientation is low and the region is intrinsically two-dimensional. A region in which the pixel values are all equal is referred to as intrinsically zero-dimensional, in which case scaling and adaptive processing is trivial.
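Both the double angle representation and the consistency measure can be illustrated by treating orientation vectors as complex numbers. The following NumPy fragment is an illustration only and is not part of the specification; the sample angles and the helper name consistency are assumptions made for the example:

    import numpy as np

    # A feature at angle t and the same feature at t + 180 degrees give
    # gradient vectors g and -g. Squaring the gradient as a complex number
    # maps both to the same value (the double angle representation),
    # removing the 180-degree ambiguity discussed above.
    g = np.exp(1j * np.deg2rad(30))           # gradient of a feature at 30 degrees
    g_flipped = np.exp(1j * np.deg2rad(210))  # same feature, opposite direction
    assert np.allclose(g ** 2, g_flipped ** 2)

    # Consistency of orientation over a region: the magnitude of the sum of
    # the double angle vectors, normalised by the sum of their magnitudes.
    # Near-parallel vectors give a value near 1 (i1D); widely varying
    # directions give a value near 0 (i2D).
    def consistency(double_angle_vectors):
        total = np.abs(np.sum(double_angle_vectors))
        maximum = np.sum(np.abs(double_angle_vectors))
        return total / maximum if maximum > 0 else 0.0

    parallel = np.exp(2j * np.deg2rad([30, 31, 29, 30]))  # near-constant orientation
    varied = np.exp(2j * np.deg2rad([0, 50, 100, 150]))   # rapidly varying orientation
    print(consistency(parallel))  # close to 1.0
    print(consistency(varied))    # close to 0.0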
Embodiments of the present invention provide methods, systems and computer program products for estimating the orientation of image features in an image region.

Figures 2A and 2B collectively form a schematic block diagram of a general purpose computer system 200, with which embodiments of the present invention can be practised. Specifically, the computer system 200 may be programmed to perform the steps of the methods described hereinafter.

As shown in Figure 2A, the computer system 200 is formed by a computer module 201, input devices such as a keyboard 202, a mouse pointer device 203, a scanner 226, a camera 227, and a microphone 280, and output devices including a printer 215, a display device 214 and loudspeakers 217. An external Modulator-Demodulator (Modem) transceiver device 216 may be used by the computer module 201 for communicating to and from a communications network 220 via a connection 221. The network 220 may be a wide-area network (WAN), such as the Internet or a private WAN. Where the connection 221 is a telephone line, the modem 216 may be a traditional "dial-up" modem. Alternatively, where the connection 221 is a high capacity (e.g., cable) connection, the modem 216 may be a broadband modem. A wireless modem may also be used for wireless connection to the network 220.

The computer module 201 typically includes at least one processor 205 and a memory 206, for example formed from semiconductor random access memory (RAM) and semiconductor read only memory (ROM). The at least one processor 205 may comprise multiple processors, for example arranged in a pipelined or parallel configuration. The module 201 also includes a number of input/output (I/O) interfaces, including an audio-video interface 207 that couples to the video display 214, loudspeakers 217 and microphone 280, an I/O interface 213 for the keyboard 202, mouse 203, scanner 226, camera 227 and optionally a joystick (not illustrated), and an interface 208 for the external modem 216 and printer 215. In some implementations, the modem 216 may be incorporated within the computer module 201, for example within the interface 208. The computer module 201 also has a local network interface 211 which, via a connection 223, permits coupling of the computer system 200 to a local computer network 222, known as a Local Area Network (LAN). As also illustrated, the local network 222 may also couple to the wide network 220 via a connection 224, which would typically include a so-called "firewall" device or device of similar functionality. The interface 211 may be formed by one or more of an Ethernet™ arrangement, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement.

The interfaces 208 and 213 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 209 are provided and typically include a hard disk drive (HDD) 210. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 212 is typically provided and acts as a non-volatile source of data. Portable memory devices, such as optical disks (e.g., CD-ROM, DVD), USB-RAM and floppy disks, for example, may then be used as appropriate sources of data to the system 200.
The components 205 to 213 of the computer module 201 typically communicate via an interconnected bus 204, and in a manner which results in a conventional mode of operation of the computer system 200 known to those skilled in the relevant art. Examples of computers with which the arrangements or embodiments described herein can be practised include IBM PCs and compatibles, Sun SPARCstations, Apple Mac™ or similar computer systems. In other embodiments, the arrangements or embodiments described herein can be practised using embedded-type computer systems, such as computer systems embedded within equipment including (but not limited to) digital cameras, printers and scanners.

The methods or processes described hereinafter may be implemented as software, such as one or more application programs 233 executable within the computer system 200. In particular, the steps of the methods or processes described hereinafter may be implemented as programmed instructions 231 in the software 233 that are executed by the computer system 200. The software instructions 231 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the methods described herein, and a second part and the corresponding code modules manage a user interface between the first part and the user.

The software 233 is generally loaded into the computer system 200 from a computer readable medium (the software 233 and the computer readable medium together form a computer program product), and is then typically stored in the HDD 210, as illustrated in Figure 2A, or the memory 206, after which the software 233 can be executed by the computer system 200. In some instances, the application programs 233 may be supplied to the user encoded on one or more CD-ROMs 225 and read via the corresponding drive 212 prior to storage in the memory 210 or 206. Alternatively, the software 233 may be read by the computer system 200 from the networks 220 or 222, or loaded into the computer system 200 from other computer readable media. A computer readable storage medium refers to any storage medium that participates in providing instructions and/or data to the computer system 200 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external to the computer module 201. Examples of computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 201 include radio or infra-red transmission channels, as well as a network connection to another computer or networked device, and the Internet or Intranets, including email transmissions and information recorded on Websites and the like.

The second part of the application programs 233 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 214.
Through manipulation of typically the keyboard 202 and the mouse 203, a user of the computer system 200 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 217 and user voice commands input via the microphone 280.

Figure 2B is a detailed schematic block diagram of the at least one processor 205 and a "memory" 234. While only a single processor is shown in Figures 2A and 2B, those skilled in the art will appreciate that multiple processors or processor cores may be used to practise embodiments of the present invention. The memory 234 represents a logical aggregation of all the memory devices (including the HDD 210 and semiconductor memory 206) that can be accessed by the computer module 201 in Figure 2A.

When the computer module 201 is initially powered up, a power-on self-test (POST) program 250 executes. The POST program 250 is typically stored in a ROM 249 of the semiconductor memory 206. A program permanently stored in a hardware device such as the ROM 249 is sometimes referred to as firmware. The POST program 250 examines hardware within the computer module 201 to ensure proper functioning, and typically checks the processor 205, the memory (209, 206), and a basic input-output systems software (BIOS) module 251, also typically stored in the ROM 249, for correct operation. Once the POST program 250 has run successfully, the BIOS 251 activates the hard disk drive 210. Activation of the hard disk drive 210 causes a bootstrap loader program 252 that is resident on the hard disk drive 210 to execute via the processor 205. This loads an operating system 253 into the RAM memory 206, upon which the operating system 253 commences operation. The operating system 253 is a system level application, executable by the processor 205, to fulfil various high level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.

The operating system 253 manages the memory 209, 206 in order to ensure that each process or application running on the computer module 201 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 200 must be used properly so that each process can run effectively. Accordingly, the aggregated memory 234 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 200 and how such memory is used.

The processor 205 includes a number of functional modules, including a control unit 239, an arithmetic logic unit (ALU) 240, and a local or internal memory 248, sometimes called a cache memory. The cache memory 248 typically includes a number of storage registers 244-246 in a register section. One or more internal buses 241 functionally interconnect these functional modules. The processor 205 typically also has one or more interfaces 242 for communicating with external devices via the system bus 204, using a connection 218.
The application program 233 includes a sequence of instructions 231 that may include conditional branch and loop instructions. The program 233 may also include data 232 which is used in execution of the program 233. The instructions 231 and the data 232 are stored in memory locations 228-230 and 235-237, respectively. Depending upon the relative size of the instructions 231 and the memory locations 228-230, a particular instruction may be stored in a single memory location, as depicted by the instruction shown in the memory location 230. Alternately, an instruction may be segmented into a number of parts, each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 228-229.

In general, the processor 205 is given a set of instructions which are executed therein. The processor 205 then waits for a subsequent input, to which it reacts by executing another set of instructions. Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 202, 203, data received from an external source across one of the networks 220, 222, data retrieved from one of the storage devices 206, 209, or data retrieved from a storage medium 225 inserted into the corresponding reader 212. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 234.

The embodiments disclosed hereinafter may use input variables 254 that are stored in the memory 234 in corresponding memory locations 255-258. The embodiments disclosed hereinafter may produce output variables 261 that are stored in the memory 234 in corresponding memory locations 262-265. Intermediate variables may be stored in memory locations 259, 260, 266 and 267.

The register section 244-246, the arithmetic logic unit (ALU) 240, and the control unit 239 of the processor 205 work together to perform sequences of micro-operations needed to perform "fetch, decode, and execute" cycles for every instruction in the instruction set making up the program 233. Each fetch, decode, and execute cycle comprises:
(a) a fetch operation, which fetches or reads an instruction 231 from a memory location 228;
(b) a decode operation, in which the control unit 239 determines which instruction has been fetched; and
(c) an execute operation, in which the control unit 239 and/or the ALU 240 execute the instruction.
Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed, by which the control unit 239 stores or writes a value to a memory location 232.

Each step or sub-process in the methods or processes described hereinafter is associated with one or more segments of the program 233, and is performed by the register section 244-247, the ALU 240, and the control unit 239 in the processor 205 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 233.

A method for image upsampling that includes orientation-adaptive filtering is described hereinafter with reference to the example architecture depicted in Figure 3.
The example architecture depicted in Figure 3 may comprise computer software program modules (e.g., the orientation filter 320, the orientation-independent upsampler 310, the orientation-adaptive upsampler 340 and the blend stage 350) for use with the computer system 200 described hereinbefore with reference to Figures 2A and 2B. Specifically, the computer software program modules may be stored on the HDD 210 and/or memory 206, and/or may be downloaded from the networks 220 or 222.

Referring to Figure 3, a low-resolution image 300 and a desired output resolution 380 are inputs to the method, which outputs a high-resolution image 370 at the output resolution 380. The high-resolution image 370 is generated by combining a first image 315 and a second image 345 in a blend stage 350.

The first image 315 is generated by upsampling the low-resolution image 300 to the output resolution 380 by an orientation-independent upsampler or upsampling stage 310. Such upsampling may, for example, comprise bilinear or bicubic interpolation. Similarly, the second image 345 is generated by upsampling the low-resolution image 300 to the output resolution 380 by an orientation-adaptive upsampler or upsampling stage 340, such as a steerable filter. The orientation-adaptive upsampling stage 340 takes as inputs the low-resolution image 300, the output resolution 380, and an estimate of the orientation 330 of each pixel of the low-resolution image 300. The orientation estimate 330 is determined or calculated by an orientation filter 320, which also outputs an estimate of the consistency of orientation 360 in the region surrounding each pixel of the low-resolution image 300 to the blend stage 350.

Based on the estimated consistency of orientation 360, the adaptively-upsampled image 345 and the interpolated image 315 are combined on a per-pixel basis by the blend stage 350 to produce the high-resolution image 370. That is, each pixel of the high-resolution image 370 is selected from either the adaptively-upsampled image 345 or the interpolated image 315, depending on the consistency of orientation 360. When the orientation consistency estimate 360 is high, indicating the presence of an orientated image feature, the adaptively-upsampled image 345 is selected by the blend stage 350. Conversely, when the orientation consistency estimate 360 is low, the interpolated image 315 is selected.
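A minimal sketch of the per-pixel selection performed by the blend stage 350, assuming the consistency estimate 360 has already been resampled to the output resolution; the function name, the threshold parameter and its value of 0.5 are illustrative assumptions, not taken from the specification:

    import numpy as np

    def blend_stage(interpolated_315, adaptive_345, consistency_360, threshold=0.5):
        # Where the consistency of orientation is high, an orientated image
        # feature is present, so the adaptively-upsampled pixel is selected;
        # otherwise the orientation-independent interpolation is selected.
        mask = consistency_360 > threshold
        return np.where(mask, adaptive_345, interpolated_315)

A soft blend, weighting the two images by the consistency value, would be an alternative design; the specification describes per-pixel selection, which the hard mask above follows.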
A method that can be used to determine or calculate the orientation estimate 330 of Figure 3 is now described with reference to Figure 4. The method of Figure 4 may be practised as a computer software program for execution on a computer system such as the computer system 200 described hereinbefore with reference to Figures 2A and 2B.

Referring to Figure 4, an orientation vector may be calculated for each pixel of the input image 400 to produce an orientation vector field 490. The orientation vector field 490 may be considered as a complex valued image, whereby each pixel value is a complex number consisting of real and imaginary parts. The orientation vector field 490 has the same resolution as the input image 400.

The orientation vector field 490 is obtained from an energy operator, which is formed from a first complex image 465 and a second complex image 445. To calculate the first complex image 465, a gradient vector field 415, comprising a plurality of complex gradient values, is calculated at step 410, in which the real part is proportional to the component of the gradient of image 400 in the direction of u, and the imaginary part is proportional to the component of the gradient of image 400 in the direction of v. The vectors u and v are orthogonal, and are defined as u = (1, -1) and v = (1, 1). The gradient vector field 415 is therefore represented with respect to the basis vectors u and v, which are obtained by rotating the standard basis vectors (1, 0) and (0, 1) by -45 degrees and scaling the resultant vectors by √2. The gradients in the directions of u and v are calculated as differences between diagonally adjacent pixels of the input image 400 by convolving the input image 400 with the kernels:

    r_u = (  1   0 )        r_v = (  0   1 )
          (  0  -1 )              ( -1   0 )

Each gradient value is effectively determined from a pair of differences between four pixel values in the image 400. The differences are determined for a point located at the centre of the four pixels: a difference between two diagonally adjacent pixels is used for the real part of the complex gradient value, while a second difference between the remaining two diagonally adjacent pixels is used for the imaginary part of the complex gradient value. The application of the above two kernels to produce a complex valued first order gradient field, as described above, is considered equivalent to applying a complex valued Roberts operator.

As the kernels r_u and r_v are of even size, an image formed by convolving an input image with either kernel is shifted by half a pixel, in both the horizontal and vertical directions, relative to the input image. Thus, the gradient vector field 415, which is considered as a complex valued image, is shifted by half a pixel space both horizontally and vertically relative to the input image 400. The direction of the half pixel shift is dependent on how edges are managed when convolving an image with an even sized kernel, and may be different in different embodiments. The complex valued pixels in the gradient vector field image 415 may therefore be considered as representing gradient values estimated at points lying between the pixels of the input image.

As the gradient field is represented in complex space, the 180 degree periodicity of orientation may be avoided by converting the gradient vector field 415 to a double angle orientation representation at step 450, by taking the square of each value of the gradient field. This results in a double angle orientation representation for each of the complex gradient values. For the purpose of calculating the squares, the vectors in the gradient field are treated as complex numbers. As a result of the squaring operation, the double angle representation is spanned by the basis vectors (0, -1) and (1, 0), which is a -90 degree rotation of the standard basis vectors (1, 0) and (0, 1).

The first complex image 465 is then obtained by applying an averaging filter 460 to smooth the double angle orientation representation produced at step 450, forming a smoothed double angle orientation representation. The averaging filter applied in step 460 may be implemented by convolving the double angle orientation representation with a kernel b.
A suitable kernel is:

    b = ( 1/4  1/4 )
        ( 1/4  1/4 )

As the averaging filter used in step 460 uses an even sized kernel, the first complex image 465 is shifted by half a pixel relative to the double angle orientation representation. As a result, the values in the first complex image 465 represent quantities associated with spatial locations corresponding to pixels in the input image 400. That is, the half pixel shift introduced by the averaging filter in step 460 and the half pixel shift introduced by the complex Roberts operator in step 410 combine to produce a total shift of a whole pixel horizontally and vertically, such that the pixel values of the output of the averaging filter correspond spatially to positions of pixels in the original image 400.

The relationship between the pixel grids of the input image 400, the first order gradient vector field 415 and the first complex image 465 is illustrated in Figure 6 by pixel grids 610, 620 and 630, respectively. The centre of each square of grid 610 represents one input pixel. Each pixel of grid 620 is located at the centre of a 2x2 region of the input image pixel grid 610. Each pixel of grid 620 is associated with a gradient value that may be represented by a pair of numbers, and may be considered as a complex number or vector having a magnitude and orientation. Similarly, each pixel of grid 630 is located at the centre of a 2x2 block of the gradient vector field grid 620, which corresponds to a pixel location in the input image grid 610. Pixel values of the smoothed double angle orientation representation will be located on the grid 630.

To calculate the second complex image 445 that is used to generate the energy operator in step 470, a second order gradient vector field 425 of the input image 400 is required. This is calculated at step 420. The second order gradient vector field may also be considered as a complex image, and is calculated by applying the complex Roberts operator to the first order gradient vector field 415. The real part of the second order gradient vector field 425 is equal to the difference between two real images: the first of these images is formed by the convolution of r_u with the real part of the gradient vector field 415, while the second image is formed by the convolution of r_v with the imaginary part of the gradient vector field 415. The imaginary part of the second order gradient vector field 425 is produced by doubling the real part of the gradient vector field 415 and convolving the result with r_v; equivalently, it may be produced by doubling the imaginary part of the gradient vector field 415 and convolving the result with r_u. Each pixel value of the complex image of the second order gradient vector field has a real and an imaginary value as calculated above, and at least one pixel value is required to generate an orientation of an image feature. By applying the complex valued Roberts operator twice, the second order gradient vector field is shifted relative to the input image 400 by a whole pixel in the horizontal and vertical directions.
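The equivalence noted above between the two ways of forming the imaginary part follows from the fact that the two diagonal difference operators commute. A quick numeric check (illustrative only; the sign conventions chosen for the diagonal differences d_u and d_v are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    f = rng.standard_normal((6, 6))

    def d_u(a):  # diagonal difference along u, over each 2x2 block
        return a[:-1, :-1] - a[1:, 1:]

    def d_v(a):  # diagonal difference along v, over each 2x2 block
        return a[:-1, 1:] - a[1:, :-1]

    grad = d_u(f) + 1j * d_v(f)          # first order gradient field 415
    second = d_u(grad) + 1j * d_v(grad)  # complex Roberts operator applied twice

    # Real part: the difference between two real images, as described above.
    assert np.allclose(second.real, d_u(grad.real) - d_v(grad.imag))
    # Imaginary part: doubled real part convolved with r_v ...
    assert np.allclose(second.imag, 2 * d_v(grad.real))
    # ... or, equivalently, doubled imaginary part convolved with r_u.
    assert np.allclose(second.imag, 2 * d_u(grad.imag))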
In addition to the second order gradient vector field 425, a high-pass filtered image 435 of the input image 400 is used to calculate the second complex image 445 used to generate the energy operator at step 470. The high-pass filtered image 435 is produced by applying a high pass filter at step 430, which is used to remove the DC component of the image 400, as this may interfere with the orientation estimation method. The high-pass filter applied in step 430 is implemented by convolving the image 400 with a kernel h. To ensure the high-pass filtered image 435 is aligned with the input image 400, an odd sized kernel is required. An example of such a kernel is:

    h = ( -1/16  -2/16  -1/16 )
        ( -2/16  12/16  -2/16 )
        ( -1/16  -2/16  -1/16 )

The second complex image 445 is then formed by combining the second order gradient vector field 425 and the high-pass filtered image 435 in step 440. This may be achieved by multiplying the high-pass filtered image 435 by the second order gradient vector field 425. As the second order gradient vector field 425 is shifted by a whole pixel relative to the input image 400, the high-pass filtered image 435 is offset by a pixel in the horizontal and vertical directions prior to the multiplication operation.

Using the two complex images, a complex energy operator is then generated in step 470 by subtracting the second complex image 445 from the first complex image 465. As the first complex image 465 is shifted by one pixel relative to the input image 400, the second complex image 445 must be offset by one pixel prior to the subtraction operation to ensure the first and second complex images are aligned. As the complex energy operator is represented with respect to the basis vectors u and v, the complex energy operator is rotated in step 480 such that the rotated complex energy operator is represented with respect to the standard basis vectors (1, 0) and (0, 1). This may be achieved simply by multiplying the energy operator by the imaginary unit i, which corresponds to a rotation by 90 degrees. Finally, an averaging filter may be applied to the rotated complex energy operator output at step 485 to improve the accuracy of the orientation estimate in the presence of noise. This averaging filter may be implemented by convolving the output of step 480 with a Gaussian kernel g. An example of a typical Gaussian kernel is:

    g = ( 0.061  0.125  0.061 )
        ( 0.125  0.254  0.125 )
        ( 0.061  0.125  0.061 )

Once the averaging filter has been applied, the orientation vector field 490 is complete. Each pixel of the orientation vector field 490 provides an estimate of an orientation for a corresponding pixel in the input image 400. Any image feature found in the input image may use an orientation value from the orientation vector field 490 at a location corresponding to the image feature.
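The steps of Figure 4 map closely onto array operations. The following NumPy/SciPy sketch is an illustration only, not the specification's implementation: the sign conventions of the diagonal differences, the symmetric boundary handling and all function names are assumptions, and the half-pixel shifts are handled by cropping so that all intermediate images align on interior pixels of the input:

    import numpy as np
    from scipy.signal import convolve2d

    def roberts(a):
        # Complex Roberts operator (step 410): a pair of diagonal differences
        # over each 2x2 block, i.e. gradients along u = (1, -1) and v = (1, 1),
        # estimated at points lying half a pixel off the input grid.
        return (a[:-1, :-1] - a[1:, 1:]) + 1j * (a[:-1, 1:] - a[1:, :-1])

    def box2x2(a):
        # 2x2 averaging with kernel b (step 460); shifts the grid by a further
        # half pixel, so the result lands back on input pixel positions.
        return (a[:-1, :-1] + a[:-1, 1:] + a[1:, :-1] + a[1:, 1:]) / 4.0

    def orientation_vector_field(image):
        f = image.astype(float)
        grad = roberts(f)                      # gradient vector field 415
        first_complex = box2x2(grad ** 2)      # steps 450, 460 -> image 465
        second_grad = roberts(grad)            # step 420 -> field 425
        h = np.array([[-1.0, -2.0, -1.0],
                      [-2.0, 12.0, -2.0],
                      [-1.0, -2.0, -1.0]]) / 16.0
        hp = convolve2d(f, h, mode='same', boundary='symm')  # step 430 -> image 435
        # Step 440: both second_grad and hp[1:-1, 1:-1] sit on interior input
        # pixels, so the whole-pixel offset of the specification is implicit.
        second_complex = second_grad * hp[1:-1, 1:-1]        # image 445
        energy = 1j * (first_complex - second_complex)       # steps 470, 480
        g = np.array([[0.061, 0.125, 0.061],
                      [0.125, 0.254, 0.125],
                      [0.061, 0.125, 0.061]])
        return convolve2d(energy, g, mode='same', boundary='symm')  # step 485

    # The single angle orientation at each pixel is half the argument of the
    # double angle vector: theta = np.angle(orientation_vector_field(img)) / 2

The output here has two fewer rows and columns than the input, corresponding to its interior pixels; producing a field at the full input resolution, as the specification describes, depends on the edge handling chosen for the even sized kernels.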
In addition to the orientation estimate 330, the image upsampling architecture described hereinbefore with reference to Figure 3 requires an estimate of the consistency of orientation for each region. A measure of the consistency of orientation for a given region may be estimated by calculating a total orientation strength and normalizing the result by a maximum orientation strength. In one embodiment of the present invention, the normalization is performed by dividing the total orientation strength by the corresponding maximum orientation strength to produce a consistency estimate. By normalizing the total orientation strength, the consistency estimate is constrained to a value between zero and one.

The total orientation strength is calculated as the magnitude of the sum of all orientation vectors in a region, while the maximum orientation strength is defined as the sum of the magnitudes of all orientation vectors in the region. For a region that does not feature a dominant orientation, this method gives a consistency estimate with a value close to zero. Conversely, a consistency estimate close to one (unity) indicates uniform orientation across a region. In other embodiments of the present invention, the sums may be weighted sums calculated by convolution with an averaging kernel.

A method that can be used to estimate the consistency of orientation 360 in Figure 3 is now described with reference to Figure 5. The method of Figure 5 may be practised as a computer software program for execution on a computer system such as the computer system 200 described hereinbefore with reference to Figures 2A and 2B.

The method of Figure 5 generates a reliability map 590, made up of a plurality of consistency estimates, from an input image 500. The reliability map 590 is an image with the same resolution as the input image 500, in which the value of each pixel (x, y) is a consistency estimate between zero and one, with the value indicating the consistency of orientation in the region surrounding the corresponding pixel (x, y) of image 500.

The reliability map 590 is calculated from a total orientation strength map 575, made up of a plurality of total orientation strength values, and a maximum orientation strength map 545, made up of a plurality of maximum orientation strength values, which may be considered as images of the same resolution as the input image 500. The total orientation strength map 575 and the maximum orientation strength map 545 represent the total orientation strength and the maximum orientation strength, respectively, for the region surrounding each pixel of the image 500.

As a first step toward calculating the total orientation strength map 575 and the maximum orientation strength map 545, a gradient vector field made up of a plurality of complex gradient values is calculated for the image 500 at step 510, as was done to calculate the first order gradient vector field in step 410 of Figure 4. Following step 510, the total orientation strength map 575 and the maximum orientation strength map 545 are separately calculated.

As a first step towards calculating the total orientation strength map 575, the 180 degree periodicity of orientation is removed by converting the gradient vector field to a double angle orientation representation 555 at step 550. As the gradient vector field is represented in complex space, the conversion may be implemented by taking the complex square of each vector of the gradient vector field. To measure the consistency of the orientation for each region, an averaging filter is applied to the double angle representation 555 at step 560, which produces a smoothed double angle representation. Steps 550 and 560 of Figure 5 are similar to steps 450 and 460, respectively, of Figure 4, and may use the smoothed double angle representation 465, as shown by the dotted line in Figure 5. The averaging filter determines the size of the region over which the consistency of the orientation is determined, as well as the weighting that is applied to each component of the region.
An example of a kernel that applies equal weighting to a 3x3 region is as follows:

    k = ( 1/9  1/9  1/9 )
        ( 1/9  1/9  1/9 )
        ( 1/9  1/9  1/9 )

However, those skilled in the art will appreciate that other kernels may alternatively be used. The size of the kernel determines the sensitivity of the consistency of orientation estimate to changes in orientation, as well as the robustness of the measure under noise. Larger regions and larger kernels may be used to improve the robustness to noise. Following application of the averaging filter in step 560, the total orientation strength map 575 is obtained by calculating a scalar field at step 570 (i.e., taking the magnitude of the smoothed double angle representations output from step 560, or of the smoothed double angle representation 465). The value of each pixel of the total orientation strength map is proportional to the strength of orientation in the corresponding region of the input image 500.

To calculate the maximum orientation strength map 545, a scalar field is calculated at step 520 by calculating the squared magnitude of each vector of the gradient vector field 510. The maximum orientation strength map 545 is then obtained by applying an averaging filter to the scalar field at step 540. The averaging filter calculates a sum of the squared magnitudes and forms a maximum magnitude value for each value of the vector field 510. The maximum orientation strength map 545 comprises the maximum magnitude values. The averaging filter may be implemented using the same kernel used by the averaging filter in step 560, as described hereinbefore. Different filters may alternatively be used in steps 560 and 540; however, the resulting consistency estimates may then not always fall between zero and one. In one embodiment, the averaging filter applied in step 560 has a stronger response to higher frequencies than the averaging filter applied in step 540. This serves to localise the response more strongly to regions with sharp edge features.

Finally, the reliability map 590 is generated by normalizing the total orientation strength map 575 at step 580. As stated above, the reliability map 590 is an image with the same resolution as the input image 500, in which the value of each pixel (x, y) is a consistency estimate between zero and one, with the value indicating the consistency of orientation in the region surrounding the corresponding pixel (x, y) of image 500. The reliability map is based on a comparison of the maximum orientation strength map 545, comprising the maximum magnitude values, and the magnitudes of the total orientation strength map 575.

In one embodiment, normalization is performed by dividing the total orientation strength map 575 by the maximum orientation strength map 545 to generate a reliability map 590 made up of a plurality of ratios representing consistency estimates. In another embodiment, the consistency estimates comprise binary values, determined by comparing the ratio of each total orientation strength value and its corresponding maximum orientation strength value with a predetermined threshold value: if the ratio is greater than the threshold, the corresponding consistency estimate is set to 1, and if the ratio is less than the threshold, the corresponding consistency estimate is set to 0.
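The Figure 5 pipeline may be sketched as follows (illustrative only; the image borders and the half-pixel grid alignment of Figure 6 are simplified, so the map here is one pixel smaller than the input, and the optional threshold argument is an assumption standing in for the binary-valued embodiment):

    import numpy as np
    from scipy.signal import convolve2d

    def reliability_map(image, threshold=None):
        f = image.astype(float)
        # Step 510: complex Roberts gradient, as in the earlier sketch.
        grad = (f[:-1, :-1] - f[1:, 1:]) + 1j * (f[:-1, 1:] - f[1:, :-1])

        k = np.ones((3, 3)) / 9.0  # equal-weight 3x3 averaging kernel

        # Steps 550, 560: double angle representation, then averaging.
        smoothed = convolve2d(grad ** 2, k, mode='same', boundary='symm')
        # Step 570: total orientation strength = |weighted sum of vectors|.
        total = np.abs(smoothed)

        # Steps 520, 540: maximum orientation strength = weighted sum of the
        # squared magnitudes |g|^2, the magnitude of each double angle vector.
        maximum = convolve2d(np.abs(grad) ** 2, k, mode='same', boundary='symm')

        # Step 580: normalise; with the same kernel in both branches the
        # ratio lies between zero and one by the triangle inequality.
        with np.errstate(divide='ignore', invalid='ignore'):
            ratio = np.where(maximum > 0, total / maximum, 0.0)
        if threshold is not None:
            return (ratio > threshold).astype(float)  # binary-valued embodiment
        return ratio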
An embodiment of the present invention provides a method for estimating an orientation of an image feature located in an image region. The method comprises the steps of: determining a plurality of gradient values from image pixels in the image region, wherein each gradient value is determined from a pair of differences located about a point, each difference being the difference between diagonally adjacent image pixels; calculating a double angle orientation representation for each of the plurality of gradient values; processing the plurality of double angle orientation representations to generate smoothed double angle orientation representations; calculating at least one second order gradient value based on differences of the gradient values; and generating an estimate of an orientation of the image feature based on the at least one second order gradient value and the smoothed double angle orientation representations.

The diagonally adjacent image pixels for a pair of differences may be located in a 2x2 block of pixels in the image region, and the plurality of gradient values may each represent a gradient at a point located at the centre of the 2x2 block.

The method may comprise the further steps of: applying a high pass filter to the image region; and combining the high pass filtered image region with the at least one second order gradient value.

The step of generating an estimate of an orientation of the image feature may comprise the sub-steps of: combining the at least one second order gradient value and the smoothed double angle orientation representations to generate an energy operator; and applying an averaging filter to the energy operator.

The method may comprise the further steps of: determining a squared magnitude for each of the plurality of gradient values; calculating a sum of the squared magnitudes to form a maximum magnitude; determining a second magnitude of the smoothed double angle orientation representations; and determining a consistency of the estimated orientation based on a comparison of the maximum magnitude and the determined second magnitude. The sum of the squared magnitudes may be calculated as a first weighted sum, and the second magnitude may be the magnitude of a second weighted sum of double angle representations. The consistency may be determined using a ratio calculated by dividing the second magnitude by the maximum magnitude. The consistency may be determined by comparing the ratio to a predetermined threshold.

The weights used to calculate the first weighted sum may comprise the coefficients of a first filter kernel, the weights used to calculate the second weighted sum may comprise the coefficients of a second filter kernel, and the filter represented by the second filter kernel may have a stronger response to high frequencies than the filter represented by the first filter kernel.

Another embodiment of the present invention provides a computer system for estimating an orientation of an image feature located in an image region. The computer system comprises: a memory for storing data and program instructions; and at least one processor coupled to the memory.
The at least one processor is programmed to: calculate a plurality of gradient values from image pixels in the image region, wherein each gradient value is determined from a pair of differences located about a point, each difference being the difference between diagonally adjacent image pixels; calculate a double angle orientation representation for each of the plurality of gradient values; process the plurality of double angle orientation representations to generate smoothed double angle orientation representations; calculate at least one second order gradient value based on differences of the gradient values; and calculate an estimate of an orientation of the image feature based on the at least one second order gradient value and the smoothed double angle orientation representations.

Another embodiment of the present invention provides a computer readable medium comprising a computer program recorded therein for estimating an orientation of an image feature located in an image region. The computer program comprises: computer program code means for calculating a plurality of gradient values from image pixels in the image region, wherein each gradient value is determined from a pair of differences located about a point, each difference being the difference between diagonally adjacent image pixels; computer program code means for calculating a double angle orientation representation for each of the plurality of gradient values; computer program code means for processing the plurality of double angle orientation representations to generate smoothed double angle orientation representations; computer program code means for calculating at least one second order gradient value based on differences of the gradient values; and computer program code means for calculating an estimate of an orientation of the image feature based on the at least one second order gradient value and the smoothed double angle orientation representations.

Embodiments of methods, systems and computer program products have been described hereinbefore for estimating an orientation of an image feature located in an image region. When compared to existing arrangements, the embodiments described herein are advantageously accurate, robust in the presence of noise, and have a high degree of sensitivity to fine lines and/or texture in images or image regions. In certain embodiments, an indication of the accuracy of the estimated orientation, that is, a reliability estimate of the estimated orientation, is provided.

Where specific features, elements and steps referred to herein have known equivalents in the art to which the invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth. Furthermore, features, elements and steps referred to or described in relation to one particular embodiment of the invention may form part of any of the other embodiments unless stated to the contrary.

INDUSTRIAL APPLICABILITY

The arrangements described herein are applicable to the computer and data processing industries, and are particularly applicable to digital image capture and processing applications.

The foregoing describes only a small number of embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.
(Australia Only) In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.

Claims (24)

1. A method for estimating an orientation of an image feature located in an image region, said method comprising the steps of:
determining a plurality of gradient values from image pixels in the image region, wherein each gradient value is determined from a pair of differences located about a point, each difference being the difference between diagonally adjacent image pixels;
calculating a double angle orientation representation for each of the plurality of gradient values;
processing said plurality of double angle orientation representations by applying a smoothing filter to generate smoothed double angle orientation representations;
calculating at least one second order gradient value based on differences of said gradient values; and
generating an estimate of an orientation of the image feature based on the at least one second order gradient value and the smoothed double angle orientation representations.
2. The method according to claim 1, comprising the further steps of:
applying a high pass filter to said image region; and
combining the high pass filtered image region with said at least one second order gradient value.
3. The method according to claim 1, wherein said step of generating an estimate of an orientation of the image feature comprises the sub-steps of:
combining said at least one second order gradient value and said smoothed double angle orientation representations to generate an energy operator; and
applying an averaging filter to said energy operator.
4. The method according to claim 1, comprising the further steps of:
determining a squared magnitude for each value of the plurality of gradient values;
calculating a sum of the squared magnitudes to form a maximum magnitude;
determining a second magnitude of the smoothed double angle orientation representations; and
determining a consistency of the estimated orientation based on a comparison of said maximum magnitude and said determined second magnitude.
5. The method according to claim 4, wherein said sum of the squared magnitudes is calculated as a first weighted sum, and wherein said smoothed double angle orientation representation is calculated as a second weighted sum.
6. The method according to claim 4, wherein said consistency is determined using a ratio calculated by dividing said second magnitude by said maximum magnitude.
7. The method according to claim 6, wherein said consistency is determined by comparing said ratio to a predetermined threshold.
8. The method according to claim 5, wherein:
the weights used to calculate said first weighted sum comprise the coefficients of a first filter kernel;
the weights used to calculate said second weighted sum comprise the coefficients of a second filter kernel; and
the filter represented by said second filter kernel has a stronger response to high frequencies than the filter represented by said first filter kernel.
9. The method according to claim 1, wherein the diagonally adjacent image pixels for a pair of differences are located in a 2x2 block of pixels in the image region, and the plurality of gradient values each represent a gradient at a point located at the centre of the 2x2 block.
10. A computer system for estimating an orientation of an image feature located in an image region, said computer system comprising:
a memory for storing data and program instructions; and
at least one processor coupled to said memory, said at least one processor programmed to:
calculate a plurality of gradient values from image pixels in the image region, wherein each gradient value is determined from a pair of differences located about a point, each difference being the difference between diagonally adjacent image pixels;
calculate a double angle orientation representation for each of the plurality of gradient values;
process said plurality of double angle orientation representations to generate smoothed double angle orientation representations;
calculate at least one second order gradient value based on differences of said gradient values; and
calculate an estimate of an orientation of the image feature based on the at least one second order gradient value and the smoothed double angle orientation representations.
11. The computer system according to claim 10, wherein said at least one processor is further programmed to:
calculate a squared magnitude for each value of the plurality of gradient values;
calculate a sum of the squared magnitudes to form a maximum magnitude;
calculate a second magnitude of the smoothed double angle orientation representations; and
determine a consistency of the estimated orientation based on a comparison of said maximum magnitude and said determined second magnitude.
12. The computer system according to claim 11, wherein said at least one processor is programmed to:
calculate said sum of the squared magnitudes as a first weighted sum; and
calculate said smoothed double angle orientation representation as a second weighted sum.
13. The computer system according to claim 11, wherein said at least one processor is programmed to determine said consistency by calculating a ratio of said second magnitude to said maximum magnitude.
14. The computer system according to claim 13, wherein said at least one processor is programmed to determine said consistency by comparing said ratio to a predetermined threshold.
15. The computer system according to claim 12, wherein:
the weights used to calculate said first weighted sum comprise the coefficients of a first filter kernel;
the weights used to calculate said second weighted sum comprise the coefficients of a second filter kernel; and
the filter represented by said second filter kernel has a stronger response to high frequencies than the filter represented by said first filter kernel.
16. A computer program product comprising a computer readable medium comprising a computer program recorded therein for estimating an orientation of an image feature located in an image region, said computer program product comprising:
computer program code means for calculating a plurality of gradient values from image pixels in the image region, wherein each gradient value is determined from a pair of differences located about a point, each difference being the difference between diagonally adjacent image pixels;
computer program code means for calculating a double angle orientation representation for each of the plurality of gradient values;
computer program code means for processing said plurality of double angle orientation representations to generate smoothed double angle orientation representations;
computer program code means for calculating at least one second order gradient value based on differences of said gradient values; and
computer program code means for calculating an estimate of an orientation of the image feature based on the at least one second order gradient value and the smoothed double angle orientation representations.
17. The computer program product according to claim 16, further comprising:
computer program code means for calculating a squared magnitude for each value of the plurality of gradient values;
computer program code means for calculating a sum of the squared magnitudes to form a maximum magnitude;
computer program code means for calculating a second magnitude of the smoothed double angle orientation representations; and
computer program code means for determining a consistency of the estimated orientation based on a comparison of said maximum magnitude and said calculated second magnitude.
18. The computer program product according to claim 17, comprising:
computer program code means for calculating said sum of the squared magnitudes as a first weighted sum; and
computer program code means for calculating said smoothed double angle orientation representation as a second weighted sum.
19. The computer program product according to claim 17, wherein said computer program code means for determining a consistency of the estimated orientation comprises computer program code means for calculating a ratio of said second magnitude to said maximum magnitude.
20. The computer program product according to claim 19, wherein said computer program code means for determining a consistency of the estimated orientation comprises computer program code means for comparing said ratio to a predetermined threshold.
21. The computer program product according to claim 18, wherein:
the weights used to calculate said first weighted sum comprise the coefficients of a first filter kernel;
the weights used to calculate said second weighted sum comprise the coefficients of a second filter kernel; and
the filter represented by said second filter kernel has a stronger response to high frequencies than the filter represented by said first filter kernel.
22. A method for estimating an orientation of an image feature located in an image region, said method substantially as herein described with reference to an embodiment as shown in one or more of the accompanying drawings.
23. A computer system for estimating an orientation of an image feature located in an image region, said computer system substantially as herein described with reference to an embodiment as shown in one or more of the accompanying drawings.
24. A computer program product comprising a computer readable medium comprising a computer program recorded therein for estimating an orientation of an image feature located in an image region, said computer program product substantially as herein described with reference to an embodiment as shown in one or more of the accompanying drawings.

Dated 24 December, 2009
Canon Kabushiki Kaisha
Patent Attorneys for the Applicant/Nominated Person
SPRUSON & FERGUSON
AU2009251208A 2009-12-24 2009-12-24 Estimation of image feature orientation Abandoned AU2009251208A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2009251208A AU2009251208A1 (en) 2009-12-24 2009-12-24 Estimation of image feature orientation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2009251208A AU2009251208A1 (en) 2009-12-24 2009-12-24 Estimation of image feature orientation

Publications (1)

Publication Number Publication Date
AU2009251208A1 true AU2009251208A1 (en) 2011-07-14

Family

ID=45419837

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2009251208A Abandoned AU2009251208A1 (en) 2009-12-24 2009-12-24 Estimation of image feature orientation

Country Status (1)

Country Link
AU (1) AU2009251208A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2575333A (en) * 2018-12-21 2020-01-08 Imagination Tech Ltd Double-angle gradients
EP3671637A1 (en) * 2018-12-21 2020-06-24 Imagination Technologies Limited Double-angle gradients
CN111354020A (en) * 2018-12-21 2020-06-30 畅想科技有限公司 Double angle gradient
GB2575333B (en) * 2018-12-21 2020-09-09 Imagination Tech Ltd Double-angle gradients
GB2587266A (en) * 2018-12-21 2021-03-24 Imagination Tech Ltd Double-angle gradients
GB2587266B (en) * 2018-12-21 2021-11-10 Imagination Tech Ltd Double-angle gradients
US11386571B2 (en) 2018-12-21 2022-07-12 Imagination Technologies Limited Double-angle gradients
EP4187488A1 (en) * 2018-12-21 2023-05-31 Imagination Technologies Limited Double-angle gradients
US11893754B2 (en) 2018-12-21 2024-02-06 Imagination Technologies Limited Determining dominant gradient orientation in image processing using double-angle gradients

Similar Documents

Publication Publication Date Title
Kyprianidis et al. Image abstraction by structure adaptive filtering.
US9117277B2 (en) Determining a depth map from images of a scene
Wei et al. Contrast-guided image interpolation
US9832456B2 (en) Multiscale depth estimation using depth from defocus
US9053542B2 (en) Image resampling by frequency unwrapping
US9836855B2 (en) Determining a depth map from images of a scene
US9589319B2 (en) Method, system and apparatus for forming a high resolution depth map
Coleman et al. Edge detecting for range data using laplacian operators
US20160350893A1 (en) Systems and methods for registration of images
Muhlich et al. Design and implementation of multisteerable matched filters
AU2014250719A1 (en) Image processing method, system and apparatus
Nehab et al. A fresh look at generalized sampling
US8208756B2 (en) Alpha-masked RST image registration
Magnier An objective evaluation of edge detection methods based on oriented half kernels
AU2009251208A1 (en) Estimation of image feature orientation
AU2005209703A1 (en) Grid orientation, scale, translation and modulation estimation
Skarbnik et al. The importance of phase in image processing
Shao et al. Edge-and-corner preserving regularization for image interpolation and reconstruction
Aberkane et al. Edge detection from Bayer color filter array image
US8483505B2 (en) Rendering piece-wise smooth image from colour values along paths
AU2011265340A1 (en) Method, apparatus and system for determining motion of one or more pixels in an image
Guanlei et al. Unified framework for multi‐scale decomposition and applications
AU2009251146A1 (en) Aliasing removal using local aliasing estimation
AU2008211991A1 (en) Frequency estimation under affine distortion
Kannan et al. Medical Image Demosaicing Based Design of Newton Gregory Interpolation Algorithm

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application