WO2014005783A1 - A method and system for correcting a distorted input image - Google Patents

A method and system for correcting a distorted input image

Info

Publication number: WO2014005783A1 (PCT/EP2013/061611)
Authority: WIPO (PCT)
Prior art keywords: tile, image, input image, distorted, distortion
Application number: PCT/EP2013/061611
Other languages: French (fr)
Inventors: Piotr Stec, Alexei Pososin, Mihai Munteanu, Corneliu Zaharia
Original Assignee: DigitalOptics Corporation Europe Limited
Application filed by DigitalOptics Corporation Europe Limited
Priority to EP13728365.1A, granted as EP2870585B1
Publication of WO2014005783A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/80
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/80: Camera processing pipelines; Components thereof
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20021: Dividing image into blocks, subimages or windows
    • G06T 2207/20172: Image enhancement details


Abstract

A method for correcting a distorted input image comprises determining a local region of an image to be displayed and dividing said region into an array of rectangular tiles, each tile corresponding to a distorted tile with a non-rectangular boundary within said input image. For each tile of the local region, maximum and minimum memory address locations of successive rows of said input image sufficient to span said boundary of said distorted tile are determined. Successive rows of the distorted input image between said maximum and minimum addresses are read. Distortion of the non-rectangular portion of said distorted input image is corrected to provide a tile of a corrected output image which is stored.

Description

A method and system for correcting a distorted input image
Field
This invention relates to a method and system for correcting a distorted input image.
Background
Figure 1 shows an exemplary Wide Field of View (WFOV) lens system. Such lens systems nominally have a hemispherical field of view mapped to a planar image sensor, for example as shown in Figure 2.
It will be appreciated that this mapping results in variations in acquired image distortion and resolution across the field of view. It can be desirable to correct for this distortion so that, for example, features such as faces, especially those located towards the periphery of the field of view, do not appear distorted when displayed.
Separately, it is appreciated that such WFOV systems especially introduce heavy and in some cases non-uniform distortion patterns across the field of view so that acquired images (or indeed different colour planes of an acquired image) do not uniformly conform to the ideal mapping shown in Figure 2. Thus, again it can be desirable to correct for this distortion.
US 5,508,734 discloses a WFOV lens assembly designed to optimize the peripheral regions of the field of view to provide improved resolution matching between the peripheral region relative to a central region, the peripheral region tending to have a lower resolution than the central region.
Referring to Figure 3, applications such as PCT/EP2011/052970 (Our Ref: P100750pc00 / FN-353-PCT) and US Application No. 13/077,891 (Ref: FN-369A-US) disclose digital image acquisition devices including WFOV lens systems. Here, distorted WFOV images are read from a sensor via an imaging pipeline which can carry out simple pre-processing of an image, before being read across a system bus into system memory. Such systems can employ hardware modules or sub-modules, also connected directly or indirectly to the system bus, for reading successive images stored in system memory from the bus and for processing each image before either returning the processed image to system memory or forwarding it for further processing. In Figure 3, for example, a WFOV correction module successively reads distorted images or image portions and provides corrected images or image portions to a face detection (FD) and tracking module. A system controller controls the various hardware modules, the system controller being responsive to, for example, commands received through a control interface from software applications running on the device with which a user interacts. In Figure 3, a zoom and pan module is connected to the controller and this in turn communicates with the WFOV correction module to determine which part of an acquired image needs to be read from system memory for correction and, for example, display on the device viewfinder (not shown) and/or forwarding to the face detection module. In this case, a mixer module, for example, superimposes boundaries around faces which have been detected/tracked for display on the device viewfinder.
US 2010/0111440 (Chai) discloses a distortion correction module which partitions coordinate points in a selected output image into tiles. The output image is an undistorted rendition of a subset of the lens-distorted image. Coordinate points on a border of the tiles in the output image are selected. For each tile, coordinate points in the lens-distorted image corresponding to each selected coordinate point in the output image are calculated. In addition, for each tile, a bounding box on the lens-distorted image is selected. The bounding box includes the calculated coordinates in the lens-distorted image. The bounding boxes are expanded so that they encompass all coordinate points in the lens-distorted image that map to all coordinate points in their respective corresponding tiles. Output pixel values are generated for each tile from pixel values in their corresponding expanded bounding boxes.
In modern high definition image acquisition devices, enormous amounts of information are received and transmitted across the system bus at high frame acquisition speeds. This places pressure on the many processing modules connected to the system bus, such as the correction modules of Figure 3 and Chai: each must keep its demands on the bus within an allocated budget so as not to interfere with other processing, and each must be implemented with a minimal hardware footprint so as to minimize device production costs. Part of any correction module footprint is cache memory. On the one hand it is desirable to minimize cache size to minimize device cost; on the other hand, it is desirable to minimize I/O access by hardware modules across the system bus. So, for example, where multiple forms of distortion are to be corrected, it would not be possible or acceptable to successively read from, correct and write back to memory an image for each form of distortion to be corrected.
Separately, it will be appreciated that WFOV lens systems, as well as being incorporated into hand-held digital image acquisition devices, can be included in devices with various specialist applications, for example, fixed security cameras. In some cases, for example, an overhead camera mounted towards the centre of a ceiling in a room might have a lens system which primarily emphasizes the circumferential field of view of the room and acquires relatively little detail in the region immediately below the camera. When a person walks across such a room they move closer to the camera, but the angle of incidence of their face to the camera means the camera view of their face becomes less frontal, possibly making it more difficult for the camera to track and/or recognise the person's face. In a case such as this, as well as correcting for the distortion introduced by the non-linear mapping of the circumferential view of the room onto the planar surface of the acquisition system imaging sensor, it may be desirable to adjust either the sensor or the lens angle to improve the view of a target person (clearly involving some loss of resolution in other regions of the field of view). Depending on the nature of the lens assembly, it may be preferable to tilt the lens rather than the sensor. However, if the lens is a large optical assembly, for example, for providing long-range optical quality for security applications, then it could also be desirable to tilt the image sensor assembly, as indicated by the arrows of Figure 1, to optimize the view of a person's face as they approach the camera. This tilting of the sensor introduces additional distortion into the image over that of the non-linear optical structure of the lens.
It will also be appreciated that as a person approaches the camera, their face will become elongated towards the chin and bulbous towards the top of the head. It may thus be desirable to counter this non-linear distortion of the person's face.
From the foregoing, it is clear that several different distortions occur as a person walks across the field of view (FOV) towards the lens assembly: (i) a non-linear lens distortion which can be a function of the location within the FOV of the lens; (ii) distortion due to possible relative movement of the lens and sensor surfaces; and (iii) distortion effects in local areas such as faces which vary according to both the vertical and horizontal distance from the camera unit. Other distortions include "rolling shutter" distortion, again caused by movement within the field of view while an image is being read from a sensor; without correcting for this distortion, portions of an image can appear wrongly shifted relative to others. In other applications, it may be desirable to flip an acquired image before it is displayed, and again this can be considered a form of distortion which needs to be corrected.
It is an object of the present invention to provide an improved correction module for a digital image acquisition device addressing the above problems.
Summary
According to the present invention there is provided a method as claimed in claim 1. Embodiments of the present invention obtain a locally uniform image frame by dynamically adjusting a mapping between rectangular grid regions within a desired view, to be presented on a display or otherwise stored for viewing, and the actual sensor surface. This mapping can change from frame to frame, and indeed within a frame, and is driven both by the position of a moving target relative to the image acquisition device and by user interaction with a camera application, for example, determining the size of a region of interest (ROI) within a field of view, i.e. zooming in on a field of view.
Embodiments of the invention provide a distortion adjusting engine which copes with multiple sources of distortion and which can dynamically adjust the overall mapping of pixels from the sensor surface to generate the final rectilinear grid of display pixels on an output display or for storing or compressing into a conventional video format.
Embodiments of the invention are particularly useful in security monitoring or for monitoring of stay-at-home elderly persons. According to a second aspect of the present invention there is provided a method as claimed in claim 17.
Brief Description of the Drawings
An embodiment of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Figure 1 shows schematically a prior art WFOV lens system;
Figure 2 shows a notional distortion pattern introduced by a prior art WFOV lens system;
Figure 3 shows schematically a digital image acquisition device for acquiring and processing a succession of images;
Figure 4 shows schematically a geometrical distortion engine (GDE) according to an embodiment of the present invention;
Figure 5 shows data flow within the GDE of Figure 4;
Figure 6 shows the structure of the Grid Formatter Unit (GFU) of Figure 4;
Figure 7(a) shows tile transformation;
Figure 7(b) shows an example of a tile transformed according to an embodiment of the present invention;
Figure 7(c) shows an example of Bresenham's line algorithm for determining pixels lying along the border of a tile;
Figure 7(d) shows how a tile border produced using the algorithm of Figure 7(c) is extended; and
Figure 8 shows the structure of a Geometrical Distortion Core (GDC) of Figure 4.
Description of the Preferred Embodiment
Referring now to Figure 4, the basic structure of an engine for handling geometrical distortion within images in a digital image acquisition device according to an embodiment of the present invention is shown. As will be explained in detail below, the Geometrical Distortion Engine (GDE) is capable of effectively removing distortions introduced by, for example, a WFOV lens system, but also of compensating for distortion caused by, for example, camera shake, and of correcting distortion introduced by a device user through interaction with an application running on or in communication with the acquisition device. Such user-defined distortion can require an affine transformation, colour transformation or image morphing to apply particular effects to the image, and indeed to the sequence of images, being acquired by the device.
In the embodiment, distortion processing on each color plane of an image, for example RGB, YUV or LAB, is independent of the others, and so within the GDE a single geometrical distortion core (GDC) processes only one color plane, providing greater flexibility at the system level. A single GDC can process each color plane sequentially, or more instances of the GDC (such as shown in Figure 4) can process all planes of an image at the same time. In the present specification, the term grid is used to refer to an array of tiles. Each tile is defined by its four corners and these are referred to as nodes. A transformation maps the coordinates of nodes within a grid according to a given distortion to be corrected. The GDC processes an input image plane tile by tile under the control of a Grid Formatter Unit (GFU). The GDC fetches input tiles (tile_in) from the DRAM according to the addresses provided by the GFU and processes them, producing the corrected pixels for respective output tiles (gdc_out) in normal raster order.
Typically, in prior art systems such as Chai, information for each distorted tile of the input image is read in rectangular blocks from DRAM, each rectangular block bounding a distorted tile. However, as can be appreciated, for a heavily distorted input image tile this can mean that quite a lot of information is read from DRAM across the system bus and is then not used in mapping the distorted input image tile (tile_in) to the output image tile (gdc_out).
In embodiments of the present invention, only the information required for correcting a given distorted tile of the input image is read from memory into a tile cache (Figure 8) of the GDC. Thus, the four nodes defined for each tile to be read from DRAM do not define a rectangle - they define a polygon which in turn is used to determine the image information read from DRAM for a tile.
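The read pattern this implies can be illustrated with a short sketch. The following C fragment is illustrative only; the structure and names (tile_span_t, load_tile_strips, the fixed row limit) are assumptions, not taken from the patent. It shows a tile cache being filled by copying, for each input row the distorted tile touches, only the span between that row's minimum and maximum needed columns, rather than a full rectangle bounding the tile as in Chai.

    #include <stdint.h>
    #include <string.h>

    #define MAX_TILE_ROWS 64

    typedef struct {
        int y_start;               /* first input-image row touched by the tile */
        int rows;                  /* number of input rows the tile spans       */
        int min_x[MAX_TILE_ROWS];  /* leftmost column needed, per row           */
        int max_x[MAX_TILE_ROWS];  /* rightmost column needed, per row          */
    } tile_span_t;

    /* Copy only the [min_x, max_x] strip of each row into the tile cache,
       instead of a full rectangle bounding the distorted tile. */
    static void load_tile_strips(uint8_t *cache, const uint8_t *dram,
                                 int image_stride, const tile_span_t *span)
    {
        for (int r = 0; r < span->rows; r++) {
            int y   = span->y_start + r;
            int x0  = span->min_x[r];
            int len = span->max_x[r] - x0 + 1;
            memcpy(cache, dram + (size_t)y * image_stride + x0, (size_t)len);
            cache += len;
        }
    }

For a heavily sheared or rotated tile, these per-row strips can be far smaller than the enclosing rectangle, which is the system-bus saving the preceding paragraphs describe.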
In embodiments of the invention, the distortion function applied by the GDC to each tile is not governed solely by the need to correct for WFOV lens system distortion, but also by other distortion effects which can include camera shake, user-defined distortion and lens-sensor misalignment (sensor tilt).
As will be described in more detail in relation to Figure 6, the GFU combines local grid information taken from DRAM, an affine transformation matrix and global grid information, and produces the Low Level Distortion Descriptors (LLDD) for each tile of the grid from within a sensed image which is to be processed by the or each GDC. These descriptors are employed by the or each GDC to read the correct image tile information from memory and to correct the image tile.
In the present description, the Local Grid relates to the area of interest within the field of view where the image is to be corrected, for example, for subsequent display. So, for example, if in an image stream a face detector (FD) such as shown in Figure 3 detects a face region within a field of view, an application fed from the FD could decide that a (rectangular) region bounding this face represents a region of interest. The coordinates defining this region are then written to a "Local Grid" region of DRAM and this region will be processed by the GDE. Thus, in this embodiment, for any given frame at least one Local Grid must be stored in memory defining a region of the field of view which is to be corrected. As a face moves across the field of view of the camera, the Local Grid can be shifted from frame to frame. Equally, if more than one face is detected, more than one Local Grid will be stored in memory and each Local Grid will be processed by the GDE in turn (as such, the processing of each Local Grid can be treated independently).
The corrected grids of the complete image could, for example, be displayed superimposed over the remaining portions of the image, so that faces which are detected at the extreme edges of the field of view of a WFOV lens system can be displayed undistorted.
The Affine Transformation enables the GDC to correct, for example, for movement from frame to frame, or to compensate for changes or misalignment between lens and image sensor (Global Affine); or, for example, for distortion caused by rolling shutter (Local Affine). Thus, in the case of a local affine transformation, the mapping of node locations from one portion of the Local Grid of an input image to the output image could be different from the mapping from another portion of the Local Grid; this is implemented by specifying sequences of nodes for which given transformations apply, as will be explained in more detail below.
The Global Transformation is in general fixed for a given lens. For a typical WFOV lens, the transformation takes into account the deviation caused by a given lens away from a nominal mapping of field of view to an image sensor such as shown in Figure 2. For a zoom lens, the Global Transformation is fixed for a given focal length; for a lens such as is sometimes used in a security camera, where the image sensor can rotate relative to the lens system, the Global Transformation is fixed for a given angle of the image sensor to the lens system. This mapping is therefore stored within the GFU, where it is only rarely updated; except in the case of a zoom lens, it is at least unlikely to be updated on a real-time frame-by-frame basis as an image stream is captured.
Referring back to Figure 4, an output formatter takes corrected image tiles (of_in) and writes these into Output Images in bursts back to the DRAM memory.
Extra "Out Tile" processing blocks can be inserted between the GDC and the output formatter. In embodiments, the output format of each GDC is in a standard frame format so each tile output by the GDC can be treated as a separate image, meaning that any "Out Tile" processing block that has a frame interface input/output can be inserted between the GDC and output formatter. The extra processing blocks can be any blocks that process a pixel deep image stream, for example gamma correction, colour enhancement or high dynamic range processing. They can also be blocks where a second image source is needed, for example, for alpha blending.
Referring now to Figure 5, which shows the operation of the GDE for a given image plane in more detail:
0 The CPU programs the GFU and the other blocks.
1 When the GDE block is enabled, the configuration from the CPU is copied into internal shadow registers via the cfg interface. The main purpose of the shadow registers bank is to provide constant configuration inputs to the internal GDE blocks during processing of a given image frame while allowing the CPU to prepare the configuration for the next image frame. As such the contents of the shadow registers are in general stable for the whole processing time of a frame.
2 Referring to Figure 6, the GFU fetches a local grid header from the DRAM. The header contains information about the local grid location in the output image space, the grid dimension expressed in number of nodes, and the single cell (tile) size. The output image grid is rectangular, so this information is enough to describe the whole local grid in the output image space. The header is followed by the nodes containing coordinates in the input image space; variations in those coordinates describe the distortion associated with the local grid. So, referring to the example of Figure 7(a), the local grid transformation (Lt) defines, for each node of the Local Grid, the change in node coordinate information. In the example of Figure 7(a), the local grid comprising nodes N1...Nn undergoes a flip and perspective transformation, and so could compensate for varying rotation of the image sensor relative to the lens assembly such as illustrated in Figure 1, as well as simulate or compensate for a mirror view. So we can see that node N1 moves from the top left to the bottom right and vice versa with node Nn, and that the transformed top of the local grid becomes more compressed than the bottom. The effect of the local grid transformation on the geometry and location of a specific tile is also illustrated. Also, the local grid transformation can compensate for varying distortion caused by changes in perspective for different local regions within an input image - particularly in the case of WFOV systems. Thus, the local grid can help to compensate for the greater degree of distortion found in faces at the edge of a wide field of view vis-à-vis those located (or detected) at the centre of the field of view.
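A possible C layout for a local grid as just described, with a header followed by the node array, might look as follows; all field names and widths are assumptions for illustration, not taken from the patent.

    #include <stdint.h>

    typedef struct {
        uint16_t out_x, out_y;     /* local grid origin in the output image */
        uint16_t nodes_w, nodes_h; /* grid dimension, expressed in nodes    */
        uint16_t tile_w, tile_h;   /* single cell (tile) size, in pixels    */
    } local_grid_header_t;

    typedef struct {
        float u, v;                /* node coordinates in input image space */
    } local_grid_node_t;           /* variations encode the local distortion */

    /* In DRAM, the header is followed by nodes_w * nodes_h node records; the
       rectangular output-space grid needs no stored per-node coordinates. */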
Values from the local grid header are used by L Grid Calc to set up registers responsible for accessing the local grid information from DRAM. After this, the GFU starts to read local grid node coefficients from DRAM one by one. The transformed coordinates for the grid nodes are then passed to an Affine block (if enabled). In the embodiment, the Affine block multiplies input node coordinates (u,v) by a 2x3 matrix comprising coefficients a1...a6 of the Affine Transformation (At) in order to produce output coordinate values (u',v'):
u' = a1·u + a2·v + a3
v' = a4·u + a5·v + a6
The values of those matrix coefficients a1...a6 are stored in registers internal to the GFU. These internal GFU registers holding coefficient values can be programmed in two ways: in a first mode, the Global Affine Transform mentioned above, they are programmed by the CPU before the start of frame processing and their values are kept constant for all local grids of a whole frame; in the second mode, the Local Affine Transform, values of the shadow registers are read from DRAM together with a node index that indicates when a new set of affine transformation coefficients must be loaded. For example, if a first set of node coefficients is loaded together with an index 100, this transform is applied to nodes 0 to 99, and before node 100 is processed a new set of transformation coefficients is loaded from DRAM and applied to the subsequent nodes until the next change is indicated. As mentioned above, the second mode allows for dynamic transformation updates and correction, for example, of rolling shutter distortions together with camera shake compensation. Thus, it will be seen that in this example the Affine Transformation comprises a formulaic transformation of node coordinate locations from the local transformation (Lt). In the example shown in Figure 7(a), the affine transformation (At) comprises a global affine transformation rotating the entire grid, and so could compensate for a rotational misalignment between lens and image sensor in a plane parallel to the plane of the image sensor.
The coordinates produced by the Affine block of Figure 6 are then passed to the global correction block, G Grid Calc. This block applies warping distortion to the input local and/or affine transformed node coordinates, with the distortion defined by means of a regular grid of nodes (G in Figure 7(a)): the nodes are distributed regularly in the input coordinate space and the values stored in the nodes point to locations in the sensor image space. This provides a mapping from the regular to the distorted coordinate system, with intermediate coordinate values (not belonging to the grid nodes) obtained using bi-cubic interpolation. The values of the coordinates of the global grid nodes are stored in the internal registers and can be updated after the end of each frame to allow for dynamic changes to the lens distortion correction, as required in the case of zoom lenses where the distortion changes shape with changes of focal length. The final node coordinate values from the global transformation are passed to the LLDD Calculator input queue.
So, again referring to the example of Figure 7(a), the global transformation (Gt) comprises a mapping of node coordinates Gn1 to Gnn of a Global Grid, for example to take into account lens distortion. For a given node coordinate after the Affine Transformation (At) and/or Local Grid transformation (Lt), G Grid Calc finds the nodes of the Global Grid surrounding that location and interpolates the mapping of those nodes of the Global Grid tile to the local and/or affine transformed node of the local grid to determine the Global Transform (Gt) of that node location. In the example of Figure 7(a), for nodes within the tile Tg of the global grid, the mapping of the coordinates of the tile Tg to the transformed tile Tg' is interpolated and applied to the local and/or affine transformed node coefficients to finalise the transformation of the original node coordinates. Thus, for any node N, the complete transformation becomes Gt(At(Lt(N))).
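A simplified sketch of the G Grid Calc lookup is given below. For brevity it uses bilinear rather than the bi-cubic/spline interpolation described above, and the data layout (global_grid_t, a regularly spaced node array) is an assumption.

    typedef struct { float u, v; } pt_t;

    typedef struct {
        int    w, h;     /* global grid size, in nodes                 */
        float  spacing;  /* regular node spacing in input coordinates  */
        pt_t  *map;      /* w*h node values: locations on the sensor   */
    } global_grid_t;

    /* Interpolate the sensor-space mapping of point p from the global
       grid cell containing it. */
    static pt_t gt_apply(const global_grid_t *g, pt_t p)
    {
        int cx = (int)(p.u / g->spacing);
        int cy = (int)(p.v / g->spacing);
        if (cx > g->w - 2) cx = g->w - 2;  /* clamp to the last full cell */
        if (cy > g->h - 2) cy = g->h - 2;
        float fx = p.u / g->spacing - cx;
        float fy = p.v / g->spacing - cy;
        pt_t n00 = g->map[cy * g->w + cx],       n10 = g->map[cy * g->w + cx + 1];
        pt_t n01 = g->map[(cy + 1) * g->w + cx], n11 = g->map[(cy + 1) * g->w + cx + 1];
        pt_t r;
        r.u = (1 - fy) * ((1 - fx) * n00.u + fx * n10.u)
            + fy       * ((1 - fx) * n01.u + fx * n11.u);
        r.v = (1 - fy) * ((1 - fx) * n00.v + fx * n10.v)
            + fy       * ((1 - fx) * n01.v + fx * n11.v);
        return r;
    }

For any node N, the complete chain then corresponds to gt_apply applied to the affine-transformed, locally transformed node, i.e. Gt(At(Lt(N))).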
When the LLDD Calculator input queue contains a number of nodes equal to the grid width + 2 (a full tile), it uses them to prepare an LLDD descriptor that contains a full definition of the input tile. The definition contains the location of the tile on the sensor image and the partial differences that will be used by an address calculator (Figure 8) to calculate the locations of all the pixels belonging to this particular tile. The complete LLDD descriptors are loaded into the LLDD FIFO (lldd_out).
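One plausible shape for an LLDD entry, following the fields the description names (the tile's location on the sensor image plus partial differences), is sketched below; the exact field set and names are assumptions.

    typedef struct {
        float u1, v1;         /* input-space coordinates of corner 1        */
        float dudx, dvdx;     /* change in (u,v) per output-pixel step in x */
        float dudy, dvdy;     /* change in (u,v) per output-pixel step in y */
        int   out_x, out_y;   /* tile position in the output image          */
        int   tile_w, tile_h; /* output tile size, in pixels                */
    } lldd_t;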
3 Referring back to Figure 5, the GFU fills the LLDD FIFO of the GDC with descriptor data for each tile to be processed.
4 The GDC fetches the input image tile by tile, with a new tile for every LLDD FIFO entry.
5 The GDC processes each tile and outputs the corrected tiles in frame interface format. A backpressure signal path from the output formatter to the GFU enables the GFU to stall the GDC if the output formatter is full.
6 Optional processing algorithms can be applied on the GDC corrected tiles.
7 The output formatter writes the corrected tiles (tile_out) of the output image into the memory.
8 When processing of a given Local Grid tile is completed and when the frame is completed, the output formatter signals this using an interrupt signal provided through a status interface (sts) to an interrupt controller.
9 If the GDE is still enabled when the frame is completed (EOF), the shadow register values are updated for the next frame.
Referring to Figure 6, as indicated above, the GFU combines the basic local grid information obtained by the L Grid Calc block from DRAM, the affine transformation matrix and the global distortion grid information stored within the block G Grid and obtained from the Shadow Registers, and generates the LLDD descriptor information for each tile.
When the GFU is enabled, the L Grid Calc block starts reading the local distortion grid (defining Lt in Figure 7(a)) from the Local Grid information in DRAM. There must be at least one local grid for a frame - otherwise whatever application is running in the acquisition device has determined that there is no particular region of interest where distortion correction is required, for example, no faces have been detected or are being tracked. Multiple local grids are stored one after another in the memory, each local grid comprising a header followed by the nodes of the grid, which contain point coordinates in the distorted image space. The Affine block applies a user-defined affine transformation (At in Figure 7(a)) to the (u,v) coordinates produced by the L Grid Calc block. As described at step 2 above, the Affine block has two operating modes: a) the affine coefficients are taken from the internal GFU registers corresponding to the shadow registers, meaning that they are constant throughout the frame; or b) the coefficients are taken from the internal GFU registers and can change during the frame. The G Grid calculation block calculates the final distorted grid by performing spline interpolation based on the global grid points (Gn in Figure 7(a)) obtained from G Grid.
When reading the last node of the last tile of the current local grid, L Grid Calc asserts an End of Grid (EOG) flag. The grid coordinates in input space (u,v) and output space (x,y) together with the EOG flag are sent to the next block in the pipe - in this case Affine. The next blocks in the pipe (Affine, Global Grid calculation) use the same interface, meaning that the Affine or the Global Grid Calculator blocks can be swapped or removed from the pipeline. The (u,v) coordinate is processed by the Affine and Global Grid calculator - other image fields in the header are passed down the pipeline unchanged.
The final distortion descriptors for each tile of the grid are calculated by an LLDD Calculator. The LLDD Calculator block combines the header information provided on an lghead interface with the descriptor fields and sends them on the lldd_out interface. The L Grid Calc block does not start processing a new grid until the LLDD Calculator block signals with an EOG signal that the last tile of the current grid has been processed. This ensures that the signals on the lghead interface are constant for all tiles of a local grid.
Figure 7(b) shows a tile in the output (right) and the input (left) image space. For exemplary purposes, the tile contains 4x4 pixels. The LLDD Calculator gets the coordinates of the four corners (u1,v1) to (u4,v4) and calculates the partial differences (dudx, dvdx, etc.) needed by an addresser within the GDC for the linear interpolation of each pixel's (u,v) coordinates. As indicated above, knowing the various transformations required to compensate for camera, movement and user-determined distortion, the LLDD Calculator can determine the required area of input image space, defined by (u1,v1)...(u4,v4), corresponding to the bounded output image space defined by nodes 1, 2, 3, 4.
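A sketch of how an addresser might consume those partial differences is shown below, reusing the hypothetical lldd_t fields from the earlier sketch; resample_at stands in for the hand-off to the resampler and is likewise an assumption.

    /* Hypothetical hook: hands one interpolated (u,v) to the resampler. */
    extern void resample_at(float u, float v);

    /* Walk the output tile in raster order, linearly interpolating each
       pixel's input-space (u,v) from corner 1 and the partial differences. */
    static void addresser_walk(const lldd_t *d)
    {
        for (int y = 0; y < d->tile_h; y++) {
            float u = d->u1 + y * d->dudy;  /* (u,v) at the start of this row */
            float v = d->v1 + y * d->dvdy;
            for (int x = 0; x < d->tile_w; x++) {
                /* (u,v) selects the 4x4 window and the sub-pixel (dx,dy)
                   phase handed to the resampler for this output pixel. */
                resample_at(u, v);
                u += d->dudx;
                v += d->dvdx;
            }
        }
    }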
However, when interpolating input image data to calculate output image values, data for points outside the boundary defined by the vertices (u1,v1)...(u4,v4) can be required.
The LLDD Calculator could therefore be used to determine the memory addresses corresponding to the tile border and to extend the memory addresses around this border for each tile using, for example, a variation of Bresenham's line algorithm. Figure 7(c) shows an example of the steps performed by such an LLDD Calculator module. Here, the module takes the first and last point of each edge (u1,v1 and u2,v2; u2,v2 and u4,v4, etc.) and computes, one by one, the coordinates of the pixels located on the line described by those two points. Each edge's (x,y) coordinates are analyzed, and the minimum and maximum x coordinates of each line in DRAM from which tile information is to be read by the GDC are stored in respective memories, Max and Min. The y coordinate represents the memory address. After an edge tracer within the LLDD Calculator finishes all four edges of a tile, it sends a ready indication to a tile border extender module within the LLDD Calculator. This extender module extends the start/end coordinates produced by the edge tracer. The extension is needed because a 4x4 pixel area is required around each pixel, so the coordinates computed by the edge tracer must be changed to include all the pixels needed. The extender module reads the two memories, Max and Min, and determines the final start/end coordinates of the pixels of each line of the tile, as shown in Figure 7(d).
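The edge tracing and border extension could be sketched in C as follows; the Bresenham stepping is the standard integer form, while the array sizing, initialisation and the two-pixel margin (enough for a 4x4 window around a border pixel) are assumptions for illustration.

    #include <limits.h>
    #include <stdlib.h>

    #define MAX_ROWS 128

    static int min_x[MAX_ROWS], max_x[MAX_ROWS]; /* the Min and Max memories */

    static void reset_bounds(int rows)
    {
        for (int y = 0; y < rows; y++) { min_x[y] = INT_MAX; max_x[y] = INT_MIN; }
    }

    /* Standard integer Bresenham step along one tile edge, recording the
       per-row horizontal extent; y indexes the row (its line in DRAM). */
    static void trace_edge(int x0, int y0, int x1, int y1)
    {
        int dx = abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;
        for (;;) {
            if (x0 < min_x[y0]) min_x[y0] = x0;
            if (x0 > max_x[y0]) max_x[y0] = x0;
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }
            if (e2 <= dx) { err += dx; y0 += sy; }
        }
    }

    /* Widen the traced bounds so the 4x4 resampling window around any
       border pixel stays inside the data read from DRAM. */
    static void extend_borders(int rows)
    {
        for (int y = 0; y < rows; y++) { min_x[y] -= 2; max_x[y] += 2; }
    }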
Thus, the above LLDD Calculator takes the transformed node coordinates for a tile provided by G Grid Calc (or indeed any of the previous transformations) and provides the non-rectangular strips of memory addresses, running from Min to Max on each line of the input image, for a tile to be read from memory by the GDC when correcting the input image.
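The following hypothetical Python sketch illustrates the trace-and-extend idea: a simple DDA line walk (standing in for the Bresenham-style tracer) records the Min and Max x touched on each input line, after which every span is widened so that the 4x4 resampling window around border pixels is covered; the two-pixel margin is an assumption.

```python
def trace_tile_extents(corners, margin=2):
    # Walk the four tile edges (corner adjacency assumed as in Figure 7(b):
    # 1-2, 2-4, 4-3, 3-1) and record per-line Min/Max x, then extend.
    lo, hi = {}, {}
    for i, j in [(0, 1), (1, 3), (3, 2), (2, 0)]:
        (ua, va), (ub, vb) = corners[i], corners[j]
        steps = max(abs(round(ub) - round(ua)), abs(round(vb) - round(va)), 1)
        for s in range(steps + 1):
            t = s / steps
            x, y = round(ua + t * (ub - ua)), round(va + t * (vb - va))
            lo[y] = min(lo.get(y, x), x)
            hi[y] = max(hi.get(y, x), x)
    # The extender widens each line's span for the 4x4 interpolation window.
    return {y: (lo[y] - margin, hi[y] + margin) for y in sorted(lo)}
```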
In an alternative implementation, rather than providing the actual memory addresses to be read by the GDC, the LLDD Calculator simply provides the tile descriptor information illustrated in Figure 7(b), and the tracing/extending functionality described above for converting this descriptor information to memory addresses is implemented within the GDC, as described below.
Referring to Figure 8, there is shown a block diagram of the Geometrical Distortion Core. The main sub-blocks are:
• Geometric Distortion Core (GDC) Control - the main control sub-block.
• LLDD Registers - Low Level Distortion Description registers. Each time the LLDD for a new tile is requested from the GFU, these registers are shifted. There are two such registers, as there can be data for up to three tiles in the pipe at one time.
• Tile Cache - a double-buffered cache which contains a Burst Calculation module (not shown) that calculates the burst accesses needed to fill the tile cache, and which loads the cache with data from the DRAM.
• Addresser - for each pixel in the output tile (in raster order), it calculates:
  • the coordinates of the 4x4 pixel window required from the Tile Cache;
  • the sub-pixel offsets (dx,dy) for the Resampler;
  • the color offset and gain for the Resampler output.
• Resampler - a bicubic resampler which produces an interpolated pixel value from a 4x4 pixel input.
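As a rough illustration of this final stage, the sketch below implements one common form of bicubic resampling; the Catmull-Rom kernel is an assumption, as the description only specifies a bicubic resampler operating on a 4x4 window at sub-pixel offset (dx,dy).

```python
def cubic_weights(t):
    # Catmull-Rom weights for the four taps at fractional offset t in [0, 1).
    return (-0.5 * t**3 + t**2 - 0.5 * t,
             1.5 * t**3 - 2.5 * t**2 + 1.0,
            -1.5 * t**3 + 2.0 * t**2 + 0.5 * t,
             0.5 * t**3 - 0.5 * t**2)

def bicubic_sample(window, dx, dy):
    # window: 4 rows of 4 pixel values; (dx, dy) is the sub-pixel position
    # measured from the second row/column of the window.
    wx, wy = cubic_weights(dx), cubic_weights(dy)
    rows = [sum(w * p for w, p in zip(wx, row)) for row in window]
    return sum(w * r for w, r in zip(wy, rows))
```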
Referring to the steps indicated in Figure 8, the GDC operates generally as follows.
Prepare input tile and load to Tile Cache:
1. The GDC Control block requests a new distortion descriptor LLDD for a tile. In this example, it is assumed that the LLDD Calculator in the GFU provides descriptor information as shown in Figure 7(b) and that the memory addresses are calculated within the Tile Cache;
2. Once the pipeline allows a new tile to be prepared, the Burst Calculation module within the Tile Cache starts working on the LLDD descriptor data from the GFU;
3. The Burst Calculation module calculates, one by one, the burst requests for the tile;
4. The Burst Calculation module requests the burst data from the DRAM based on the LLDD information;
5. The burst data is received from the DRAM and written to the Tile Cache.
Process Tile:
6. For each output tile pixel, the Addresser calculates the address of each 4x4 pixel block and the parameters for the Resampler;
7. The 4x4 pixel window is fetched from the Tile Cache;
8. The Resampler calculates the resampled output pixel;
9. The signals for the gdc_out interface are assembled together. These comprise:
• pixel data from the Resampler;
• frame control signals from the Addresser;
• tile control signals from the LLDD FIFO;
• output local grid information from the lghead register.
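Pulling the sketches above together, the following hypothetical Python routine mirrors the per-tile flow of Figure 8 under the same assumptions (DRAM modeled as a dict mapping row index to a list of pixel values; color offset/gain and image-boundary handling elided):

```python
def gdc_process_tile(corners, image, emit_pixel):
    # Steps 1-3: take the tile descriptor and trace its input footprint.
    extents = trace_tile_extents(corners)
    # Steps 4-5: fill the tile cache (padded so 4x4 windows stay in range).
    ys = range(min(extents) - 2, max(extents) + 3)
    cache = {y: image[y] for y in ys if y in image}
    # Step 6: the addresser walks output pixels in raster order.
    for row in tile_uv_coords(corners):
        for (u, v) in row:
            iy, ix = int(v) - 1, int(u) - 1          # 4x4 window origin
            window = [cache[iy + k][ix:ix + 4] for k in range(4)]  # step 7
            emit_pixel(bicubic_sample(window, u % 1.0, v % 1.0))   # steps 8-9
```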
It will therefore be seen from the above description that embodiments of the present invention provide an efficient mechanism for performing complex distortion compensation on an input image in a processor- and memory-efficient manner, with minimal demands on the system bus.
It will be appreciated that the illustrated embodiment is provided for exemplary purposes only and that many variations of the implementation are possible. For example, some functionality shown as being implemented in one module could be migrated to other modules.
In the illustrated embodiment, tiles have been described as rectangular and defined by four nodes. However, it will be appreciated that, although more complex, the invention could also be implemented with non-rectangular tiles defined by three or more nodes; and indeed the local grid need not be defined by a uniform array of tiles, as in certain applications these could be non-uniform. The invention is not limited to the embodiment(s) described herein but can be amended or modified without departing from the scope of the present invention.

Claims

1. A method for correcting a distorted input image comprising:
a) determining a local region of an image to be displayed and dividing said region into an array of tiles, each tile having a boundary defined by a plurality of nodes, each node having a set of coordinates within said image space;
b) for each tile of the local region, transforming said node coordinates according to a first local transformation to take into account a local distortion of said input image in said local region;
c) determining a second global transformation mapping coordinates for a global array of nodes to take into account a global distortion of said input image;
d) for each node of each tile of the local region:
i) determining nodes of the global array immediately surrounding the node;
ii) interpolating the transformation of the surrounding global node coordinates to transform the at least locally transformed coordinates of the node according to the global transformation;
e) for each tile of the local region:
i) reading a non-rectangular portion of said distorted input image corresponding to the locally and globally transformed node coordinates of said tile;
ii) correcting the distortion of said non-rectangular portion of said distorted input image to provide a tile of a corrected output image; and
f) storing said corrected tiles for said output image.
2. A method according to claim 1 wherein said tiles of said local region and said corrected tiles of said output image are rectangular and defined by four nodes.
3. A method according to claim 1 further comprising:
determining an affine transformation applicable to at least a portion of the nodes of said local region of an image; and
transforming at least said portion of the node coordinates of said local region according to said affine transformation.
4. A method according to claim 3 wherein said affine transformation is applied to said node coordinates either before or after said local transforming of step b).
5. A method according to claim 3 wherein said affine transformation comprises multiplying input node coordinates (u,v) with a matrix transformation of the form:
u' = a1*u + a2*v + a3
v' = a4*u + a5*v + a6
to produce transformed coordinate values (u',v').
6. A method according to claim 5 wherein said matrix coefficients a1...a6 are changed for successive different subsets of nodes of the local region to compensate for rolling shutter distortion.
7. A method according to claim 5 wherein said matrix coefficients a1...a6 are changed for different subsets of nodes of the local region to apply a user defined distortion to said input image.
8. A method according to claim 5 wherein said matrix coefficients a1...a6 are set to compensate for inter-frame camera shake.
9. A method according to claim 1 further comprising, for each tile of the local region, determining maximum and minimum memory address locations of successive rows of said input image sufficient to span said tile boundary, said maximum and minimum addresses for each tile of the input image corresponding to a non-rectangular region of the input image.
10. A method according to claim 9 further comprising extending the memory locations to be read from each row of said distorted input image to ensure at least four pixels immediately outside the distorted tile boundary are available for correcting the distortion of said tile.
11. A method according to claim 1 comprising acquiring said input image with a wide field of view, WFOV, lens system and wherein said global transformation is arranged to compensate for non-uniformities in said WFOV lens system.
12. A method according to claim 1 comprising acquiring said input image with a zoom lens system and responsive to changing image acquisition focal length, changing said global transformation to compensate for different non-uniformities in said lens system.
13. A method according to claim 1 comprising changing one of said global or said local transformation in response to adjusting an angle of an image acquisition sensor relative to a lens system.
14. A method according to claim 13 wherein said angle comprises a rotation angle about an axis normal to a plane of the image sensor.
15. A method according to claim 13 wherein said angle comprises a rotation angle about an axis parallel to a plane of the image sensor.
16. A method according to claim 1 wherein said local transformation is arranged to compensate for changes in perspective caused by a lens system for different local regions of said input image.
17. A method for correcting a distorted input image comprising:
a) determining a local region of an image to be displayed and dividing said region into an array of rectangular tiles, each tile corresponding to a distorted tile with a non-rectangular boundary within said input image;
b) for each tile of the local region:
i) determining maximum and minimum memory address locations of successive rows of said input image sufficient to span said boundary of said distorted tile;
ii) reading successive rows of said distorted input image from between said maximum and minimum addresses; and
iii) correcting the distortion of said non-rectangular portion of said distorted input image to provide a tile of a corrected output image; and
f) storing said corrected tiles for said output image.
18. A method according to claim 17 further comprising extending the memory locations to be read from each row of said distorted input image to ensure at least four pixels immediately outside the distorted tile boundary are available for correcting the distortion of said tile.
19. A method according to claim 17 further comprising extending the memory locations to be read from each row of said distorted input image sufficiently to enable interpolation of all input image pixels within and including the distorted tile boundary.
20. An image acquisition system comprising:
a memory for storing a distorted input image acquired from an image sensor and a lens system and a corrected output image;
a system bus connected to said memory; and
a distortion correction module connected to said system bus for reading distorted input image information from said memory and writing corrected output image information to said memory, said module being arranged to perform the steps of claim 1.
PCT/EP2013/061611 2012-07-03 2013-06-05 A method and system for correcting a distorted input image WO2014005783A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP13728365.1A EP2870585B1 (en) 2012-07-03 2013-06-05 A method and system for correcting a distorted image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/541,650 US8928730B2 (en) 2012-07-03 2012-07-03 Method and system for correcting a distorted input image
US13/541,650 2012-07-03

Publications (1)

Publication Number Publication Date
WO2014005783A1 true WO2014005783A1 (en) 2014-01-09

Family

ID=48613598

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2013/061611 WO2014005783A1 (en) 2012-07-03 2013-06-05 A method and system for correcting a distorted input image

Country Status (3)

Country Link
US (2) US8928730B2 (en)
EP (1) EP2870585B1 (en)
WO (1) WO2014005783A1 (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2983136A2 (en) 2013-03-18 2016-02-10 FotoNation Limited A method and apparatus for motion estimation
EP3101622A2 (en) 2015-06-02 2016-12-07 FotoNation Limited An image acquisition system
WO2017032468A1 (en) 2015-08-26 2017-03-02 Fotonation Limited Image processing apparatus
US9665799B1 (en) 2016-01-29 2017-05-30 Fotonation Limited Convolutional neural network
WO2017129325A1 (en) 2016-01-29 2017-08-03 Fotonation Limited A convolutional neural network
US9743001B1 (en) 2016-02-19 2017-08-22 Fotonation Limited Method of stabilizing a sequence of images
WO2017140566A1 (en) 2016-02-19 2017-08-24 Fotonation Limited A method for correcting an acquired image
WO2017140438A1 (en) 2016-02-19 2017-08-24 Fotonation Limited A method of stabilizing a sequence of images
EP3379202A1 (en) 2017-03-24 2018-09-26 FotoNation Limited A method for determining bias in an inertial measurement unit of an image acquisition device
US10148943B2 (en) 2016-08-08 2018-12-04 Fotonation Limited Image acquisition device and method based on a sharpness measure and an image acquistion parameter
US10334152B2 (en) 2016-08-08 2019-06-25 Fotonation Limited Image acquisition device and method for determining a focus position based on sharpness
US10462370B2 (en) 2017-10-03 2019-10-29 Google Llc Video stabilization
US10497089B2 (en) 2016-01-29 2019-12-03 Fotonation Limited Convolutional neural network
EP3602480A4 (en) * 2017-08-22 2020-03-04 Samsung Electronics Co., Ltd. Electronic devices for and methods of implementing memory transfers for image warping in an electronic device
US10853909B2 (en) 2018-11-05 2020-12-01 Fotonation Limited Image processing apparatus
EP3796639A1 (en) 2019-09-19 2021-03-24 FotoNation Limited A method for stabilizing a camera frame of a video sequence
US11190689B1 (en) 2020-07-29 2021-11-30 Google Llc Multi-camera video stabilization
US11227146B2 (en) 2018-05-04 2022-01-18 Google Llc Stabilizing video by accounting for a location of a feature in a stabilized view of a frame
US11477382B2 (en) 2016-02-19 2022-10-18 Fotonation Limited Method of stabilizing a sequence of images

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9294667B2 (en) 2012-03-10 2016-03-22 Digitaloptics Corporation MEMS auto focus miniature camera module with fixed and movable lens groups
US9280810B2 (en) * 2012-07-03 2016-03-08 Fotonation Limited Method and system for correcting a distorted input image
US8928730B2 (en) * 2012-07-03 2015-01-06 DigitalOptics Corporation Europe Limited Method and system for correcting a distorted input image
US9242602B2 (en) 2012-08-27 2016-01-26 Fotonation Limited Rearview imaging systems for vehicle
US10841668B2 (en) * 2013-08-09 2020-11-17 Icn Acquisition, Llc System, method and apparatus for remote monitoring
KR101493946B1 (en) * 2014-04-16 2015-02-17 하이네트(주) Correction method for image from wide angle lense and apparatuss thereof
DE102014112078A1 (en) * 2014-08-22 2016-02-25 Connaught Electronics Ltd. Method for transforming an input image into an output image, driver assistance system and motor vehicle
KR102279026B1 (en) * 2014-11-07 2021-07-19 삼성전자주식회사 Apparatus and Method for Providing Image of Detecting and Compensating Image Including at least One of Object
JP6374777B2 (en) * 2014-11-28 2018-08-15 ルネサスエレクトロニクス株式会社 Data processing method, program, and data processing apparatus
CN104767928B (en) * 2014-12-30 2018-05-11 上海孩子国科教设备有限公司 Method of adjustment, client, system and the device of correlation rule view data
US10275863B2 (en) 2015-04-03 2019-04-30 Cognex Corporation Homography rectification
US9542732B2 (en) * 2015-04-03 2017-01-10 Cognex Corporation Efficient image transformation
US10186012B2 (en) 2015-05-20 2019-01-22 Gopro, Inc. Virtual lens simulation for video and photo cropping
JP6563358B2 (en) * 2016-03-25 2019-08-21 日立オートモティブシステムズ株式会社 Image processing apparatus and image processing method
CN106780291B (en) * 2017-01-03 2020-06-23 珠海全志科技股份有限公司 Real-time distortion image processing accelerating device
CN107220925B (en) * 2017-05-05 2018-10-30 珠海全志科技股份有限公司 A kind of real-time virtual reality accelerating method and device
US20180332219A1 (en) 2017-05-10 2018-11-15 Fotonation Limited Wearable vision system and method of monitoring a region
CN108230263B (en) * 2017-12-11 2021-11-23 中国航空工业集团公司洛阳电光设备研究所 High-security-level image distortion correction device and method
US10726522B2 (en) * 2018-01-24 2020-07-28 Fotonation Limited Method and system for correcting a distorted input image
US10356346B1 (en) * 2018-02-26 2019-07-16 Fotonation Limited Method for compensating for off-axis tilting of a lens
US11275941B2 (en) * 2018-03-08 2022-03-15 Regents Of The University Of Minnesota Crop models and biometrics
CN110544209B (en) * 2018-05-29 2022-10-25 京东方科技集团股份有限公司 Image processing method and equipment and virtual reality display device
US10909225B2 (en) * 2018-09-17 2021-02-02 Motorola Mobility Llc Electronic devices and corresponding methods for precluding entry of authentication codes in multi-person environments
US10909668B1 (en) 2019-07-31 2021-02-02 Nxp Usa, Inc. Adaptive sub-tiles for distortion correction in vision-based assistance systems and methods
US11483547B2 (en) 2019-12-04 2022-10-25 Nxp Usa, Inc. System and method for adaptive correction factor subsampling for geometric correction in an image processing system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5508734A (en) 1994-07-27 1996-04-16 International Business Machines Corporation Method and apparatus for hemispheric imaging which emphasizes peripheral content
US5912676A (en) * 1996-06-14 1999-06-15 Lsi Logic Corporation MPEG decoder frame memory interface which is reconfigurable for different frame store architectures
US20070291047A1 (en) * 2006-06-16 2007-12-20 Michael Harville System and method for generating scale maps
WO2008014472A2 (en) * 2006-07-27 2008-01-31 Qualcomm Incorporated Efficient memory fetching for motion compensation video decoding process
US20100111440A1 (en) 2008-10-31 2010-05-06 Motorola, Inc. Method and apparatus for transforming a non-linear lens-distorted image

Family Cites Families (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US1906509A (en) 1928-01-17 1933-05-02 Firm Photogrammetrie G M B H Correction for distortion the component pictures produced from different photographic registering devices
US3251283A (en) 1964-02-11 1966-05-17 Itek Corp Photographic system
US3356002A (en) 1965-07-14 1967-12-05 Gen Precision Inc Wide angle optical system
EP0146770B1 (en) 1981-08-24 1987-11-19 MEIER, Walter Lens attachment for the projection of stereoscopic images anamorphotically compressed on a spherical surface
US5594363A (en) * 1995-04-07 1997-01-14 Zycad Corporation Logic cell and routing architecture in a field programmable gate array
US5850470A (en) 1995-08-30 1998-12-15 Siemens Corporate Research, Inc. Neural network for locating and recognizing a deformable object
US6466254B1 (en) 1997-05-08 2002-10-15 Be Here Corporation Method and apparatus for electronically distributing motion panoramic images
US6044181A (en) * 1997-08-01 2000-03-28 Microsoft Corporation Focal length estimation method and apparatus for construction of panoramic mosaic images
JP3695119B2 (en) 1998-03-05 2005-09-14 株式会社日立製作所 Image synthesizing apparatus and recording medium storing program for realizing image synthesizing method
JPH11298780A (en) 1998-04-10 1999-10-29 Nhk Eng Service Wide-area image-pickup device and spherical cavity projection device
US6219099B1 (en) * 1998-09-23 2001-04-17 Honeywell International Inc. Method and apparatus for calibrating a display using an array of cameras
US6175454B1 (en) 1999-01-13 2001-01-16 Behere Corporation Panoramic imaging arrangement
US20050084175A1 (en) * 2003-10-16 2005-04-21 Olszak Artur G. Large-area imaging by stitching with array microscope
US6697521B2 (en) * 2001-06-15 2004-02-24 Nokia Mobile Phones Ltd. Method and system for achieving coding gains in wavelet-based image codecs
US7184609B2 (en) 2002-06-28 2007-02-27 Microsoft Corp. System and method for head size equalization in 360 degree panoramic images
US7058237B2 (en) 2002-06-28 2006-06-06 Microsoft Corporation Real-time wide-angle image correction system and method for computer image viewing
JP4402400B2 (en) 2003-08-28 2010-01-20 オリンパス株式会社 Object recognition device
US8427538B2 (en) * 2004-04-30 2013-04-23 Oncam Grandeye Multiple view and multiple object processing in wide-angle video camera
JP4426436B2 (en) 2004-12-27 2010-03-03 株式会社日立製作所 Vehicle detection device
JP4847150B2 (en) 2005-02-21 2011-12-28 富士フイルム株式会社 Wide-angle imaging lens
US7613357B2 (en) 2005-09-20 2009-11-03 Gm Global Technology Operations, Inc. Method for warped image object recognition
JP4841929B2 (en) 2005-10-21 2011-12-21 富士フイルム株式会社 Wide-angle imaging lens
JP4841928B2 (en) 2005-10-21 2011-12-21 富士フイルム株式会社 Wide-angle imaging lens
US7970239B2 (en) 2006-01-19 2011-06-28 Qualcomm Incorporated Hand jitter reduction compensating for rotational motion
WO2007120370A1 (en) 2006-04-10 2007-10-25 Alex Ning Ultra-wide angle objective lens
JP4396674B2 (en) 2006-08-11 2010-01-13 船井電機株式会社 Panorama imaging device
JP4908995B2 (en) 2006-09-27 2012-04-04 株式会社日立ハイテクノロジーズ Defect classification method and apparatus, and defect inspection apparatus
KR100826571B1 (en) 2006-10-24 2008-05-07 주식회사 나노포토닉스 Wide-angle lenses
JP4902368B2 (en) 2007-01-24 2012-03-21 三洋電機株式会社 Image processing apparatus and image processing method
US7773118B2 (en) 2007-03-25 2010-08-10 Fotonation Vision Limited Handheld article with movement discrimination
JP2009063941A (en) 2007-09-10 2009-03-26 Sumitomo Electric Ind Ltd Far-infrared camera lens, lens unit, and imaging apparatus
JP5194679B2 (en) 2007-09-26 2013-05-08 日産自動車株式会社 Vehicle periphery monitoring device and video display method
ATE452379T1 (en) 2007-10-11 2010-01-15 Mvtec Software Gmbh SYSTEM AND METHOD FOR 3D OBJECT RECOGNITION
US8311344B2 (en) 2008-02-15 2012-11-13 Digitalsmiths, Inc. Systems and methods for semantically classifying shots in video
KR101188588B1 (en) 2008-03-27 2012-10-08 주식회사 만도 Monocular Motion Stereo-Based Free Parking Space Detection Apparatus and Method
US8525871B2 (en) 2008-08-08 2013-09-03 Adobe Systems Incorporated Content-aware wide-angle images
US8340453B1 (en) * 2008-08-29 2012-12-25 Adobe Systems Incorporated Metadata-driven method and apparatus for constraining solution space in image processing techniques
US8264524B1 (en) 2008-09-17 2012-09-11 Grandeye Limited System for streaming multiple regions deriving from a wide-angle camera
US8116587B2 (en) * 2010-02-16 2012-02-14 Ricoh Co., Ltd. Method and apparatus for high-speed and low-complexity piecewise geometric transformation of signals
US8692867B2 (en) * 2010-03-05 2014-04-08 DigitalOptics Corporation Europe Limited Object detection and rendering for wide field of view (WFOV) image acquisition systems
US20100283868A1 (en) 2010-03-27 2010-11-11 Lloyd Douglas Clark Apparatus and Method for Application of Selective Digital Photomontage to Motion Pictures
US8903468B2 (en) * 2010-10-13 2014-12-02 Gholam Peyman Laser coagulation of an eye structure from a remote location
US20120162449A1 (en) * 2010-12-23 2012-06-28 Matthias Braun Digital image stabilization device and method
US8896703B2 (en) 2011-03-31 2014-11-25 Fotonation Limited Superresolution enhancment of peripheral regions in nonlinear lens geometries
US8947501B2 (en) 2011-03-31 2015-02-03 Fotonation Limited Scene enhancements in off-center peripheral regions for nonlinear lens geometries
US8723959B2 (en) 2011-03-31 2014-05-13 DigitalOptics Corporation Europe Limited Face and other object tracking in off-center peripheral regions for nonlinear lens geometries
US8493460B2 (en) * 2011-09-15 2013-07-23 DigitalOptics Corporation Europe Limited Registration of differently scaled images
US9124895B2 (en) * 2011-11-04 2015-09-01 Qualcomm Incorporated Video coding with network abstraction layer units that include multiple encoded picture partitions
US9740912B2 (en) * 2011-12-20 2017-08-22 Definiens Ag Evaluation of co-registered images of differently stained tissue slices
WO2013116394A2 (en) * 2012-01-30 2013-08-08 Mark Olsson Adjustable variable resolution inspection systems and methods
US8928730B2 (en) * 2012-07-03 2015-01-06 DigitalOptics Corporation Europe Limited Method and system for correcting a distorted input image
US9280810B2 (en) * 2012-07-03 2016-03-08 Fotonation Limited Method and system for correcting a distorted input image
US9242602B2 (en) 2012-08-27 2016-01-26 Fotonation Limited Rearview imaging systems for vehicle
US9349210B2 (en) * 2012-11-30 2016-05-24 Arm Limited Methods of and apparatus for using textures in graphics processing systems

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5508734A (en) 1994-07-27 1996-04-16 International Business Machines Corporation Method and apparatus for hemispheric imaging which emphasizes peripheral content
US5912676A (en) * 1996-06-14 1999-06-15 Lsi Logic Corporation MPEG decoder frame memory interface which is reconfigurable for different frame store architectures
US20070291047A1 (en) * 2006-06-16 2007-12-20 Michael Harville System and method for generating scale maps
WO2008014472A2 (en) * 2006-07-27 2008-01-31 Qualcomm Incorporated Efficient memory fetching for motion compensation video decoding process
US20100111440A1 (en) 2008-10-31 2010-05-06 Motorola, Inc. Method and apparatus for transforming a non-linear lens-distorted image
WO2010051147A2 (en) * 2008-10-31 2010-05-06 General Instrument Corporation Method and apparatus for transforming a non-linear lens-distorted image

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2983136A2 (en) 2013-03-18 2016-02-10 FotoNation Limited A method and apparatus for motion estimation
US10229504B2 (en) 2013-03-18 2019-03-12 Fotonation Limited Method and apparatus for motion estimation
EP3101622A2 (en) 2015-06-02 2016-12-07 FotoNation Limited An image acquisition system
EP3101622A3 (en) * 2015-06-02 2016-12-21 FotoNation Limited An image acquisition system
CN108352054A (en) * 2015-08-26 2018-07-31 快图有限公司 Image processing equipment
WO2017032468A1 (en) 2015-08-26 2017-03-02 Fotonation Limited Image processing apparatus
CN108352054B (en) * 2015-08-26 2022-05-03 快图有限公司 Image processing apparatus
US11106894B2 (en) 2015-08-26 2021-08-31 Fotonation Limited Image processing apparatus
US10115003B2 (en) 2015-08-26 2018-10-30 Fotonation Limited Image processing apparatus
US10497089B2 (en) 2016-01-29 2019-12-03 Fotonation Limited Convolutional neural network
US11087433B2 (en) 2016-01-29 2021-08-10 Fotonation Limited Convolutional neural network
WO2017129325A1 (en) 2016-01-29 2017-08-03 Fotonation Limited A convolutional neural network
US9665799B1 (en) 2016-01-29 2017-05-30 Fotonation Limited Convolutional neural network
US9934559B2 (en) 2016-02-19 2018-04-03 Fotonation Limited Method for correcting an acquired image
WO2017140566A1 (en) 2016-02-19 2017-08-24 Fotonation Limited A method for correcting an acquired image
US9743001B1 (en) 2016-02-19 2017-08-22 Fotonation Limited Method of stabilizing a sequence of images
US10515439B2 (en) 2016-02-19 2019-12-24 Fotonation Limited Method for correcting an acquired image
US11257192B2 (en) 2016-02-19 2022-02-22 Fotonation Limited Method for correcting an acquired image
WO2017140438A1 (en) 2016-02-19 2017-08-24 Fotonation Limited A method of stabilizing a sequence of images
US11477382B2 (en) 2016-02-19 2022-10-18 Fotonation Limited Method of stabilizing a sequence of images
US10148943B2 (en) 2016-08-08 2018-12-04 Fotonation Limited Image acquisition device and method based on a sharpness measure and an image acquistion parameter
US10334152B2 (en) 2016-08-08 2019-06-25 Fotonation Limited Image acquisition device and method for determining a focus position based on sharpness
US11223764B2 (en) 2017-03-24 2022-01-11 Fotonation Limited Method for determining bias in an inertial measurement unit of an image acquisition device
US10757333B2 (en) 2017-03-24 2020-08-25 Fotonation Limited Method for determining bias in an inertial measurement unit of an image acquisition device
US10097757B1 (en) 2017-03-24 2018-10-09 Fotonation Limited Method for determining bias in an inertial measurement unit of an image acquisition device
EP3379202A1 (en) 2017-03-24 2018-09-26 FotoNation Limited A method for determining bias in an inertial measurement unit of an image acquisition device
EP3602480A4 (en) * 2017-08-22 2020-03-04 Samsung Electronics Co., Ltd. Electronic devices for and methods of implementing memory transfers for image warping in an electronic device
US11064119B2 (en) 2017-10-03 2021-07-13 Google Llc Video stabilization
US11683586B2 (en) 2017-10-03 2023-06-20 Google Llc Video stabilization
US10462370B2 (en) 2017-10-03 2019-10-29 Google Llc Video stabilization
US11227146B2 (en) 2018-05-04 2022-01-18 Google Llc Stabilizing video by accounting for a location of a feature in a stabilized view of a frame
US10853909B2 (en) 2018-11-05 2020-12-01 Fotonation Limited Image processing apparatus
US10983363B2 (en) 2019-09-19 2021-04-20 Fotonation Limited Method for stabilizing a camera frame of a video sequence
US11531211B2 (en) 2019-09-19 2022-12-20 Fotonation Limited Method for stabilizing a camera frame of a video sequence
EP3796639A1 (en) 2019-09-19 2021-03-24 FotoNation Limited A method for stabilizing a camera frame of a video sequence
US11190689B1 (en) 2020-07-29 2021-11-30 Google Llc Multi-camera video stabilization
US11856295B2 (en) 2020-07-29 2023-12-26 Google Llc Multi-camera video stabilization

Also Published As

Publication number Publication date
US8928730B2 (en) 2015-01-06
US9262807B2 (en) 2016-02-16
US20150178897A1 (en) 2015-06-25
EP2870585A1 (en) 2015-05-13
US20140009568A1 (en) 2014-01-09
EP2870585B1 (en) 2016-12-14

Similar Documents

Publication Publication Date Title
EP2870585B1 (en) A method and system for correcting a distorted image
US9280810B2 (en) Method and system for correcting a distorted input image
JP5437311B2 (en) Image correction method, image correction system, angle estimation method, and angle estimation device
US8855441B2 (en) Method and apparatus for transforming a non-linear lens-distorted image
US11599975B2 (en) Methods and system for efficient processing of generic geometric correction engine
JP3450833B2 (en) Image processing apparatus and method, program code, and storage medium
US20200143516A1 (en) Data processing systems
JP4715334B2 (en) Vehicular image generation apparatus and method
CN104917955A (en) Image Transformation And Multi-View Output Systems And Methods
US20210176395A1 (en) Gimbal system and image processing method thereof and unmanned aerial vehicle
US11915442B2 (en) Method and apparatus for arbitrary output shape processing of an image
CN105741233B (en) Video image spherical surface splicing method and system
US20210090220A1 (en) Image de-warping system
EP3101622B1 (en) An image acquisition system
JP2013101525A (en) Image processing device, method, and program
DK3189493T3 (en) PERSPECTIVE CORRECTION OF DIGITAL PHOTOS USING DEPTH MAP
US11244431B2 (en) Image processing
KR101140953B1 (en) Method and apparatus for correcting distorted image
US20210042891A1 (en) Method and apparatus for dynamic block partition of an image
US10223766B2 (en) Image processing including geometric distortion adjustment
KR20150019192A (en) Apparatus and method for composition image for avm system
TW201843648A (en) Image Perspective Conversion Method and System Thereof
US9781353B2 (en) Image processing apparatus, electronic apparatus, and image processing method
JP6273881B2 (en) Image processing apparatus, image processing method, and program
GB2561170A (en) Image processing

Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application
     Ref document number: 13728365
     Country of ref document: EP
     Kind code of ref document: A1
REEP Request for entry into the european phase
     Ref document number: 2013728365
     Country of ref document: EP
WWE  Wipo information: entry into national phase
     Ref document number: 2013728365
     Country of ref document: EP
NENP Non-entry into the national phase
     Ref country code: DE