EP0548080A1 - Image data encoding and compression - Google Patents

Image data encoding and compression

Info

Publication number
EP0548080A1
Authority
EP
European Patent Office
Prior art keywords
image
data
intensity
edges
representing
Prior art date
Legal status
Withdrawn
Application number
EP19910910434
Other languages
German (de)
English (en)
Inventor
Peng Seng Toh
Current Assignee
Axiom Innovation Ltd
Original Assignee
Axiom Innovation Ltd
Priority date
Filing date
Publication date
Priority claimed from GB909011908A external-priority patent/GB9011908D0/en
Application filed by Axiom Innovation Ltd filed Critical Axiom Innovation Ltd
Publication of EP0548080A1 publication Critical patent/EP0548080A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00: Image coding
    • G06T9/20: Contour coding, e.g. using detection of edges

Definitions

  • the invention relates to image data encoding and compression.
  • raw image data input from an image source such as an electronic camera is preprocessed to provide a higher level representation of the image before the image is analysed.
  • the raw image is said to be at the lowest level and the level of representation increases as the degree of abstraction increases.
  • Structural matching, also known as high-level or relational matching, uses high-level features such as regions, bodies or the relationships between features as a matching primitive.
  • High-level features have some kind of semantic description of the object and can be represented in various forms such as graphs, stars, networks and circuits. The distinctive characteristic of these representations is the existence of a hierarchical structure.
  • one known approach is to group high-level features into a hierarchical structure comprising bodies, surfaces, curves, junctions and edgels.
  • the body feature is the highest level and is formed by several surfaces located in the hierarchy one level below. Surfaces are formed by curves and junctions. The lowest level consists of edgels which make up the curves and junctions.
  • the highest level in a structure, in this example the body, has the most distinctive attributes and should result in a less ambiguous match. Matching then traverses down the hierarchy until the lowest level is reached.
  • a star structure approach can be used to define at a node the relationship with all neighbouring nodes including the node itself plus all the links to the neighbouring nodes.
  • the advantage of structural matching as a whole is the ability to avoid local mismatches, and this leads directly to a meaningful 3D description of the scene. Views with larger separations and transformations are more likely to be matched using structural matching than they are using other primitives.
  • Feature matching starts from the basis that correspondence cannot take place at all points in the image and can only be applied to those points which can be identified without ambiguity in the two images.
  • Intensity based matching has, despite many arguments against it, enjoyed some success.
  • One known method of intensity matching is an application of a statistical paradigm of combining independent measurements. Many measurements are combined statistically to produce a more robust indication of correspondence than is possible with fewer measurements. In short, the improvement arises from the association of more attributes with each matching primitive.
  • the intensity based method is generally more time-consuming due to the vast numbers of matching candidates and one of the major setbacks is its inability to handle homogeneous or uniform brightness areas. These areas do not have gray level variation which is essential to correlation measurement.
  • Another disadvantage is the need to define a local area in which correspondence is sought. The size of the local area, usually in the form of a correlation mask, is crucial to the method and yet it is always chosen arbitrarily.
  • an image processing system in which an acquired image is processed to identify edges in the image and to represent the intensity profile of image portions between detected edges as a respective mathematical expression, thereby to reduce the amount of data used to define the image.
  • an image processing system comprising an image source for supplying digital electronic image data, an edge detector for detecting edges in the supplied image and for creating an edge map therefrom, and an integrating processor for combining in the edge map data mathematical expressions representing the intensity of image portions between the detected edges, thereby to reduce the amount of data defining the image.
  • a method of encoding data representing an image comprising smoothing initial image data to suppress noise and fitting a continuous equation to image intensity profile portions bounded by abrupt intensity changes.
  • the invention provides a multiple view vision system in which features in different images representing a scene viewed from different respective locations are marked by comparing one set of encoded data representing intensity profiles for image portions defined between abrupt intensity changes in one image with a similar set of data representing another image.
  • the invention provides a system for processing image data, the system comprising acquiring means for acquiring at least one image, first storing means for temporarily storing data representing the acquired image, detecting means for detecting edges in the acquired image, defining means for defining intensity profiles between the detected edges as respective mathematical expressions on an image line by line basis.
  • a feature point may be regarded as an active element representing an abruptly changing feature such as an edge and a non-feature point may be regarded as a picture element representing, together with other picture elements in its vicinity, a slowly changing feature such as a change in surface shade.
  • Figure 1 is a schematic view of a system according to the invention
  • Figure 2 shows a) an image portion and b) an intensity profile associated with the image portion
  • Figure 3 illustrates image geometry
  • Figure 4 shows a weighting function for deemphasising edges
  • Figure 5 shows corresponding intensity profiles in two differently viewed images of the same scene
  • Figure 6 is a flow diagram of a multiple pass matching technique
  • Figure 7 is an image restoration algorithm
  • Figure 8 is an image edges decompressing algorithm.
  • Image data from an image source 2, which may for example be an electronic camera, is input to a smoothing circuit 3. Encoding is carried out in two stages. In the first of these stages the image data is smoothed to suppress noise and edges are detected by an edge detector 4; in the second stage, as will be described in greater detail hereinafter, a polynomial is fitted to shading information between detected edges, which information is held in a shading store 5.
  • the image data is subjected to smoothing by the smoothing circuit 3.
  • the smoothing circuit 3 uses a standard convolution such as a Gaussian convolution to suppress noise such as spikes or other glitches in the incoming image data.
  • the smoothing circuit 3 delivers the smoothed image to the edge detector 4 which is arranged to detect edges as sharp intensity changes or discontinuities using any suitable known method of edge detection.
  • Global or two-dimensional edge detection is preferred, though linear scan-line or one-dimensional edge detection can be used instead. The reason why one-dimensional edge detection can be used is that, as will become clearer from the description that follows, the encoding technique only preserves non-horizontal edges, assuming a horizontally scanned image raster, and does not preserve horizontal edges.
  • the edge detector 4 outputs an edge map which represents edges detected in the image and which is held in any suitable store 6. Once the edge map has been created it is used by an integrator 7 to define boundaries or anchor points in the image. These anchor points define positions in the image between which a polynomial function can be fitted to the shading profile of the image.
  • the polynomial function is preferably obtained by least square fitting to the shading profile.
  • Figure 2 of the accompanying drawings shows (a) an exemplary image 10 and (b) an exemplary intensity profile 11 along a horizontal scan line 12 in the image 10.
  • the image 10 includes edges 13, which are detected by the edge detector 4, and areas of shading, i.e. varying or constant intensity between the edges 13.
  • points on the line 12 corresponding to edges 13 in the image are seen as discontinuities at X0, X1, X2, X3, X4 and X5 in the intensity profile 11. Between these points the intensity profile is constant or continuously and smoothly varying.
  • the intensity profile portions between X0 and X1, between X1 and X2, and so on can each be represented by a polynomial equation, represented in Figure 2 as I1(x), I2(x), I3(x), I4(x) and I5(x).
  • a polynomial can approximate to a large number of pixels using only a few parameters.
  • least-square fitting with a polynomial reduces noise, such as interference spikes and camera noise.
  • very slight intensity variations due to surface texture, which are of course undesirable, are also removed.
  • a polynomial fit is easily implemented by numerical algorithms on any suitable computer or image processor. Nevertheless, the application of a polynomial least-square method is not without difficulties.
  • the intensity profile along the entire length of a scan line is a complex curve, and this curve cannot be represented simply by a single polynomial.
  • the present embodiment overcomes this problem by segmenting the scan line into several portions, each represented by a low order polynomial; low order polynomials are preferred because of their stability.
  • the joints between these segments correspond to edges in the image and therefore correspond also to discontinuities in the intensity profile of the image. This ensures that the low order fitted polynomial will be accurate because there will be no discontinuities within the segment of the intensity profile to which the polynomial function is being fitted. Since the polynomial function is fitted strictly to the profile in-between edge points, the condition of smoothness can be well satisfied.
  • Each intensity profile portion is approximated by a polynomial function Ii(x) of the form Ii(x) = a0 + a1x + a2x^2 + ... + anx^n, where the coefficients ak are determined by the least-square fit and n is the (low) order of the polynomial.
  • each line in the image is expressed as a collection of edge coordinates x1 ... x5, for example, interleaved with polynomial or other continuous equations defining the intensity profile between consecutive edge coordinates.
  • the number of samples involved in generating Ii and Ij is usually different.
  • independent variables xl and xr denote the horizontal coordinates of the left and right images respectively.
  • the intensity across the strip can be expressed as I = pS(N.L), where: p is the surface albedo, which is constant across a strip 31 on the surface 30 because any discontinuity in p also appears as an intensity discontinuity and would be detected by the edge detector; S is the intensity of the incident light, whose variation across the strip is negligible; and N and L are the space vectors of the normal of the surface orientation and the direction of the incident light respectively.
  • the dependency of the expression on image coordinate space is omitted for the sake of clarity.
  • the intensity I and the surface normal N are different along the strip.
  • the intensity derivatives dI and d²I correspond to the order of the curvature of the strip 31. If the strip 31 has a planar curvature, dN is constant and therefore dI is also constant. It follows that the second and higher intensity derivatives will be zero for a planar surface. If, however, a strip has a surface which is defined by a second order polynomial, then d²I will not be zero. It follows that if d²I is not zero the strip is non-planar.
  • the polynomial representation of the combined profile is extracted from the image using a trapezoidal weighting function, as shown in Figure 4.
  • the purpose of this weighting function is to suppress the influence of sudden changes in intensity at edges in the image.
  • the weighting function is maximum in a central area between edges and tapers to a minimum in the vicinity of detected edges.
  • This weighting function is first applied to the intensity profile and a suitably adjusted polynomial representing the profile is then calculated as previously or as a product of the weighting function and a previously calculated polynomial.
  • the classification criterion for classifying a surface as planar or non-planar is very simple. Any strip with an intensity profile represented by a polynomial of order higher than one is non-planar, and a strip is planar if its polynomial order is less than or equal to one. Local shading analysis suggests that a planar curve is one whose second order derivative is zero or, equivalently, whose intensity is at most first order.
  • ε is a preset threshold
  • a small threshold ε1 is first chosen for matching candidates along the scanline. This sets a very strict test to be passed and, under such a stringent criterion, very few pairs of matches will normally occur.
  • the value of the threshold ε is then progressively relaxed for subsequent passes of matching for the same scanline until an upper limit ε2 is reached, or all candidates are matched.
  • the reliability of the matching varies with the value of the threshold ε. For instance, a smaller threshold will produce a more reliable result; a reliability factor can thus be assigned to each matched result, and this will facilitate further refinement if required.
  • the edge map alone may be reconstituted by way of the algorithm shown in Figure 8 of the accompanying drawings.
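
The two-stage encoding described above (detect edges along a scan line, then least-square fit a low order function to each inter-edge intensity segment) can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation: the intensity-jump threshold, the first-order (straight-line) fit and the segment data layout are all assumptions made for a minimal runnable example.

```python
def detect_edges(line, jump=30):
    # Mark a pixel as an edge when the intensity jump to its left
    # neighbour exceeds `jump` (a crude stand-in for the edge detector 4).
    edges = [0]
    for i in range(1, len(line)):
        if abs(line[i] - line[i - 1]) > jump:
            edges.append(i)
    edges.append(len(line))
    return edges

def fit_linear(xs, ys):
    # Closed-form least-square straight line y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    b = 0.0 if var == 0 else sum((x - mx) * (y - my)
                                 for x, y in zip(xs, ys)) / var
    return my - b * mx, b

def encode_scanline(line, jump=30):
    # Encode one scan line as edge coordinates interleaved with the
    # fitted coefficients of each inter-edge intensity segment.
    edges = detect_edges(line, jump)
    return [(lo, hi, fit_linear(range(lo, hi), line[lo:hi]))
            for lo, hi in zip(edges, edges[1:])]
```

Decoding simply re-evaluates each fitted function over its coordinate range; this is the data reduction the patent aims at, since a segment of arbitrary length collapses to its end points plus a few coefficients.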
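
The trapezoidal weighting function of Figure 4, maximal in the central area between edges and tapering to a minimum near the detected edges, can be sketched as a simple ramp profile. The taper width is an assumed parameter, not taken from the patent.

```python
def trapezoid_weights(length, taper=3):
    # Trapezoidal weighting for one inter-edge segment: 1.0 in the
    # central region, ramping linearly down to 0 over `taper` samples
    # at each end, so sudden changes at edges are de-emphasised.
    w = []
    for i in range(length):
        d = min(i, length - 1 - i)  # distance to the nearest edge
        w.append(min(1.0, d / taper))
    return w
```

Multiplying the intensity samples by these weights before fitting reduces the influence of the abrupt edge transitions on the fitted polynomial.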
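
The planar/non-planar test (a planar strip has zero second intensity derivative, i.e. an at most first-order profile) can be sketched with finite differences; the tolerance is an assumed parameter standing in for measurement noise.

```python
def classify_strip(intensities, tol=1e-6):
    # Second finite difference approximates the second intensity
    # derivative; a planar strip has it (near) zero everywhere,
    # i.e. the intensity profile is at most a straight line.
    d2 = [intensities[i - 1] - 2 * intensities[i] + intensities[i + 1]
          for i in range(1, len(intensities) - 1)]
    return "planar" if all(abs(v) <= tol for v in d2) else "non-planar"
```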
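
The multiple-pass matching of Figure 6, which starts with a strict threshold ε1 and relaxes it towards ε2 over successive passes while recording a reliability for each match, can be sketched as below. The dissimilarity costs, the linear relaxation schedule and the greedy one-to-one assignment are assumptions for illustration.

```python
def relaxed_matching(costs, eps_start, eps_stop, passes):
    # costs[i][j] is the dissimilarity between left candidate i and
    # right candidate j. Early passes use a strict threshold, so
    # matches made then get a better (lower) reliability number.
    matches = {}   # left index -> (right index, pass matched = reliability)
    taken = set()  # right candidates already claimed
    for p in range(passes):
        # Relax the threshold linearly from eps_start to eps_stop.
        eps = eps_start + (eps_stop - eps_start) * p / max(1, passes - 1)
        for i, row in enumerate(costs):
            if i in matches:
                continue
            cands = [(c, j) for j, c in enumerate(row) if j not in taken]
            if cands:
                c, j = min(cands)
                if c <= eps:
                    matches[i] = (j, p + 1)
                    taken.add(j)
        if len(matches) == len(costs):
            break  # all candidates matched; stop early
    return matches
```

A match accepted in pass 1 passed the strictest test and can be treated as the most reliable, which is the reliability factor the description mentions.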

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An image processing system comprises a source of digital electronic image data to be processed. The image data is processed line by line by a processor which detects edges in the image, for example as discontinuities in intensity, and which represents the intensity profile between the detected edges as a mathematical expression such as a polynomial function. Data representing the detected edges and data relating to the mathematical expressions are stored as a data set for subsequent processing. The data set for one image can be compared with that of another, similar image in order to match features between the images and thereby to produce three-dimensional information about a scene.
EP19910910434 1990-05-29 1991-05-29 Codage et compression de donnees d'images Withdrawn EP0548080A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB9011908 1990-05-29
GB909011908A GB9011908D0 (en) 1990-05-29 1990-05-29 Improvements in image data compression
GB9028204 1990-12-31
GB9028204A GB2244805A (en) 1990-05-29 1990-12-31 Image data encoding and compression

Publications (1)

Publication Number Publication Date
EP0548080A1 (fr) 1993-06-30

Family

ID=26297124

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19910910434 Withdrawn EP0548080A1 (fr) 1990-05-29 1991-05-29 Codage et compression de donnees d'images

Country Status (3)

Country Link
EP (1) EP0548080A1 (fr)
AU (1) AU7986491A (fr)
WO (1) WO1991019263A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0562672A3 (en) * 1992-03-22 1994-07-13 Igp Res & Dev Ltd Process of picture representation by data compression
CA2312983C (fr) 1997-12-05 2008-06-03 Force Technology Corp. Compression et expansion continues de gradation de donnees d'image ou acoustiques basees sur une approximation de fonction polynomiale

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4340940A (en) * 1980-08-26 1982-07-20 Rca Corporation Hardware reduction by truncation of selected number of most significant bits for digital video system using subsampling and adaptive reconstruction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9119263A1 *

Also Published As

Publication number Publication date
WO1991019263A1 (fr) 1991-12-12
AU7986491A (en) 1991-12-31

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 19930422

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): DE ES FR GB GR IT NL

17Q First examination report despatched

Effective date: 19960620

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 19980718