WO1997004420A1 - Processing a video signal so as to modify an image represented thereby - Google Patents

Processing a video signal so as to modify an image represented thereby

Info

Publication number
WO1997004420A1
WO1997004420A1 PCT/GB1996/001768 GB9601768W WO9704420A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
eye
areas
iris
parts
Prior art date
Application number
PCT/GB1996/001768
Other languages
English (en)
Inventor
David John Machin
Original Assignee
British Telecommunications Public Limited Company
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by British Telecommunications Public Limited Company filed Critical British Telecommunications Public Limited Company
Priority to CA002227626A (CA2227626A1)
Priority to EP96925030A (EP0843869A1)
Priority to AU65283/96A (AU6528396A)
Publication of WO1997004420A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/44 Morphing

Definitions

  • This invention relates to a method of processing a video signal representing an image which includes at least one eye of a human face, the processing being such as to change the direction in which the eye(s) appear to be looking in the represented image, the method comprising locating a part or parts of the signal which represent areas of the image whose boundaries window an eye and modifying this part or parts in such manner as to move, within each of the represented areas, the pupil and iris of the corresponding eye relative to the boundary of that area.
  • The invention also relates to a signal processor for carrying out such a method.
  • Such services may take the form, for example, of video links added to basic telephone links, so that parties to telephone calls between locations can not only hear, but also see, the people with whom they are communicating.
  • If video displays are to be of a reasonable size, for example commensurate with displays provided by conventional non-portable television receivers, a problem arises because a video camera also has to be provided at each location. Because a party to a call will normally be looking at his display, he will not also be looking at the camera unless the camera is effectively or actually located in the same direction as the display.
  • Locating the camera actually in front of the display will obviously cause it to obscure part of the display and various proposals have been made, for example using semi-transparent mirrors, to enable it to be located effectively rather than actually in front of the display.
  • There are drawbacks inherent in all of these proposals, and it is usual at present to locate the camera adjacent to one edge of the display, resulting in each party to a call appearing to the or each other party as if he is not looking at him, because he is looking at the display rather than the camera.
  • WO-A-92/14340 proposes that this situation be artificially corrected by modifying the video signals from each camera in such a way as to change the directions in which people present in the images represented by the signals appear to be looking, more particularly so that they appear to be looking at the camera.
  • What is proposed in WO-A-92/14340 appears to be a method of the general kind set forth in the first paragraph of the present document. In the proposed method it appears that the parts of each video signal which represent the whites of the eyes of a person represented are located and, within these, the parts which represent the pupils (and possibly the irises as well) of the eyes.
  • The latter parts are apparently moved within the video signal so as to take up new positions within the former parts so that, in the represented image, the pupils (and irises) adopt changed positions within the whites of the eyes.
  • Changing the positions of the pupils (and irises) in this way results, of course, in the creation of voids at positions which have been vacated, and these are filled in some unspecified manner with the colour of the whites of the eyes.
  • A disadvantage with such a method, recognised in WO-A-92/14340, is that discontinuities are produced in the image represented by the video signal, which discontinuities have to be smoothed. It can also be a disadvantage in some circumstances that such a method only permits the pupils and irises to be displaced in a horizontal direction (assuming the face is vertical), because that is the only direction in which the whites of the eyes are normally visible. It is an object of the present invention to enable these disadvantages to be mitigated.
  • According to the invention, a method of processing a video signal representing an image which includes at least one eye of a human face comprises locating a part or parts of the signal which represent areas of the image whose boundaries window an eye and modifying this part or parts in such manner as to move, within each of the represented areas, the pupil and iris of the corresponding eye relative to the boundary of that area, and is characterised in that said parts are moreover modified in such manner as to apply, within each of the represented areas, a polynomial spatial warp to the surroundings of the iris of the corresponding eye, which spatial warp is such as to cause these surroundings to accommodate the movement of the iris and pupil of the corresponding eye.
  • Each successive image frame of the video signal is temporarily stored in a frame store, templates of the pupils of left and right eyes are scanned over respective regions of the stored image frame, the difference between each template and the region of the stored image frame which it covers is determined for each position of that template, and the locations of said first and second areas for the corresponding image frame are determined by the positions of the two templates for which the corresponding said difference is smallest.
  • The processing may then be adapted to the size of the face in the image by scaling the sizes of the first and second areas in accordance with the separation between said positions of the two templates for the corresponding image frame.
  • The locations of said respective regions may be determined by the positions of the two templates which determined the locations of the first and second areas for the immediately preceding stored image frame.
  • The invention also provides apparatus for processing a video signal representing an image which includes at least one eye of a human face so as to change the direction in which the eye(s) appear to be looking in the represented image, the apparatus comprising locating means for locating parts of the signal which represent areas of the image whose boundaries window an eye and modifying means for modifying these parts in such manner as to move, within each of the represented areas, the pupil and iris of the corresponding eye relative to the boundary of that area, characterised in that the modifying means is moreover arranged to modify said parts in such manner as to apply, within each of the represented areas, a polynomial spatial warp to the surroundings of the iris of the corresponding eye, which spatial warp is such as to cause these surroundings to accommodate the movement of the iris and pupil of the corresponding eye.
  • Figure 1(a), Figure 1(b), Figure 1(c) and Figure 1(d) illustrate some examples of polynomial spatial warp which may be applied within a rectangular window;
  • Figure 2 shows how the window of Figure 1 can be centred on a human eye in an image;
  • Figure 3 is a block diagram of a video signal processing apparatus;
  • Figure 4 is a flow diagram of various operations carried out by the apparatus of Figure 3.
  • Figure 5 illustrates how the locations of areas containing the eyes may be obtained by processing a head and shoulders silhouette.
  • Spatial warps are discussed in many publications. In general, the coordinates (r, c) of the point in the source image which corresponds to the point (x, y) in the destination image are given by polynomials in x and y whose degree n is the order of the warp. For a first-order warp, r and c are given by:

    r = a + bx + dy + exy    (1)
    c = f + gx + hy + kxy    (2)
  • The values of the constants in the expressions for r and c can be calculated provided that a sufficient number of control or tie points are known, i.e. points or pixels in the source and destination images which correspond to each other. If a first-order warp is employed, so that r and c are given by equations (1) and (2) above, knowledge of four control or tie points will enable the eight constants a, b, d, e, f, g, h and k to be calculated by insertion of the values of (x,y) and (r,c) for each point into the equations in turn, thereby producing eight equations in the eight unknown constants. (If a warp of order two is employed, nine control points will be required, sixteen for a warp of order three, and so on.)
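  • As a purely illustrative sketch of the calculation just described (Python with NumPy is an assumed implementation language; the function name and the point ordering below are not taken from the patent), the eight constants can be obtained by solving two small linear systems built from the four tie points:

```python
import numpy as np

def first_order_warp_coefficients(src_pts, dst_pts):
    """Solve equations (1) and (2) for the eight constants of a first-order
    warp, given four tie points.

    src_pts are the (r, c) coordinates of the tie points in the source image
    and dst_pts the corresponding (x, y) coordinates in the destination image.
    Returns (a, b, d, e) such that r = a + b*x + d*y + e*x*y, and
    (f, g, h, k) such that c = f + g*x + h*y + k*x*y."""
    A = np.array([[1.0, x, y, x * y] for (x, y) in dst_pts], dtype=float)
    r = np.array([p[0] for p in src_pts], dtype=float)
    c = np.array([p[1] for p in src_pts], dtype=float)
    abde = np.linalg.solve(A, r)   # a, b, d, e
    fghk = np.linalg.solve(A, c)   # f, g, h, k
    return abde, fghk

# For instance, four corners of a square in the destination image that map
# back to a slightly skewed quadrilateral in the source image:
src = [(0, 0), (0, 10), (12, 11), (12, 1)]   # (r, c) in the source image
dst = [(0, 0), (10, 0), (10, 10), (0, 10)]   # (x, y) in the destination image
(a, b, d, e), (f, g, h, k) = first_order_warp_coefficients(src, dst)
```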
  • Hereafter, first-order warps will be considered, although it is to be understood that higher-order warps may be employed in implementing the invention, if desired.
  • If a first-order warp is applied to a quadrilateral which has a tie-point at each of its corners in the source image, positions within that quadrilateral will be mapped to positions within the quadrilateral which has the four tie-points at its corners in the destination image.
  • If a polynomial warp is required to be applied to an image or image portion having a rectilinear fixed boundary, the area within the boundary has to be divided into sub-areas in such a way that at least those sub-areas which are bounded in part by the area boundary are quadrilateral in form (so that the area boundary is defined in full in both the source and the destination image portions).
  • If a polynomial warp is applied to only a portion of an image, then, if discontinuities are not to occur at the boundary between that portion and the neighbouring portion of the image, the relationship between the tie-points which define the boundary must be the same in both the source and destination images.
  • Figures 1(a) - 1(d) each relate to the application of a first-order polynomial spatial warp within a fixed rectangular window having corners 1, 2, 3 and 4.
  • Figure 1(a) shows a very simple example in which the area within the window is divided into two sub-areas 10, 11, having tie-points 1,4,5,6 and 2,3,5,6 respectively.
  • The warp produced by displacement, as illustrated, of the tie-points 5 and 6 by equal amounts in the same direction, from 5a to 5b and 6a to 6b, is in fact a linear contraction of sub-area 10 in the X direction and a compensating linear expansion of sub-area 11. (Of course it is not essential that tie-points 5 and 6 be displaced by equal amounts, or even in the same direction; to do otherwise would simply result in a more complicated warp.)
  • Figure 1(b) shows a similar example in which tie-points 7, 8 are displaced in the Y direction from 7a, 8a to 7b, 8b, resulting in linear contraction of sub-area 12 and compensating linear expansion of sub-area 13.
  • Figure 1(c) shows what is effectively a combination of the procedures shown in Figures 1(a) and 1(b).
  • The area inside the window is in this case divided into four sub-areas 14, 15, 16, 17 having tie-points 1,5,7,9; 2,5,8,9; 3,6,8,9 and 4,6,7,9 respectively, where 9 is the point of intersection of straight lines joining points 7,8 and 5,6.
  • With the displacements of the tie-points shown in Figure 1(c), sub-area 14 undergoes a linear contraction in both the X and Y directions, sub-area 15 undergoes a linear contraction in the Y direction, sub-area 16 undergoes a linear expansion in both the X and Y directions, and sub-area 17 undergoes a linear expansion in the Y direction and a linear contraction in the X direction.
  • Figure 1(d) illustrates a warp which results in sub-area 26 within the rectangle 1,2,3,4 being translated intact when the warp is imposed.
  • If such a window is applied to an image which includes a human eye, as shown in Figure 2, the eye having a pupil 27, an iris 28, and eyelids 29 and 30, and is centred on the pupil of the eye, then the pupil can be moved within the rectangle in the X direction by imposing a warp of the kind illustrated in Figure 1(a), in the Y direction by imposing a warp of the kind illustrated in Figure 1(b), or in both the X and Y directions by imposing warps of the kinds illustrated in Figures 1(c) and 1(d).
  • In this way, the direction in which the eye appears to be looking can be changed so that, for example, it appears to be looking at a video camera which is generating a signal representing the image when it is in fact looking elsewhere, for example at a video display.
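  • By way of illustration only, a window of the Figure 1(d) kind might be described in software by tie-point quadrilaterals as in the following sketch; the 3 x 3 decomposition into sub-areas, the coordinate convention and all names are assumptions rather than details taken from the patent:

```python
def warp_template(window, inner, dx, dy):
    """Build (source_quad, destination_quad) pairs for a warp of the kind
    shown in Figure 1(d): the inner square 26 is translated intact by
    (dx, dy) while the surrounding sub-areas stretch or shrink to absorb
    the movement, the outer rectangle 1,2,3,4 staying fixed.

    window and inner are (x0, y0, x1, y1) axis-aligned rectangles, inner
    lying strictly inside window.  Each quad is a list of four (x, y)
    corners; corresponding corners of a source quad and a destination quad
    are tie points of the warp."""
    wx0, wy0, wx1, wy1 = window
    ix0, iy0, ix1, iy1 = inner
    xs_src = [wx0, ix0, ix1, wx1]            # grid lines of a 3 x 3 division
    ys_src = [wy0, iy0, iy1, wy1]
    xs_dst = [wx0, ix0 + dx, ix1 + dx, wx1]  # outer edges fixed, inner moved
    ys_dst = [wy0, iy0 + dy, iy1 + dy, wy1]
    pairs = []
    for j in range(3):
        for i in range(3):
            src = [(xs_src[i], ys_src[j]), (xs_src[i + 1], ys_src[j]),
                   (xs_src[i + 1], ys_src[j + 1]), (xs_src[i], ys_src[j + 1])]
            dst = [(xs_dst[i], ys_dst[j]), (xs_dst[i + 1], ys_dst[j]),
                   (xs_dst[i + 1], ys_dst[j + 1]), (xs_dst[i], ys_dst[j + 1])]
            pairs.append((src, dst))
    return pairs
```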
  • FIG. 3 is a block diagram of apparatus which may be used for carrying out a method in accordance with the invention. Only those parts of the apparatus which are particularly relevant for the purposes of the present description are indicated.
  • The apparatus of Figure 3 comprises a video camera 31 which has a video signal output 32 and a synchronising signal input 33, and a programmed video signal processor 34 which includes a central processing unit (CPU) 35, memory 36, and an analogue-to-digital (A/D) converter 37.
  • The memory 36 includes three video signal frame stores 38, 39 and 40.
  • The processor 34 has a video signal input 41 which is fed from the video signal output 32 of camera 31, a video signal output 42, and a (line and field) synchronising signal output 43 which is connected to the synchronising signal input 33 of camera 31.
  • The processor 34 is basically programmed to write in an ongoing fashion the digitised video signal generated by camera 31 corresponding to successive frames of the image picked up by the camera alternately into frame stores 38 and 39, and to read out the contents of each location of each of the stores 38 and 39 onto the output 42 just before it is newly written each time.
  • Thus a complete frame of image information is present alternately in the frame store 38 and the frame store 39 for processing, each time for the duration of one line period before it is read out, and the processor 34 imparts a delay of two frame periods between its input 41 and its output 42.
  • The processor 34 is furthermore programmed to initiate the sequence of steps shown in Figure 4 of the drawings whenever instructed, e.g. upon the entering of an appropriate command.
  • The various blocks etc. have the following significances:
  • Step 53: Search the approximate areas of the stored image determined in step 52 for the pupils of the two eyes and thereby determine the coordinates of the pupils in the stored image.
  • After being instructed to initiate the sequence of steps shown in Figure 4, processor 34 has two basic tasks: to determine the coordinates of the pupils of the eyes of a person included in a newly-stored image frame and to impart spatial warps to areas of the stored image within which the pupils are located. It first of all waits, by means of test 51, until the storage of an image frame in one of the frame stores 38 and 39 which is currently taking place is completed. When this is the case it carries out processing steps 52 and 53 to determine the coordinates of the pupils of the eyes of a person included in the newly-stored image frame.
  • It is assumed that the scenario depicted in the stored image has certain pre-determined characteristics, more particularly that the image is of the head and the upper part of the body of a person who is not looking straight ahead, as could be arranged to occur in a video-telephone application in which a seated person faces a display screen to the side of which is provided the camera 31 of Figure 3 to produce an image of the seated person.
  • This assumption is made because the processing required in a search for the pupils in the image is considerable, and becomes well-nigh prohibitive if the complete image has to be searched. If the search can be initially narrowed to relatively small areas of the image the problem is considerably alleviated.
  • Processor 34 determines in step 52 the head-and-shoulders silhouette of the person. In order to do this in the present embodiment use is made of the fact that a person is never completely still, so that the exact position of his silhouette will change from frame to frame. Accordingly, processor 34 subtracts the brightness information for each pixel in one of the stores 38 and 39 from the corresponding information in the other of these stores, subjects the moduli of the results to a threshold, and writes a single bit corresponding to each pixel into the corresponding location in frame store 40, the single bit indicating whether or not the threshold was exceeded for the corresponding pixel.
  • The resulting image in frame store 40 may be as shown diagrammatically by the full line in Figure 5 of the drawings, which line indicates the pixels for which the threshold was exceeded.
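  • A minimal sketch of this change-detection step, assuming 8-bit greyscale frames held as NumPy arrays and an arbitrarily chosen threshold value, might look as follows:

```python
import numpy as np

def change_mask(frame_a, frame_b, threshold=20):
    """Step 52 as described above: subtract the brightness of corresponding
    pixels in the two most recently stored frames, take the modulus of each
    difference and apply a threshold.  The returned boolean array (one bit
    per pixel, True where the threshold was exceeded) plays the role of the
    image written into frame store 40."""
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff > threshold
```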
  • In order to analyse the image in frame store 40, and thereby determine the approximate areas in the stored images in frame stores 38 and 39 where the person's two eyes are located, processor 34 then addresses columns of pixels or groups of adjacent columns of pixels in store 40 one by one, starting at one side of the image.
  • The object is to detect a column or group of adjacent columns containing a number of pixels for which the threshold was exceeded, which number is greater than a second threshold. It will be evident that if the second threshold is appropriately chosen and the scanning starts at the left-hand side of the image in frame store 40, the first such column or such group of columns encountered will be at the area of column X1 in Figure 5.
  • The second such column or group of columns will be at the area of column X2; the value of X2 is one of those required to ascertain the approximate areas in which the eyes are present and is therefore stored.
  • Similar addressing of columns or groups of columns is then carried out from the other side of the image to ascertain the value of X4, and then similar addressing of rows or groups of rows starting from the top of the image to ascertain the values of Y1 and Y2. Having ascertained these values it is known that the eyes are likely to be located within respective halves of a rectangle 60 which is centred on the coordinates (X4 - X2)/2, (Y1 - Y2)/2.
  • The size of the rectangle 60 may be fixed, or may be made proportional to the values of (X4 - X2) and (Y1 - Y2) if there is a large potential variation in these values.
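  • The column-by-column and row-by-row analysis just described might be sketched as follows; the second threshold, the rule for picking X2, X4, Y1 and Y2, the centring of the rectangle on the midpoint, and the scanning of single columns rather than groups are all assumptions made for the purpose of the illustration:

```python
import numpy as np

def eye_search_rectangle(mask, count_threshold, width, height):
    """Scan the binary change image (the contents of frame store 40) to place
    the eye-search rectangle 60.

    Columns are scanned from the left to find the second column whose count of
    changed pixels exceeds count_threshold (taken here to be X2), from the
    right to find X4, and rows from the top to find Y1 and Y2; the rectangle,
    of the given width and height, is then centred between these values."""
    col_counts = mask.sum(axis=0)
    row_counts = mask.sum(axis=1)

    def nth_exceeding(counts, n):
        hits = [i for i, c in enumerate(counts) if c > count_threshold]
        return hits[n - 1]            # assumes the silhouette is present

    x2 = nth_exceeding(col_counts, 2)                              # from the left
    x4 = len(col_counts) - 1 - nth_exceeding(col_counts[::-1], 2)  # from the right
    y1 = nth_exceeding(row_counts, 1)                              # from the top
    y2 = nth_exceeding(row_counts, 2)
    cx, cy = (x2 + x4) // 2, (y1 + y2) // 2
    return (cx - width // 2, cy - height // 2, cx + width // 2, cy + height // 2)
```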
  • Processor 34 searches within respective halves of a corresponding rectangle applied to the last stored image in store 38 or 39 for the pupils of the left and right eyes of the person present in the image (step 53 in Figure 4). In the present embodiment this is done by first locating the darkest areas within each half of the window and then scanning a template of a pupil of a left eye or a right eye as appropriate over these areas to determine the positions where the correlation coefficient between the respective templates and the image is a maximum. The coordinates of these positions are the coordinates of the pupils of the two eyes. (Each template is preferably derived from the pupils of the corresponding eyes of several people by averaging.)
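  • A sketch of such a template search over one half of rectangle 60 is given below; the particular normalised correlation coefficient used, and the omission of the dark-area pre-selection, are assumptions of the sketch rather than details fixed by the patent:

```python
import numpy as np

def locate_pupil(image, template, region):
    """Scan a pupil template over the given region of the stored image and
    return the (x, y) position at which the correlation coefficient between
    the template and the image patch it covers is a maximum (steps 53 and 55)."""
    x0, y0, x1, y1 = region
    th, tw = template.shape
    t = template.astype(float) - template.mean()
    best, best_xy = -np.inf, None
    for y in range(y0, y1 - th + 1):
        for x in range(x0, x1 - tw + 1):
            patch = image[y:y + th, x:x + tw].astype(float)
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            if denom == 0:
                continue
            corr = (p * t).sum() / denom
            if corr > best:
                best, best_xy = corr, (x + tw // 2, y + th // 2)
    return best_xy
```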
  • Processor 34 then waits (test 54) until storage of another image frame in one of the frame stores 38 and 39 which is currently taking place is completed. When this is the case it searches for the pupils of the eyes in the newly stored image, using correlation between the pupil templates and the image once again, in the areas of the newly stored image which are in the vicinities of the coordinates determined in step 53, it being assumed that movement of the pupils from image frame to image frame is small. In this way updated coordinates are obtained in step 55.
  • Respective warp templates, for instance as shown in the left-hand half of Figure 1(d), are then centred on the pupils in the newly stored image in step 56 and an appropriate warp is applied to the parts of the image within the template boundaries (cf. the transition from the left half of Figure 1(d) to the right half thereof).
  • The image is then outputted on output 42, and steps 55 and 56 are repeated for each new image frame to be stored, the coordinates employed for the search in each repeated step 55 being those determined in the immediately preceding step 55.
  • The size of the warp template of Figure 1(d) used in each step 56 is preferably chosen so that the outer rectangle 1,2,3,4 encompasses the eyelid and eyelashes of the relevant eye but not the eyebrows, and the inner rectangle or square 26 is just large enough to encompass the iris of the relevant eye. If desired, provision may be made for the size of the warp template to be automatically scaled in accordance with the distance between the pupils of the left and right eyes (determined by means of the coordinates found in step 55) so as automatically to take account of variations in the size of the person within the image.
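  • Such scaling might, for instance, tie the template dimensions to the measured interpupillary distance, as in the sketch below; the particular ratios are illustrative assumptions only, and warp_template refers to the earlier sketch:

```python
def scaled_warp_window(pupil_xy, interpupil_px,
                       outer_ratio=(0.55, 0.40), inner_ratio=(0.22, 0.22)):
    """Return (window, inner) rectangles, suitable for warp_template(), centred
    on the pupil and scaled with the measured interpupillary distance, so that
    the outer rectangle roughly covers the eyelids but not the eyebrow and the
    inner square just covers the iris."""
    px, py = pupil_xy
    ow, oh = (int(interpupil_px * r) for r in outer_ratio)
    iw, ih = (int(interpupil_px * r) for r in inner_ratio)
    window = (px - ow // 2, py - oh // 2, px + ow // 2, py + oh // 2)
    inner = (px - iw // 2, py - ih // 2, px + iw // 2, py + ih // 2)
    return window, inner
```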
  • The required amount and direction of translation of the central area 26 of the warp template of Figure 1(d) in each step 56 can obviously be pre-calculated using simple geometry in the case of video-telephone and like applications, if the relative positions of the user, the video camera and the display at each location are known and it is assumed that the user will be looking at the display under normal circumstances.
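  • One way this pre-calculation might be done is sketched below; the small-angle eyeball model, the default eyeball radius and interpupillary distance, and the example figures are assumptions for illustration rather than values given in the patent:

```python
import math

def pupil_shift_pixels(cam_offset_mm, viewing_distance_mm,
                       interpupil_px, eyeball_radius_mm=12.0,
                       interpupil_mm=63.0):
    """Pre-calculate the translation of the inner warp area 26.

    The user is assumed to be looking at the display; the camera sits
    cam_offset_mm to the side of (or above) the point being looked at, at
    roughly the viewing distance viewing_distance_mm from the user.  The gaze
    must therefore be rotated by atan(cam_offset / distance); for a roughly
    spherical eyeball this moves the iris by eyeball_radius * sin(angle) in
    the plane of the face.  Millimetres are converted to pixels using the
    interpupillary distance measured in the image."""
    angle = math.atan2(cam_offset_mm, viewing_distance_mm)
    shift_mm = eyeball_radius_mm * math.sin(angle)
    px_per_mm = interpupil_px / interpupil_mm
    return shift_mm * px_per_mm

# e.g. camera 80 mm above the point being looked at, user 600 mm away,
# eyes 90 pixels apart in the image:
# pupil_shift_pixels(80, 600, 90)  ->  about 2.3 pixels of vertical shift
```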
  • Although it is preferred to employ a spatial warp in which a sub-area, e.g. sub-area 26 in Figure 1(d), is merely translated on going from source image to destination image, the surrounding sub-areas being spatially warped to accommodate the translation, this is not essential.
  • Such an approach may be effected by locating, for each pixel in the destination image (right-hand template of Figure 1(d)), the nearest corresponding pixel in the source image (left-hand template of Figure 1(d)) and assigning the brightness (and colour if appropriate) of the latter pixel to the former.
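  • As a final illustrative sketch, this inverse mapping can be carried out per sub-area using equations (1) and (2) with nearest-neighbour sampling; the quad_pairs argument is the output of the warp_template sketch given earlier, and the restriction to axis-aligned rectangular sub-areas is an assumption of these sketches:

```python
import numpy as np

def _bilinear_coeffs(dst_quad, src_vals):
    """Solve v = a + b*x + d*y + e*x*y through the four destination corners."""
    A = np.array([[1.0, x, y, x * y] for (x, y) in dst_quad], dtype=float)
    return np.linalg.solve(A, np.array(src_vals, dtype=float))

def apply_warp(image, quad_pairs):
    """For every (source_quad, destination_quad) pair produced by
    warp_template(), evaluate the first-order warp of equations (1) and (2)
    at each pixel of the destination sub-area and copy across the value of
    the nearest corresponding source pixel (nearest-neighbour sampling)."""
    out = image.copy()
    for src_quad, dst_quad in quad_pairs:
        coeff_r = _bilinear_coeffs(dst_quad, [y for (x, y) in src_quad])  # source rows
        coeff_c = _bilinear_coeffs(dst_quad, [x for (x, y) in src_quad])  # source columns
        xs = [p[0] for p in dst_quad]
        ys = [p[1] for p in dst_quad]
        for y in range(min(ys), max(ys)):
            for x in range(min(xs), max(xs)):
                vec = np.array([1.0, x, y, x * y])
                ri = int(round(float(coeff_r @ vec)))
                ci = int(round(float(coeff_c @ vec)))
                if 0 <= ri < image.shape[0] and 0 <= ci < image.shape[1]:
                    out[y, x] = image[ri, ci]
    return out
```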

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

In a video telephony system, a caller who is looking at his display screen cannot at the same time be looking at the video camera placed beside the screen. To overcome this problem, the video signal produced by the camera is processed so that the caller's eyes appear to be looking at the camera. Parts of the video signal which represent the eye or eyes of the caller are located and modified so as to move the pupils and irises within the represented image. In addition, these parts of the signal are modified so as to apply a polynomial spatial warp to the area surrounding each iris in the represented image, this warp being such as to cause that area to accommodate the movement of the pupil and iris of the corresponding eye.
PCT/GB1996/001768 1995-07-24 1996-07-23 Traitement de signal video de maniere a modifier une image representee par ledit signal WO1997004420A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA002227626A CA2227626A1 (fr) 1995-07-24 1996-07-23 Traitement de signal video de maniere a modifier une image representee par ledit signal
EP96925030A EP0843869A1 (fr) 1995-07-24 1996-07-23 Traitement de signal video de maniere a modifier une image representee par ledit signal
AU65283/96A AU6528396A (en) 1995-07-24 1996-07-23 Processing a video signal so as to modify an image represented thereby

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP95305199 1995-07-24
EP95305199.2 1995-07-24

Publications (1)

Publication Number Publication Date
WO1997004420A1 (fr) 1997-02-06

Family

ID=8221270

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1996/001768 WO1997004420A1 (fr) 1995-07-24 1996-07-23 Traitement de signal video de maniere a modifier une image representee par ledit signal

Country Status (4)

Country Link
EP (1) EP0843869A1 (fr)
AU (1) AU6528396A (fr)
CA (1) CA2227626A1 (fr)
WO (1) WO1997004420A1 (fr)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992014340A1 (fr) * 1991-01-31 1992-08-20 Siemens Aktiengesellschaft Procede permettant de corriger la direction du regard sur un visiophone

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100590638B1 (ko) * 1997-11-07 2006-09-22 유니레버 엔.브이. 세제조성물
GB2390256A (en) * 2002-06-26 2003-12-31 Hewlett Packard Development Co Correcting the viewing direction of eyes of a subject in an image
GB2390256B (en) * 2002-06-26 2006-03-01 Hewlett Packard Development Co Image correction system and method
US7177449B2 (en) 2002-06-26 2007-02-13 Hewlett-Packard Development Company, L.P. Image correction system and method
EP1657915A1 (fr) * 2004-11-12 2006-05-17 Dialog Semiconductor GmbH Algorithme de conversion à taille optimisée de lignes de pixels vers des blocs de pixels
US7486298B2 (en) 2004-11-12 2009-02-03 Digital Imaging Systems Gmbh Size optimized pixel line to pixel block conversion algorithm

Also Published As

Publication number Publication date
AU6528396A (en) 1997-02-18
CA2227626A1 (fr) 1997-02-06
EP0843869A1 (fr) 1998-05-27

Similar Documents

Publication Publication Date Title
US6686926B1 (en) Image processing system and method for converting two-dimensional images into three-dimensional images
EP1057326B1 (fr) Determination automatique de positions prereglees correspondant aux participants a des video-conferences
US5680531A (en) Animation system which employs scattered data interpolation and discontinuities for limiting interpolation ranges
US6157733A (en) Integration of monocular cues to improve depth perception
JP4698831B2 (ja) 画像変換および符号化技術
CN110049351B (zh) 视频流中人脸变形的方法和装置、电子设备、计算机可读介质
US5057019A (en) Computerized facial identification system
JPH09135447A (ja) 知的符号化/復号方法、特徴点表示方法およびインタラクティブ知的符号化支援装置
KR20010113720A (ko) 영상 렌더링 방법 및 장비
KR20220136196A (ko) 화상 처리 장치, 화상 처리 방법, 이동 장치, 및 저장 매체
KR100411760B1 (ko) 애니메이션 영상 합성 장치 및 방법
US5917494A (en) Two-dimensional image generator of a moving object and a stationary object
JP6549764B1 (ja) 画像投影システム、画像投影方法、及びプログラム
JPH06118349A (ja) 眼鏡装用シミュレーション装置
WO1997004420A1 (fr) Traitement de signal video de maniere a modifier une image representee par ledit signal
JPH03252780A (ja) 特徴量抽出方法
JPH0981746A (ja) 二次元表示画像生成方法
CN113255649B (zh) 一种基于图像识别的图像分割框选方法及终端
JP3577154B2 (ja) 画像処理装置
JPH05250445A (ja) 三次元モデルデータ生成装置
JPH03127284A (ja) カラー画像処理装置
JPH0746582A (ja) 映像切り出し方法
JP2973432B2 (ja) 画像処理方法および装置
KR20020067088A (ko) 3차원 동화상 모델 얼굴 대체 방법 및 장치
KR20040058421A (ko) 양안식 스테레오 영상의 카메라 광축 간격 조절 장치 및그 방법

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1996925030

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2227626

Country of ref document: CA

Ref country code: CA

Ref document number: 2227626

Kind code of ref document: A

Format of ref document f/p: F

WWP Wipo information: published in national office

Ref document number: 1996925030

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWR Wipo information: refused in national office

Ref document number: 1996925030

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1996925030

Country of ref document: EP