CA2227626A1 - Processing a video signal so as to modify an image represented thereby - Google Patents

Processing a video signal so as to modify an image represented thereby

Info

Publication number
CA2227626A1
CA2227626A1
Authority
CA
Canada
Prior art keywords
image
eye
areas
iris
parts
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002227626A
Other languages
French (fr)
Inventor
David John Machin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
British Telecommunications PLC
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of CA2227626A1 publication Critical patent/CA2227626A1/en
Abandoned legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00Indexing scheme for image generation or computer graphics
    • G06T2210/44Morphing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

To overcome the problem in a video telephony system that a caller looking at his display will not at the same time be looking at a video camera positioned beside the display, the video signal produced by the camera is processed so that the caller's eyes appear to be looking at the camera. Parts of the video signal which represent the eye or eyes of the caller are located and modified so as to move the pupils and irises in the represented image. These parts of the signal are moreover modified to apply a polynomial spatial warp to the surroundings of each iris in the represented image, which warp is such as to cause these surroundings to accommodate the movement of the pupil and iris of the corresponding eye.

Description

CA 02227626 1998-01-22  WO 97/04420  PCT/GB96/01768

Processing a Video Signal so as to modify an image represented thereby

This invention relates to a method of processing a video signal representing an image which includes at least one eye of a human face, the processing being such as to change the direction in which the eye(s) appear to be looking in the represented image, the method comprising locating a part or parts of the signal which represent areas of the image whose boundaries window an eye and modifying this part or parts in such manner as to move, within each of the represented areas, the pupil and iris of the corresponding eye relative to the boundary of that area. The invention also relates to a signal processor for carrying out such a method.
Operators of telecommunication networks are increasingly offering video services to their subscribers. Such services may take the form, for example, of video links added to basic telephone links, so that parties to telephone calls between locations can not only hear, but also see, the people with whom they are communicating. If the video displays are to be a reasonable size, for example commensurate with displays provided by conventional non-portable television receivers, a problem arises because a video camera also has to be provided at each location. Because a party to a call will normally be looking at his display he will not also be looking at the camera unless this is effectively or actually located in the same direction as the display. Locating the camera actually in front of the display will obviously cause it to obscure part of the display, and various proposals have been made, for example using semi-transparent mirrors, to enable it to be located effectively rather than actually in front of the display. However, there are drawbacks inherent in all of these proposals and it is usual at present to locate the camera adjacent one edge of the display, resulting in each party to a call appearing to the or each other party as if he is not looking at him, because he is looking at the display rather than the camera.
International Patent application no. WO-A-92/14340 proposes that this situation be artificially corrected by modifying the video signals from each camera in such a way as to change the directions in which people present in the images represented by the signals appear to be looking, more particularly so that they appear to be looking at the camera. What is proposed in WO-A-92/14340 appears to be a method of the general kind set forth in the first paragraph of the present document. In the proposed method it appears that the parts of each video signal which represent the whites of the eyes of a person represented are located and, within these, the parts which represent the pupils (and possibly the irises as well) of the eyes. The latter parts are apparently moved within the video signal so as to take up new positions within the former parts so that, in the represented image, the pupils (and irises) adopt changed positions within the whites of the eyes.
Changing the positions of the pupils (and irises) in this way results, of course, in the creation of voids at positions which have been vacated, and these are filled in some unspecified manner with the colour of the whites of the eyes.
A disadvantage, recognised in WO-A-92/14340, with such a method is that discontinuities are produced in the image represented by the video signal, which discontinuities have to be smoothed. It can also be a disadvantage in some circumstances that, with such a method, it is only possible to displace the pupils and irises in a horizontal direction (assuming the face is vertical) because that is the only direction in which whites of the eyes are normally visible. It is an object of the present invention to enable these disadvantages to be mitigated.
According to one aspect of the present invention a method of processing a video signal representing an image which includes at least one eye of a human face, the processing being such as to change the direction in which the eye(s) appear to be looking in the represented image, comprises locating a part or parts of the signal which represent areas of the image whose boundaries window an eye and modifying this part or parts in such manner as to move, within each of the represented areas, the pupil and iris of the corresponding eye relative to the boundary of that area, and is characterised in that said parts are moreover modified in such manner as to apply, within each of the represented areas, a polynomial spatial warp to the surroundings of the iris of the corresponding eye, which spatial warp is such as to cause these surroundings to accommodate the movement of the iris and pupil of the corresponding eye.
Although applying a polynomial spatial warp to an area of an image will inevitably give rise to some distortion, it will not give rise to discontinuities within that area. Moreover, if the warp is suitably chosen, it will not give rise to discontinuities within the image as a whole.
In order to reduce the visibility of the warp in the image represented by the modified video signal said parts are preferably modified in such a manner that a polynomial spatial warp is not applied to the represented pupil and iris of each eye.
Conveniently each successive image frame of the video signal is temporarily stored in a frame store, templates of the pupils of left and right eyes are scanned over respective regions of the stored image frame, the difference between each template and the region of the stored image frame which it covers is determined for each position of that template, and the locations of said first and second areas for the corresponding image frame are determined by the positions of the two templates for which the corresponding said difference is smallest. The processing may then be adapted to the size of the face in the image by scaling the sizes of the first and second areas in accordance with the separation between said positions of the two templates for the corresponding image frame. Moreover, in order to make use of information obtained for each frame in the processing of the next frame, for each stored image frame the locations of said respective regions may be determined by the positions of the two templates which determined the locations of the first and second areas for the immediately preceding stored image frame.
It should be noted that using templates, albeit of complete eyes, to locate the areas of eyes in successive image frames, and also using the area located in each frame to determine the area searched in the next frame, is disclosed in a paper "Real-Time Facial-Feature Tracking Based on Matching Techniques and its Applications" by Sako et al in the Proceedings of the 12th IAPR International Conference on Pattern Recognition, Jerusalem, 6-13 October 1994 at pages 320-324. Other ways of locating the parts of the image may be used instead.
According to another aspect the invention provides apparatus for processing a video signal representing an image which includes at least one eye of a human face so as to change the direction in which the eye(s) appear to be looking in the represented image, the apparatus comprising locating means for locating parts of the signal which represent areas of the image whose boundaries window an eye and modifying means for modifying these parts in such manner as to move, within each of the represented areas, the pupil and iris of the corresponding eye relative to the boundary of that area, characterised in that the modifying means is moreover arranged to modify said parts in such manner as to apply, within each of the represented areas, a polynomial spatial warp to the surroundings of the iris of the corresponding eye, which spatial warp is such as to cause these surroundings to accommodate the movement of the iris and pupil of the corresponding eye.
Embodiments of the invention will now be described, by way of example only, with reference to the accompanying diagrammatic drawings in which:
Figure 1 (a), Figure 1 (b), Figure 1 (c) and Figure 1 (d) illustrate some examples of polynomial spatial warp which may be applied within a rectangular window,
Figure 2 shows how the window of Figure 1 can be centred on a human eye in an image,
Figure 3 is a block diagram of a video signal processing apparatus,
Figure 4 is a flow diagram of various operations carried out by the apparatus of Figure 3, and
Figure 5 illustrates how the locations of areas containing the eyes may be obtained by processing a head and shoulders silhouette.
Spatial warps are discussed in many publications, for example at pages 183-185 of "Practical Computer Vision Using C" by J R Parker (John Wiley & Sons) and at pages 296-302 of "Digital Image Processing" by R C Gonzalez and R E Woods (Addison-Wesley Publishing Company). A spatial warp is a mapping between positions in a source image and positions in a destination image, which mapping is defined by a general mapping function. Such a function may be written as:

r = G_r(x, y)
c = G_c(x, y)

where (r, c) are the new coordinates and (x, y) the original ones. In practice so-called polynomial warps are normally used, these having the general form:

G_r(x, y) = Σ_{i=0..n} Σ_{j=0..n} a_ij x^i y^j
G_c(x, y) = Σ_{i=0..n} Σ_{j=0..n} b_ij x^i y^j

where n is the order of the warp. Thus, for example, if a first order polynomial spatial warp is employed, r and c are given by:

r = ax + by + dxy + e    ... (1)
c = fx + gy + hxy + k    ... (2)

where a, b, d, e, f, g, h and k are constants.
Since polynomial warps r and c are continuous functions of x and y, application of such a warp to an image or image portion can be likened to "printing" the image or image portion on a sheet of rubber and then stretching this sheet according to some predetermined set of rules. Such stretching will not in general result in the production of discontinuities in the image or image portion.
The values of the constants in the expressions for r and c can be calculated provided that a sufficient number of control or tie points are known, i.e.
points or pixels in the source and destination images which correspond to each other. If a first order warp is employed, so that r and c are given by equations (1) and (2) above, knowledge of four control or tie points will enable the eight constants a,b,d,e,f,g,h and k to be calculated by insertion of the values of (x,y) and (r,c) for each point into the equations in turn, thereby producing eight equations in the eight unknown constants. (If a warp of order two is employed, nine control points will be required, sixteen for a warp of order three, and so on).
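The calculation described above amounts to solving two small linear systems, one for (a, b, d, e) and one for (f, g, h, k). A minimal sketch in Python (illustrative only; the helper names are hypothetical and no particular implementation is specified in the document):

```python
def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # pivot: pick the remaining row with the largest entry in this column
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def first_order_warp_coeffs(src_pts, dst_pts):
    """Fit r = a*x + b*y + d*x*y + e (equation (1)) and the analogous
    equation (2) for c to four tie-point pairs.

    src_pts: four (x, y) source points; dst_pts: four (r, c) destination points.
    Returns ((a, b, d, e), (f, g, h, k))."""
    A = [[x, y, x * y, 1.0] for (x, y) in src_pts]
    rs = [r for (r, c) in dst_pts]
    cs = [c for (r, c) in dst_pts]
    return tuple(solve_linear(A, rs)), tuple(solve_linear(A, cs))
```

With four tie-points a first-order warp is exactly determined; with more tie-points the same design matrix could instead be solved in a least-squares sense.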
In the course of the following description only first-order warps will be considered although it is to be understood that higher-order warps may be employed in implementing the invention, if desired.
Consider a quadrilateral which has a tie-point at each of its corners in the source image. In general, with a first-order warp, positions within the quadrilateral will be mapped to positions within the quadrilateral which has the four tie-points at its corners in the destination image. If a polynomial warp is required to be applied to an image or image portion having a rectilinear fixed boundary, the area within the boundary has to be divided into sub-areas in such a way that at least those sub-areas which are bounded in part by the area boundary are quadrilateral in form (so that the area boundary is defined in full in both the source and the destination image portion). Moreover, if a polynomial warp is applied to only a portion of an image, then if discontinuities are not to occur at the boundary between that portion and the neighbouring portion of the image the relationship between the tie-points which define the boundary must be the same in both the source and destination images.
Some examples of polynomial warps will now be illustrated with reference to Figure 1 of the drawings.
Figures 1 (a) - 1 (d) each relate to the application of a first-order polynomial spatial warp within a fixed rectangular window having corners 1, 2, 3 and 4.
Figure 1 (a) shows a very simple example in which the area within the window is divided into two sub-areas 10, 11, having tie-points 1,4,5,6 and 2,3,5,6 respectively. The warp produced by displacement, as illustrated, of the tie-points 5 and 6 by equal amounts in the same direction, from 5a to 5b and 6a to 6b, is in fact a linear contraction of sub-area 10 in the X-direction and a compensating linear expansion of sub-area 11. (Of course it is not essential that tie-points 5 and 6 be displaced by equal amounts, or even in the same direction; to do otherwise would simply result in a more complicated warp).
Figure 1 (b) shows a similar example in which tie-points 7,8 are displaced in the Y-direction from 7a, 8a to 7b, 8b, resulting in linear contraction of sub-area 12 and compensating linear expansion of sub-area 13.
Figure 1 (c) shows what is effectively a combination of the procedures shown in Figures 1 (a) and 1 (b). The area inside the window is in this case divided into four sub-areas 14, 15, 16, 17 having tie-points 1,5,7,9; 2,5,8,9; 3,6,8,9 and 4,6,7,9 respectively, where 9 is the point of intersection of straight lines joining points 7,8 and 5,6. With displacements of the tie-points shown in Figure 1 (c), sub-area 14 undergoes a linear contraction in both the X and Y directions, sub-area 15 undergoes a linear contraction in the Y direction, sub-area 16 undergoes a linear expansion in both the X and Y directions, and sub-area 17 undergoes a linear expansion in the Y direction and a linear contraction in the X direction. (It is not essential, of course, that point 9 lies on straight lines joining points 7,8 and 5,6 either before or after displacement; if it does not then a more complicated warp will result.) Figure 1 (d) illustrates a warp which results in sub-area 26 within the rectangle 1,2,3,4 being translated intact when the warp is imposed. In this case the absolute locations of the tie-points 18,19,20,21 are changed but their relative locations remain the same, so that a warp is imposed on the sub-areas 22,23,24,25 whereas a simple translation occurs of the sub-area 26.
If the rectangle 1,2,3,4 of the left-hand column of Figure 1 windows a portion of an image containing a human eye, e.g. as shown in Figure 2, the eye having a pupil 27, an iris 28, and eyelids 29 and 30, and is centred on the pupil of the eye, then the pupil can be moved within the rectangle in the X direction by imposing a warp of the kind illustrated in Figure 1 (a), in the Y direction by imposing a warp of the kind illustrated in Figure 1 (b), or in both the X and Y directions by imposing warps of the kinds illustrated in Figures 1 (c) and 1 (d). Thus the direction in which the eye appears to be looking can be changed so that, for example, it appears to be looking at a video camera which is generating a signal representing the image when it is in fact looking elsewhere, for example at a video display.
Inevitably some non-realistic distortion will occur to the image portion within the rectangle when the warp is imposed. This will usually be less noticeable if a warp is used which includes the mere translation of a sub-area intact within the rectangle, e.g. as illustrated in Figure 1 (d), because this sub-area can be chosen to window the pupil 27 and iris 28, thereby avoiding distortion of the circular shape of the iris.
Figure 3 is a block diagram of apparatus which may be used for carrying out a method in accordance with the invention. Only those parts of the apparatus which are particularly relevant for the purposes of the present description are indicated.
The apparatus of Figure 3 comprises a video camera 31 which has a video signal output 32 and a synchronising signal input 33, and a programmed video signal processor 34 which includes a central processing unit (CPU) 35, memory 36, and an analogue-to-digital (A/D) converter 37. The memory 36 includes three video signal frame stores 38, 39 and 40. The processor 34 has a video signal input 41 which is fed from the video signal output 32 of camera 31, a video signal output 42, and a (line and field) synchronising signal output 43 which is connected to the synchronising signal input 33 of camera 31.
The processor 34 is basically programmed to write in an ongoing fashion the digitised video signal generated by camera 31 corresponding to successive frames of the image picked up by the camera alternately into frame stores 38 and 39, and to read out the contents of each location of each of the stores 38 and 39 onto the output 42 just before it is newly written each time. Thus a complete frame of image information is present alternately in the frame store 38 and the frame store 39 for processing, each time for the duration of one line period before it is read out, and the processor 34 imparts a delay of two frame periods between its input 41 and its output 42.
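The alternating write/read arrangement can be sketched as a simple ping-pong buffer. This is only an illustration of the resulting two-frame delay, not the document's actual hardware arrangement (the class name is hypothetical):

```python
class PingPongFrameStore:
    """Alternate incoming frames between two stores. Reading a store's old
    contents just before overwriting it yields a two-frame delay, and each
    frame stays resident for a full frame period to be processed in place."""

    def __init__(self):
        self.stores = [None, None]
        self.write_index = 0

    def put(self, frame):
        # the frame displaced from this store is the one written two frames ago
        delayed = self.stores[self.write_index]
        self.stores[self.write_index] = frame
        self.write_index ^= 1  # alternate between the two stores
        return delayed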
The processor 34 is furthermore programmed to initiate the sequence of steps shown in Figure 4 of the drawings whenever instructed, e.g. upon the entering of an appropriate command. In Figure 4 the various blocks etc. have the following significances:

50 - Start
51 - Has a complete image frame been newly written into one of the frame stores 38 and 39?
52 - Determine the head-and-shoulders silhouette of a person present in the newly-written image frame and hence the approximate areas of the stored image where the person's two eyes are located.
53 - Search the approximate areas of the stored image determined in step 52 for the pupils of the two eyes and thereby determine the coordinates of the pupils in the stored image.
54 - Has a further complete image frame been newly written into one of the frame stores 38 and 39?
55 - Search for the pupils of the two eyes in the areas in the newly-written image around the last-determined pupil coordinates and thereby determine the coordinates of the pupils in the newly stored image.
56 - Centre a respective warp template on each set of coordinates in the newly-written image which were determined in step 55 and apply spatial warps to areas of the newly-written image which lie within the template.

After being instructed to initiate the sequence of steps shown in Figure 4, processor 34 has two basic tasks: to determine the coordinates of the pupils of the eyes of a person included in a newly-stored image frame and to impart spatial warps to areas of the stored image within which the pupils are located. It first of all waits, by means of test 51, until the storage of an image frame in one of the frame stores 38 and 39 which is currently taking place is completed. When this is the case it carries out processing steps 52 and 53 to determine the coordinates of the pupils of the eyes of a person included in the newly-stored image frame. In the present embodiment it is assumed that the scenario depicted in the stored image has certain pre-determined characteristics, more particularly that the image is of the head and the upper part of the body of a person who is not looking straight ahead, as could be arranged to occur in a video-telephone application in which a seated person faces a display screen to the side of which is provided the camera 31 of Figure 3 to produce an image of the seated person. This assumption is made because the processing required in a search for the pupils in the image is considerable, and becomes well-nigh prohibitive if the complete image has to be searched. If the search can be initially narrowed to relatively small areas of the image the problem is considerably alleviated.
In order to determine first the approximate areas of the stored image in which the eyes are present processor 34 determines in step 52 the head-and-shoulders silhouette of the person. In order to do this in the present embodiment use is made of the fact that a person is never completely still, so that the exact position of his silhouette will change from frame to frame. Accordingly, processor 34 subtracts the brightness information for each pixel in one of the stores 38 and 39 from the corresponding information in the other of these stores, subjects the moduli of the results to a threshold, and writes a single bit corresponding to each pixel into the corresponding location in frame store 40, the single bit indicating whether or not the threshold was exceeded for the corresponding pixel. The resulting image in frame store 40 may be as is shown diagrammatically as a full line in Figure 5 of the drawings, which line indicates the pixels for which the threshold was exceeded.
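The frame-differencing step can be sketched as follows, assuming grayscale frames held as lists of rows (the function name and the threshold value are illustrative, not from the document):

```python
def motion_mask(frame_a, frame_b, threshold):
    """Per-pixel modulus of the difference of two grayscale frames,
    thresholded to a single bit per pixel, as in the silhouette step."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]
```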
In order to analyse the image in frame store 40, and thereby determine the approximate areas in the stored images in frame stores 38 and 39 where the person's two eyes are located, processor 34 then addresses columns of pixels or groups of adjacent columns of pixels in store 40 one by one, starting at one side of the image. The object is to detect a column or group of adjacent columns containing a number of pixels for which the threshold was exceeded, which number is greater than a second threshold. It will be evident that if the second threshold is appropriately chosen and the scanning starts at the left-hand side of the image in frame store 40, the first such column or such group of columns encountered will be at the area of column X1 in Figure 5. After this area is passed the second such column or group of columns will be at the area of column X2; the value of X2 is one of those required to ascertain the approximate areas in which the eyes are present and is therefore stored. Similar addressing of columns or groups of columns is then carried out from the other side of the image to ascertain the value of X4 and then similar addressing of rows or groups of rows starting from the top of the image to ascertain the values of Y1 and Y2. Having ascertained these values then it is known that the eyes are likely to be located within respective halves of a rectangle 60 which is centred on the coordinates ((X4 - X2)/2, (Y1 - Y2)/2). The size of the rectangle 60 may be fixed, or may be made proportional to the values of (X4 - X2) and (Y1 - Y2) if there is a large potential variation in these values.
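The column-scanning step may be sketched as below. The function returns the starting columns of runs whose per-column count of changed pixels exceeds the second threshold, so that, scanning from the left, the first and second runs found correspond to columns X1 and X2 of Figure 5 (the representation and function name are illustrative):

```python
def active_runs(mask, count_threshold):
    """Starting columns of runs where the per-column count of set pixels in
    a binary mask (list of rows) exceeds count_threshold, left to right."""
    counts = [sum(col) for col in zip(*mask)]  # set-pixel count per column
    runs, in_run = [], False
    for x, c in enumerate(counts):
        if c > count_threshold and not in_run:
            runs.append(x)          # a new run begins: X1, X2, ... in the text
            in_run = True
        elif c <= count_threshold:
            in_run = False
    return runs
```

Applying the same function to the transposed mask would yield the row values Y1 and Y2 in the same way.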
Having ascertained where the boundaries of the rectangle 60 lie, processor 34 then searches within respective halves of a corresponding rectangle applied to the last stored image in store 38 or 39 for the pupils of the left and right eyes of the person present in the image (step 53 in Figure 4). In the present embodiment this is done by first locating the darkest areas within each half of the window and then scanning a template of a pupil of a left eye or a right eye as appropriate over these areas to determine the positions where the correlation coefficient between the respective templates and the image is a maximum. The coordinates of these positions are the coordinates of the pupils of the two eyes. (Each template is preferably derived from the pupils of the corresponding eyes of several people by averaging.)
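The correlation-based template search can be illustrated with a plain Pearson correlation-coefficient scan. This is a generic sketch, not the document's implementation, and it omits the preliminary restriction to the darkest areas:

```python
def correlation(patch, template):
    """Pearson correlation coefficient between two equal-sized patches."""
    p = [v for row in patch for v in row]
    t = [v for row in template for v in row]
    n = len(p)
    mp, mt = sum(p) / n, sum(t) / n
    num = sum((a - mp) * (b - mt) for a, b in zip(p, t))
    den = (sum((a - mp) ** 2 for a in p) *
           sum((b - mt) ** 2 for b in t)) ** 0.5
    return num / den if den else 0.0

def best_match(image, template):
    """Slide template over image; return (row, col) of maximum correlation."""
    th, tw = len(template), len(template[0])
    best, best_pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = correlation(patch, template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

In practice the scan would be confined to the candidate areas found in step 52, since an exhaustive scan of the full frame is what the text describes as near-prohibitive.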
Processor 34 then waits (test 54) until storage of another image frame in one of the frame stores 38 and 39 which is currently taking place is completed.
When this is the case it searches for the pupils of the eyes in the newly stored image, using correlation between the pupil templates and the image once again, in the areas of the newly stored image which are in the vicinities of the coordinates determined in step 53, it being assumed that movement of the pupils from image frame to image frame is small. In this way updated coordinates are obtained in step 55.
Respective warp templates, for instance as shown in the left-hand half of Figure 1 (d), are then centred on the pupils in the newly stored image in step 56 and an appropriate warp is applied to the parts of the image within the template boundaries (c.f. the transition from the left half of Figure 1 (d) to the right half thereof). The image is then outputted on output 42 and steps 55 and 56 are repeated for each new image frame to be stored, the coordinates employed for the search in each repeated step 55 being those determined in the immediately preceding step 55.
The size of the warp template of Figure 1 (d) used in each step 56 is preferably chosen so that the outer rectangle 1,2,3,4 encompasses the eyelid and eyelashes of the relevant eye but not the eyebrows, and the inner rectangle or square 26 is just large enough to encompass the iris of the relevant eye. If desired, provision may be made for the size of the warp template to be automatically scaled in accordance with the distance between the pupils of the left and right eyes (determined by means of the coordinates found in step 55) so as automatically to take account of variations in the size of the person within the image.
The required amount and direction of translation of the central area 26 of the warp template of Figure 1 (d) in each step 56 can obviously be pre-calculated using simple geometry in the case of video-telephone and like applications if the relative positions of the user, the video camera and the display at each location are known and it is assumed that the user will be looking at the display under normal circumstances.
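One plausible form of this geometric pre-calculation (the specific model below is an assumption, not stated in the document): the eye must rotate through the angle subtended at the user between display and camera, and for a roughly spherical eyeball the pupil's image then translates by about the eyeball's apparent radius times the sine of that angle.

```python
import math

def pupil_shift_pixels(camera_offset_m, viewing_distance_m, eyeball_radius_px):
    """Estimate how far (in pixels) to translate the pupil so that an eye
    fixated on the display appears to fixate the camera.

    Assumed model: the eye rotates by theta = atan(offset / distance), and
    the pupil's image moves by roughly eyeball_radius_px * sin(theta)."""
    theta = math.atan2(camera_offset_m, viewing_distance_m)
    return eyeball_radius_px * math.sin(theta)
```

For a camera mounted 12 cm beside the display and a user 60 cm away, theta is about 11 degrees, a shift of roughly a fifth of the eyeball's apparent radius.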

Although it is preferred in each step 56 to employ a spatial warp in which a sub-area, e.g. sub-area 26 in Figure 1 (d), is merely translated on going from source image to destination image, the surrounding sub-areas being spatially warped to accommodate the translation, this is not essential. A warp of the kind shown in Figure 1 (c) or even Figure 1 (a) or Figure 1 (b) has been found to give satisfactory results in some circumstances. It should be noted however that warps of the kind illustrated in Figures 1 (a) to 1 (c) give rise to "tearing" of the area of the image within the rectangle 1,2,3,4 with respect to the area of the image outside this rectangle; this "tearing" can sometimes be aesthetically unacceptable.
Referring once again to equations (1) and (2) above, it will be appreciated that insertion of integer values for x and y will not in general yield integer values for r and c, which means that in, for example, Figure 1 (d) pixels in the source image (left-hand template of Figure 1 (d)) will not map exactly onto pixels in the destination image (right-hand template of Figure 1 (d)). Some form of interpolation or the like has therefore to be employed in the steps 56 of Figure 4 in order to estimate the brightness (and colour if employed) of the actual pixels in the parts of the image to which a warp has been applied. It has been found that in many circumstances a "nearest-neighbour" approach to this is all that is required. Such an approach may be effected by locating, for each pixel in the destination image (right-hand template of Figure 1 (d)), the nearest corresponding pixel in the source image (left-hand template of Figure 1 (d)) and assigning the brightness (and colour if appropriate) of the latter pixel to the former. In other words, instead of mapping each pixel (x,y) in the source image to a position (r,c) in the destination image by means of equations (1) and (2), these equations are used to map each pixel (x,y) in the destination image to a position (r,c) in the source image, the coordinates r and c of this position each being rounded to the nearest whole number and the pixel in the source image whose coordinates are the result having its brightness (and colour if used) assigned to the pixel in the destination image.
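The inverse-mapping, nearest-neighbour procedure of the last sentence can be sketched directly. In this illustrative Python (not the document's implementation), G_r and G_c are the destination-to-source mapping functions; the handling of positions that map outside the source is not specified in the document, so they are simply left at zero here:

```python
def apply_warp_nearest(src, G_r, G_c):
    """Inverse-map each destination pixel (x, y) through the warp functions
    and copy the brightness of the nearest source pixel (row r, column c)."""
    h, w = len(src), len(src[0])
    dst = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            r = int(round(G_r(x, y)))   # round to the nearest source pixel
            c = int(round(G_c(x, y)))
            if 0 <= r < h and 0 <= c < w:
                dst[y][x] = src[r][c]
    return dst
```

A pure translation warp, for instance G_r(x, y) = y and G_c(x, y) = x - 1, shifts the image one pixel to the right under this scheme.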

Claims (6)

1. A method of processing a video signal representing an image which includes at least one eye of a human face, the processing being such as to change the direction in which the eye(s) appear to be looking in the represented image, the method comprising locating a part or parts of the signal which represent areas of the image whose boundaries window an eye and modifying this part or parts in such manner as to move, within each of the represented areas, the pupil and iris of the corresponding eye relative to the boundary of that area, characterised in that said part or parts are moreover modified in such manner as to apply, within each of the represented areas, a polynomial spatial warp to the surroundings of the iris of the corresponding eye, which spatial warp is such as to cause these surroundings to accommodate the movement of the iris and pupil of the corresponding eye.
2. A method as claimed in Claim 1 wherein said parts are modified in such a manner that a polynomial spatial warp is not applied to the represented pupil and iris of each eye.
3. A method as claimed in Claim 1 or Claim 2 wherein each successive image frame of the video signal is temporarily stored in a frame store, templates of the pupils of left and right eyes are scanned over respective regions of the stored image frame, the difference between each template and the region of the stored image frame which it covers is determined for each position of that template, and the locations of said first and second areas for the corresponding image frame are determined by the positions of the two templates for which the corresponding said difference is smallest.
4. A method as claimed in Claim 3 wherein the sizes of the first and second areas are scaled in accordance with the separation between said positions of the two templates for the corresponding image frame.
5. A method as claimed in Claim 3 or Claim 4 wherein, for each stored image frame, the locations of said respective regions are determined by the positions of the two templates which determined the locations of the first and second areas for the immediately preceding stored image frame.
6. Apparatus for processing a video signal representing an image which includes at least one eye of a human face so as to change the direction in which the eye(s) appear to be looking in the represented image, the apparatus comprising locating means for locating a part or parts of the signal which represent areas of the image whose boundaries window an eye and modifying means for modifying these parts in such manner as to move, within each of the represented areas, the pupil and iris of the corresponding eye relative to the boundary of that area, characterised in that the modifying means is moreover arranged to modify said parts in such manner as to apply, within each of the represented areas, a polynomial spatial warp to the surroundings of the iris of the corresponding eye, which spatial warp is such as to cause these surroundings to accommodate the movement of the iris and pupil of the corresponding eye.
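The template search of Claim 3 can be sketched as follows: a pupil template is slid over a search region of the stored frame, the difference between the template and the pixels it covers is measured at each position, and the position giving the smallest difference locates the eye. The sum-of-absolute-differences measure and all names below are illustrative assumptions; the claim does not specify the difference metric.

```python
def match_template(frame, template, region):
    """Return the (row, col) within `region` where `template` best matches
    `frame`. `region` is (r0, r1, c0, c1), bounding the top-left corner
    positions to try (r0 <= r < r1, c0 <= c < c1)."""
    t_h, t_w = len(template), len(template[0])
    r0, r1, c0, c1 = region
    best_pos, best_diff = None, None
    for r in range(r0, r1):
        for c in range(c0, c1):
            # Sum of absolute differences between the template and the
            # patch of the stored frame it currently covers.
            diff = sum(
                abs(template[i][j] - frame[r + i][c + j])
                for i in range(t_h)
                for j in range(t_w)
            )
            if best_diff is None or diff < best_diff:
                best_pos, best_diff = (r, c), diff
    return best_pos
```

Per Claims 4 and 5, the two best-match positions found this way would then set the scale of the eye areas for the current frame and seed the search regions for the next frame.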
CA002227626A 1995-07-24 1996-07-23 Processing a video signal so as to modify an image represented thereby Abandoned CA2227626A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP95305199 1995-07-24
EP95305199.2 1995-07-24
PCT/GB1996/001768 WO1997004420A1 (en) 1995-07-24 1996-07-23 Processing a video signal so as to modify an image represented thereby

Publications (1)

Publication Number Publication Date
CA2227626A1 true CA2227626A1 (en) 1997-02-06

Family

ID=8221270

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002227626A Abandoned CA2227626A1 (en) 1995-07-24 1996-07-23 Processing a video signal so as to modify an image represented thereby

Country Status (4)

Country Link
EP (1) EP0843869A1 (en)
AU (1) AU6528396A (en)
CA (1) CA2227626A1 (en)
WO (1) WO1997004420A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0916334A1 (en) * 1997-11-07 1999-05-19 Unilever Plc Detergent composition
US7177449B2 (en) 2002-06-26 2007-02-13 Hewlett-Packard Development Company, L.P. Image correction system and method
EP1657915A1 (en) 2004-11-12 2006-05-17 Dialog Semiconductor GmbH Sized optimized pixel line to pixel block conversion algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4102895C1 (en) * 1991-01-31 1992-01-30 Siemens Ag, 8000 Muenchen, De

Also Published As

Publication number Publication date
AU6528396A (en) 1997-02-18
EP0843869A1 (en) 1998-05-27
WO1997004420A1 (en) 1997-02-06

Similar Documents

Publication Publication Date Title
US6686926B1 (en) Image processing system and method for converting two-dimensional images into three-dimensional images
JP4698831B2 (en) Image conversion and coding technology
US7295699B2 (en) Image processing system, program, information storage medium, and image processing method
US5680531A (en) Animation system which employs scattered data interpolation and discontinuities for limiting interpolation ranges
US5057019A (en) Computerized facial identification system
JPH05143709A (en) Video effect device
KR20010113720A (en) Image rendering method and apparatus
JPH09135447A (en) Intelligent encoding/decoding method, feature point display method and interactive intelligent encoding supporting device
US6400832B1 (en) Processing image data
KR100411760B1 (en) Apparatus and method for an animation image synthesis
KR20220136196A (en) Image processing device, image processing method, moving device, and storage medium
JP3538263B2 (en) Image generation method
JP6549764B1 (en) IMAGE PROJECTION SYSTEM, IMAGE PROJECTION METHOD, AND PROGRAM
JPH06118349A (en) Spectacles fitting simulation device
CA2227626A1 (en) Processing a video signal so as to modify an image represented thereby
GB1605135A (en) Variable image display apparatus
JPH03252780A (en) Feature quantity extracting method
JPH0981746A (en) Two-dimensional display image generating method
JPH0863615A (en) Method for converting two-dimensional image into three-dimensional image
JPH05250445A (en) Three-dimensional model data generating device
JP2973432B2 (en) Image processing method and apparatus
GB2342026A (en) Graphics and image processing system
JP3009934B2 (en) Color adjustment method and color adjustment device
JP2591337B2 (en) Moving image generation method and motion vector calculation method
JPWO2020066008A1 (en) Image data output device, content creation device, content playback device, image data output method, content creation method, and content playback method

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued

Effective date: 20021008