WO1996029829A1 - Improvements in image compositing - Google Patents

Improvements in image compositing

Info

Publication number
WO1996029829A1
Authority
WO
WIPO (PCT)
Prior art keywords
colour
rejection
acceptance
image
pixel
Prior art date
Application number
PCT/AU1996/000156
Other languages
French (fr)
Inventor
Christopher Alan Bone
Christopher Leslie Godfrey
Michael Joseph Murray
Original Assignee
Animal Logic Research Pty. Limited
Priority date
Filing date
Publication date
Application filed by Animal Logic Research Pty. Limited
Publication of WO1996029829A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/74Circuits for processing colour signals for obtaining special effects
    • H04N9/75Chroma key

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Processing Of Color Television Signals (AREA)

Abstract

Methods and apparatus relating to the determination of a key colour value, k, of image pixels for a matte for use in image compositing, for arranging a colour space for the generation of the matte, for generating the matte and for forming a composited image are disclosed. A plurality of rejection key colour points (30) are defined in the colour space (10). The rejection points (30) can form discrete galaxies of bunched points in the colour space (10). A plurality of acceptance colour points (40) also are defined in the colour space (10), and similarly can form separate galaxies. For any image point (62) under consideration, the location of the rejection region boundary (k = 0) must be determined depending upon a rejection indice, and the distance (r) to the boundary from the rejection point (30₁) nearest to the image point (62) determined. The boundary of the nearest acceptance region (k = 1) also must be determined, together with the distance (a₁) from the image point (62) to the nearest acceptance point (40₁). Similarly, the distances (x₁, y₁) from the image point (62) to the nearest rejection point (30₁) and acceptance point (40₁) respectively must be determined. The colour value, k₁, is determined as x₁/(x₁ + y₁) or 1 - y₁/(x₁ + y₁).

Description

Improvements in Image Compositing
Field of the Invention
This invention relates to improvements in image compositing, and particularly to improvements in the generation of mattes and the recovery of original colour in composited images.
Background of the Invention
Image compositing is the technique of overlaying a foreground image on a background image to form a composite image, and is used commonly in the production of motion pictures, broadcast television and recorded video programmes. Generation of a composite image is known as "keying". Keying involves generation of a matte from an observed (eg. filmed or sampled) image, representing the desired foreground image extracted from its surroundings. Image compositing then merges the foreground image with the background image using the matte.
To give an example, consider the situation where the desired background image is a view of city buildings shot on a miniaturised set less than a metre in height. The desired foreground image is a person of normal size, and yet the required assembled or composited image must be of a normal size person against a normal size city. This means that the foreground image of the person must be filmed (with the correct framing) separately from filming of the city. To generate the matte of the foreground image, the person is filmed against a colour not found in any portion of the person's image. This colour is called the key colour. The key colour often will be chosen to be blue, however the actual colour depends upon what colours appear in the foreground image. For example, blue would not be used as the key colour if the person is wearing blue jeans; rather, green could be substituted. In the keying of an image to generate a matte, an image rejection area firstly is defined corresponding to the key colour. Image colour values falling within the rejection area are rejected from the matte. An image acceptance area can also be defined. Image colour values falling within the acceptance area are accepted in the matte. The rejection area and the acceptance area are separated by the roll-off. The roll-off is a blend between foreground and background. In the rejection area, when the composited image is formed, all of the background image shows through. In the acceptance area all of the foreground image minus the key colour shows through. The matte then is used to merge the foreground image onto the background image to form the composited image.
A key colour value, k, typically is defined, and has the value k = 0 for any pixel in the rejection area, k = 1 for any pixel in the acceptance area, and 0 < k < 1 for any pixel in the roll-off.
In the prior art the key colour will in practice not simply be a pure colour, such as would be represented by a point in three-dimensional colour space, but rather is more in the nature of a small mass of colour which is approximated by a small cube (ie. a key colour volume) in the colour space. In one example the colour space can be described in a cartesian coordinate system such as RGB (Red/Green/Blue) or YIQ. The rejection area can be thought of as a larger cube enveloping the key colour volume, and often is termed the clip volume. Any image pixel having a colour falling within the clip volume is rejected. Of course, the clip volume can be collapsed onto the key colour volume if desired. The acceptance area can be thought of as the region in colour space outside a yet larger cube circumscribing the clip volume. Thus any image pixel having colour coordinates outside this larger volume is accepted. The roll-off is the space between the clip volume and the larger volume circumscribing the clip volume.
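By way of illustration only (this sketch is not part of the patent text), the prior art nested-cube arrangement described above can be expressed as a simple membership test. The parameterisation of the cubes by a centre and half-widths, and the linear blend across the roll-off, are assumptions made solely for the sketch.

def prior_art_key_value(pixel_rgb, key_centre, clip_half_width, accept_half_width):
    """Prior-art style classification against nested axis-aligned cubes around
    the key colour (hypothetical parameterisation: cube centre plus half-widths).
    Inside the clip cube gives k = 0; outside the larger cube gives k = 1;
    otherwise the pixel lies in the roll-off and k is taken as a linear blend
    on the largest per-axis distance (one plausible choice only)."""
    d = max(abs(c - p) for c, p in zip(key_centre, pixel_rgb))  # Chebyshev distance
    if d <= clip_half_width:
        return 0.0
    if d >= accept_half_width:
        return 1.0
    return (d - clip_half_width) / (accept_half_width - clip_half_width)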
The often problematic soft blend of colours between the acceptance area and the rejection area is called the roll-off, as noted above. The colours falling in the roll-off equate to the mixing or blending of the acceptance colour and the key colour in that image. In the roll-off, the amount of key colour mixed with the acceptance colour biases a mix between foreground and background in the resultant composite. For example, a man standing against a blue backdrop casts a shadow on it, reducing the colour of that blue to some fraction of the original colour in the shadow area. The required background image for this composite is of a brick wall; if the original blue backdrop colour is the key colour, that blue is replaced by the background image of the brick wall in the resultant composite image. The acceptance area is the man (who has no blue in his image) and the roll-off area is the shadow area. The resulting composite image is 100% of the man over the brick wall with a percentage mix of the brick wall and the shadowed key colour.
The best image composites are achieved when the colour of the image to be keyed is as far away as possible from the key colour. A stationary red cube against the blue key colour is a very easy composite to achieve, however this is rarely what is required in practice.
The problems associated with roll-off have many causes including motion blur, which occurs when a moving foreground object blurs with the key colour. The problem of shadows has already been exemplified. Smoke and fog are semi-transparent in nature, meaning that the resultant image is a mix of the key colour and white. The transparency of subjects such as glass, sheer clothing, coloured liquids, light and lens flares also causes difficult colour mixes. Colour mixing also will occur when a semi-reflective object is placed in front of the key-coloured background. Marginal colour differences between the foreground image and the key colour will always result in difficulties in extracting the matte. In filming the foreground object against the key-coloured backdrop diffuse inter-reflection can occur, in that the backdrop bounces colour back onto the foreground object.
A large proportion of foreground images are recorded by means of videotape or film stock. That recorded information then is converted into either an analogue video signal or digital pixel-based information for manipulation and compositing onto the desired background image. There are physical limitations associated with videotape or film stocks, such as noise or grain. The conversion process from the analogue recorded image into digital form also can result in a lowering of resolution and colour distortion. A commonly occurring unwanted result due to the above-noted problems is that of colour bleed in the roll-off areas, resulting in colour fringing in the composited image. This is the effect of a portion of the key colour mixing through in the composite image.
There have been a number of attempts at reducing the roll-off problem. These include colour reduction, whereby the roll-off region in the foreground image is desaturated in colour (ie. reverts to shades of grey). The colour suppression technique applies in cases where the key colour is a primary (red, green or blue). The foreground image pixels for which the key primary colour is dominant are scaled such that the key primary colour is less than or equal to the stronger of the remaining two primary colours. Another technique is to invert the colour hue: the foreground pixel in the roll-off area has its hue value modified by 180°. In this way red becomes cyan, green becomes magenta and blue becomes yellow. Lastly, the key colour reduction technique operates by multiplying the key colour by some fractional value such that all of its components are less than or equal to the foreground pixel being examined. The scaled key colour then is subtracted from the foreground pixel.
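Two of these prior art roll-off treatments, colour suppression and key colour reduction, might be sketched as follows (illustrative Python, not part of the patent text; the helper names are hypothetical):

import numpy as np

def suppress_key_primary(fg_rgb, key_index=2):
    """Colour suppression as described above: where the key primary (blue by
    default, index 2) dominates a foreground pixel, clamp it to the stronger
    of the remaining two primaries."""
    rgb = np.array(fg_rgb, dtype=float)
    others_max = np.delete(rgb, key_index).max()
    if rgb[key_index] > others_max:
        rgb[key_index] = others_max
    return rgb

def subtract_scaled_key(fg_rgb, key_rgb):
    """Key colour reduction as described above: scale the key colour so every
    component is less than or equal to the corresponding foreground component,
    then subtract the scaled key colour from the foreground pixel."""
    fg = np.array(fg_rgb, dtype=float)
    key = np.array(key_rgb, dtype=float)
    scale = 1.0
    for f, c in zip(fg, key):
        if c > 0.0:
            scale = min(scale, f / c)   # largest scale with scale*key <= fg
    return fg - scale * key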
A specific example of prior art key colour modelling is disclosed in U.S. Patent No. 5,355,174 issued on 11 October 1994 and assigned to Imagica Corporation of Japan. This document discloses approximating a key colour rejection space as a hexoctahedron (a form of regular polyhedron) surrounded by a larger like hexoctahedron defining the boundary of the roll-off. In this manner the galaxy of key colours can be approximated, however the approximation has the ability only to model bunched or closely grouped key colours. For foreground images filmed or captured in poor lighting conditions, the resulting galaxy of key colour points will be L-shaped, and could not be enveloped by a hexoctahedron without including a large portion of the colour space that more correctly is in the roll-off rather than the rejection area. An example of matte generation and image colour compensation in the prior art is disclosed in U.S. Patent No. 5,343,252 issued on 30 August 1994 and assigned to Ultimatte Corporation of the United States.
Summary of the Invention
An object of the present invention is to overcome or at least ameliorate one or more of the problems associated with the prior art.
A first preferred embodiment seeks to define a key colour geometry in colour space so that key colour values of image pixels for a matte can be determined with improved accuracy over prior art methods, and a galaxy of disparate (non-bunched) key colour points can be successfully utilised. Another preferred embodiment seeks to predict the original colour of an image pixel in the roll-off area from the observed colour and to recover that original colour in the production of a composite image.
Therefore, the invention discloses a method for determining a key colour value, k, of image pixels for a matte, said method comprising the steps of: defining a plurality of key colour rejection points in a colour space for which k = 0; defining a plurality of acceptance colour points in said colour space for which k = 1; and for each said image pixel: determining the nearest said rejection point and the distance, x, therefrom to said image pixel; determining the nearest said acceptance point and the distance, y, therefrom to said image pixel; and calculating the key colour value for said pixel as x/(x + y) or 1 - y/(x + y).
The invention further discloses a method for arranging a colour space for the generation of a matte from image pixels to have one or more rejection regions, one or more acceptance regions and a roll-off region, said method comprising the steps of: defining a plurality of key colour points in a colour space; defining a plurality of acceptance colour points in said colour space; and for each said image pixel: determining a boundary of a rejection region for the rejection point nearest to an image pixel under consideration; and determining a boundary of an acceptance region of the acceptance point nearest to an image pixel under consideration, the roll-off region being anywhere in the colour space both beyond all acceptance region boundaries and all rejection region boundaries.
The invention further discloses a method for forming a composited image of a pixel-based background image and an observed pixel-based foreground image using a matte, the method comprising the steps of: obtaining the matte by separating the foreground image from a key-coloured backdrop, and whereby observed pixel colour values falling in one of one or more rejection regions of a colour space, within each of which are a plurality of rejection key colour points, are rejected from the matte, observed pixel colour values falling in one of one or more acceptance regions of the colour space, within each of which are a plurality of acceptance points, are accepted in the matte for their observed colour value, and observed pixel colour values falling in a roll-off region between all rejection regions and all acceptance regions are conditionally accepted in the matte for their observed colour value; calculating for each pixel in said roll-off region, a predicted original colour value to replace the observed colour value in the matte; and forming a composited image of said background image and said matte.
The invention yet further discloses a method for generating a matte from a pixel-based image of a foreground over a key-coloured backdrop for compositing with a background image, the method comprising the steps of: defining one or more rejection regions each located about a plurality of rejection key colour points in a colour space; defining one or more acceptance areas each located about a plurality of acceptance colour points in the colour space; and determining, on a pixel-by-pixel basis, whether an image pixel colour value lies within a rejection region to be rejected from the matte, in an acceptance region to be accepted in the matte for the observed colour value, or in the roll-off region between all rejection regions and all acceptance regions to be conditionally accepted in the matte for the observed colour value.
The invention further discloses apparatus for determining a key colour value, k, of image pixels for a matte, said apparatus comprising: input means; memory means for receiving from said input means and storing a plurality of key colour rejection points mapped in a colour space, and for receiving from said input means and storing a plurality of acceptance colour points mapped in said colour space; and data processing means for processing, on a pixel-by-pixel basis, said image pixels to determine the nearest said rejection point and the distance, x, therefrom to said image pixel, to determine the nearest said acceptance point and the distance, y, therefrom to said image pixel, and to calculate the key colour value for said image pixel as x/(x + y) or 1 - y/(x + y). The invention yet further discloses apparatus for arranging a colour space for the generation of a matte from image pixels to have one or more rejection regions, one or more acceptance regions and a roll-off region, said apparatus comprising: input means; memory means for receiving from said input means and storing a plurality of key colour rejection points mapped in a colour space and for receiving from said input means and storing a plurality of acceptance colour points mapped in said colour space; and data processing means for processing, on a pixel-by-pixel basis, said image pixels to determine a boundary of a rejection region for the rejection point nearest to an image pixel under consideration and to determine a boundary of an acceptance region of the acceptance point nearest to an image pixel under consideration, the roll-off region
being anywhere in the colour space both beyond all acceptance region boundaries and all rejection region boundaries.
The invention yet further discloses an image compositing system for forming a composited image of a pixel-based background image and an observed pixel-based foreground image using a matte, said system comprising: a memory for storing an image having said foreground image over a key-coloured backdrop and for storing said background image; data processing means for creating said matte by separating said foreground image from said backdrop, and such that observed image pixels of said foreground and backdrop falling in one of one or more predetermined rejection regions of a colour space, within each of which are a plurality of rejection key colour points, are rejected from the matte, observed pixel colour values falling in one of one or more predetermined acceptance regions of the colour space, within each of which are a plurality of acceptance points, are accepted in the matte for their observed colour value, and observed pixel colour values falling in a roll-off region between all rejection
regions and all acceptance regions are conditionally accepted in the matte for their observed colour value, for calculating, for each pixel in said roll-off region, a predicted original colour value to replace the observed colour value in said matte, and for forming a composited image of said background image and said matte, and wherein said composited image is stored in said memory.
The invention yet further discloses apparatus for generating a matte from a pixel-based image of a foreground over a key-coloured backdrop for compositing with a background image, the apparatus comprising: input means; memory means for receiving from said input means and storing one or more rejection regions each located about a plurality of rejection key colour points in a colour space, and for receiving from said input means and storing one or more acceptance areas each located about a plurality of acceptance colour points in said colour space; and data processing means for determining, on a pixel-by-pixel basis, whether an image pixel colour value lies within a rejection region to be rejected from the matte, in an acceptance region to be accepted in the matte for the observed colour value or in said roll-off region between all rejection regions and all acceptance regions to be conditionally accepted in said matte for its observed colour value.
Embodiments of the invention have the ability to model non-bunched galaxies of key colours to generate a matte. The dynamic determination of the location of the boundaries of the acceptance regions and the rejection regions on the basis of the image pixel under consideration leads to more accurate colour prediction and hence improved image composites. The boundaries of the rejection and acceptance regions thus can be curvilinear, and are not constrained to a single geometric primitive polyhedron. Non-grouped key colours occur when a foreground image is filmed in poor lighting conditions, typically involving white blues, pure blues and black blues forming an L-shaped cylinder in the colour space.
Description of the Drawings
Fig. 1 shows an example in the prior art of a key colour and associated rejection area, an acceptance area and a roll-off in RGB space;
Fig. 2 is a two-dimensional representation of the colour volume of Fig. 1;
Figs. 3 and 4 show examples of key colour points, rejection areas, acceptance areas and roll-off in RGB space in accordance with two embodiments;
Fig. 5 shows a schematic block diagram of apparatus embodying the invention;
Fig. 6 is a flow block diagram embodying matte generation;
Figs. 7a and 7b are flow block diagrams embodying colour prediction;
Fig. 8 is a flow block diagram embodying image compositing;
Figs. 9a and 9b are two-dimensional graphical representations of matte generation; and
Figs. 10a-10c show two-dimensional representations illustrating methodologies of colour prediction and recovery.
Detailed Description of the Preferred Embodiment
Fig. 1 shows a representation of a three-dimensional RGB colour space (or colour volume) 10. A cartesian coordinate system is utilised in the form of (R,G,B). Any image pixel therefore is uniquely defined by its RGB colour value. The volume 10 represents the universe of colours, and conveniently is scaled to unity so that pure red is represented by (1,0,0), green by (0,1,0) and blue by (0,0,1). Fig. 1 also shows the RGB coordinates or values of other colours occurring at the corners of the cube representing the colour volume 10. It is usual to normalise the RGB coordinates in the manner shown. The mathematical representation of individual colours within the colour volume 10 thus is some integer fraction of unity. For example, in an 8-bit implementation, the colour range from black to a primary colour will be 0, 1/255, 2/255, ..., 253/255, 254/255, 1. In a 16-bit representation, the divisor is 65,535.
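For illustration only (not part of the patent text), the mapping from stored integer components to these normalised fractions can be written as:

def normalise(component, bit_depth=8):
    """Map an integer colour component to the fractional range [0, 1] described
    above: the divisor is 255 for 8-bit data and 65,535 for 16-bit data."""
    return component / float((1 << bit_depth) - 1)

# e.g. normalise(255) == 1.0 and normalise(65535, bit_depth=16) == 1.0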
Fig. 1 also shows, as an example of the prior art, three volumes (cubes) located one within the other, within the colour volume 10. The innermost cube is the key colour volume 12 defining the range of colours constituting the key colour. The next outer cube is termed the clip volume 14 defining the edge of the rejection area, and the outer cube 16 defines the edge of the roll-off 18. Colours falling outside of the roll-off 18 are in the acceptance area. The inner extent of the roll-off is defined by the clip volume 14.
Fig. 2 is a two-dimensional representation of Fig. 1. Parameters termed the clip value 20 and roll-off value 22 are defined to locate the cubes 14, 16 with respect to the key colour volume 12.
Matte Generation
Fig. 3 shows the same colour volume 10, within which is a set of points 30n defining the key colours. The key colours can be selectively defined to be a single colour (ie. a single point) or a representative galaxy of points. It would not be unusual for there to be hundreds of points forming a galaxy. Only a representative number of key colour points 30 have been shown, with the galaxy of all such colour points 30n stylistically represented by the solid lumpy ball 34. The ability to selectively define a galaxy of discontiguous or non-bunched points, and of a non-symmetric shape, representing the key colours, allows a matte to be extracted for images filmed in poor lighting conditions or for which there is more than one key colour.
A similar number of representative acceptance colour points 40n are shown, also as a stylised lumpy ball representation of the galaxy of acceptance colour points. The region between the rejection galaxy 34 and acceptance galaxy 44 is the roll-off, but only for the case where there is no tolerance or buffer (i.e. 'hard clipping') provided with respect to each rejection key colour point 30n or acceptance colour point 40n.
Such buffer regions can be selectively provided to define a reject clip and an accept clip. The distance from the boundary of the reject clip region or the accept clip region to their respective galaxies 30n, 40n is parametrically described by the distance between the accept and reject points that are closest to the sampled image pixel being examined. The sum of the accept and reject clip values is less than or equal to 1, and both must be larger than or equal to 0. When the sum is 1 it implies that there is no roll-off, as the accept and reject regions touch at all points. As mentioned, the rejection clip value and acceptance clip value can be thought of as a user-selectable tolerance. For example, if the rejection clip value is reduced to zero, only the actual key colour points 30n are rejected ('hard clipping'). If the rejection clip value is increased to a value of 1 - (acceptance clip), there is no roll-off 60 and only the actual accept colour points 40n are accepted.
Fig. 4 shows a further embodiment with two mutually exclusive key colour rejection galaxies 34₁, 34₂ and two acceptance colour galaxies 44₁, 44₂ residing in the colour volume 10. The invention thus contemplates any number of selectively defined rejection galaxies and acceptance galaxies, thus providing great flexibility in accurately defining all of the key colours occurring in the backdrop against which the foreground objects have been shot, and similarly accurately modelling all of the acceptance colour points occurring in the foreground image.
Fig. 5 shows a schematic block diagram of a hardware system embodying the invention. The image data store 50 conveniently represents any means in which observed images can be stored for subsequent processing. The observed images can be by way of frames from motion picture film, a solid state storage device, frames of video, CD-ROM, computer data files and the like. The image information stored in the data store 50 can be obtained by equipment such as film or video cameras, scanners, telecines, charge coupled device (CCD) digitizing cameras and the like. The image data read into the memory 52 must include, or be converted to include, pixel-by-pixel colour values to be suitable for subsequent processing. The CPU 54 and the memory 52 communicate over a bus structure 56 in the process of matte generation, colour prediction and colour recovery in forming a composited image to be output to some form of output device 58 by the CPU 54. The output device 58 can include means such as a video monitor, hard copy printer, magnetic and optical storage disks, solid state storage devices, video tape or film stock.
The process of matte generation now will be described with reference to the steps (100, etc.) shown in the flow block diagram of Fig. 6 and in Fig. 9a.
For the purposes of illustration, a two-dimensional representation of the colour space will be used. The axes, such as shown in Fig. 9a, are arbitrary, and can be any one of Red/Green, Red/Blue or Green/Blue. Also for the purposes of illustration, only four key colour points 30₁, 30₂, 30₃ and 30₄ and four acceptance points 40₁, 40₂, 40₃ and 40₄, lying in the plane of the axes, are shown. It is important to bear in mind, however, that embodiments of the invention operate in three-dimensional space. Data relating to each pixel (of each frame) of image data is stored in the memory 52, preferably using the data structure of an octree. The use of octree data structures is particularly advantageous when seeking to manipulate multi-variable sets of data (such as a three-dimensional colour space) in a sparsely populated volume, in that computational efficiencies arise. This is especially the case when seeking to locate the next closest point within the space. A description of octree data structures can be found in the text Computer Graphics, Principles and Practice by Foley, van Dam et al., section 12.6.3, 2nd Ed., published by Addison-Wesley.
Generation of a matte is performed on a pixel-by-pixel (pn) basis to determine whether a pixel under consideration falls within the rejection area 32, the acceptance area 42 or the roll-off 60. For each frame of image data, each constituent pixel within the frame is stored in a data form to include its colour value together with a key value, k: (R,G,B,k). If a pixel falls within the rejection area 32, then its k value will be 0, hence the pixel colour will be rejected in the finally formed composited image. If a pixel falls within the acceptance area 42, then its k value will be 1, in which case the observed colour value is accepted unmodified. If a pixel falls within the roll-off 60, the k value will be in the range 0 < k < 1. In that case, the observed image colour value subsequently will undergo the colour prediction and recovery process.
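As a concrete illustration of this per-pixel classification (an editorial sketch, not part of the patent text), the galaxies can be held in a structure supporting nearest-point queries, with a brute-force search standing in for the octree described above, and each pixel assigned its (R,G,B,k) record using the basic k = x/(x + y) form; the clip-based boundaries are added in the worked example that follows.

import numpy as np

class ColourGalaxy:
    """A galaxy of colour points with a nearest-point query. The patent stores
    the points in an octree for efficiency; a brute-force search over an (N, 3)
    array is used here purely for illustration and would be replaced by a
    spatial index in practice."""
    def __init__(self, points_rgb):
        self.points = np.asarray(points_rgb, dtype=float)

    def nearest_distance(self, pixel_rgb):
        p = np.asarray(pixel_rgb, dtype=float)
        return float(np.linalg.norm(self.points - p, axis=1).min())

def classify_pixel(pixel_rgb, rejection_galaxy, acceptance_galaxy):
    """Return the (R, G, B, k) record for one pixel using k = x/(x + y)."""
    x = rejection_galaxy.nearest_distance(pixel_rgb)   # to nearest key colour point
    y = acceptance_galaxy.nearest_distance(pixel_rgb)  # to nearest acceptance point
    k = 0.0 if x + y == 0.0 else x / (x + y)
    r, g, b = pixel_rgb
    return (r, g, b, k)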
Whether a pixel colour value is wholly rejected or wholly accepted is defined by the rejection clip and acceptance clip respectively. As previously discussed, the rejection clip and acceptance clip are in the nature of a tolerance, and conveniently expressed as a fraction of 1. In the present example, the rejection clip is chosen as 0.2 and the acceptance clip is chosen as 0.25. The location of the boundary of the k = 0 and k = 1 areas for any pixel, pn, under consideration is a fraction of the total distance between that point and the nearest reject point 30n and acceptance point 40n. Fig. 9a shows the example of the colour value 62 for one pixel, p₁, falling in the roll-off 60. The approximate locations of the k = 0, k = 0.5 and k = 1 values have been shown for the whole set of image pixel locations in the roll-off. Proceeding on the assumption that the rejection clip is chosen to be 0.2 (step 102) and the acceptance clip is chosen to be 0.25 (step 104), the location of the respective boundaries of the rejection area 32 and acceptance area 42 must be determined. To achieve this, the distances between the colour value point 62 and the closest rejection point 30₁ and closest acceptance point 40₁ are determined (step 107), respectively represented in Fig. 9a by the lengths x₁ and y₁. The octree data structures can be beneficially brought to bear on this determination by use of the previous nearest point and discarding groups of key colour points and acceptance colour points that are no closer, through performing a single distance calculation on the R,G,B values. The respective distances r₁ and a₁ (i.e. the boundaries of the k = 0 rejection area and the k = 1 acceptance area) then must be determined (steps 108, 110) to find the boundary of the rejection area 32 and the acceptance area 42 for the pixel under consideration. It is important to note that the location of the boundary is dependent upon the image pixel under consideration. These locations are determined from the following formulae:
rₙ = (rejection clip)(xₙ + yₙ)
aₙ = (acceptance clip)(xₙ + yₙ)
For the first image point 62, x₁ = 0.55 and y₁ = 0.45 (scaled), hence:
r₁ = (0.2)(0.55 + 0.45) = 0.2 and
a₁ = (0.25)(0.55 + 0.45) = 0.25
The key value, kₙ, is determined (step 112) from the following equation:
kₙ = (xₙ - rₙ)/(xₙ + yₙ - rₙ - aₙ)
k₁ = (0.55 - 0.2)/(0.55 + 0.45 - 0.2 - 0.25) = 0.64
The same calculations have been performed for the colour point 64 for the purposes of illustration. In that case, x₂ = 0.65, y₂ = 0.3, leading to r₂ = 0.19 and a₂ = 0.16. In turn, k₂ = 0.77.
The same calculations are performed for every pixel, pn, (step 106) in the image. When a pixel falls in the rejection area 32 (step 114), the key value is 0 and stored as such (step 116). When a pixel falls in the acceptance area 42, the key value is 1 and stored as such (step 120). Both of the colour points 62, 64 shown in Fig. 9a fall in the roll-off 60, in which case the corresponding observed colour is stored, together with the key value (step 122), to be used in predicting back the original colour by virtue of the colour prediction and colour recovery processes.
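The worked example above can be reproduced in a few lines (again an illustrative sketch, not part of the patent text), using the chosen rejection and acceptance clips of 0.2 and 0.25:

def boundaries_and_key(x, y, reject_clip=0.2, accept_clip=0.25):
    """Per-pixel boundary locations and key value from the formulae above.
    Returns k = 0 when the pixel lies inside the rejection boundary and k = 1
    when it lies inside the acceptance boundary; otherwise the roll-off value."""
    r = reject_clip * (x + y)
    a = accept_clip * (x + y)
    if x <= r:
        return r, a, 0.0
    if y <= a:
        return r, a, 1.0
    return r, a, (x - r) / (x + y - r - a)

# Colour point 62: x1 = 0.55, y1 = 0.45
print(boundaries_and_key(0.55, 0.45))   # approximately (0.2, 0.25, 0.636), i.e. k1 of about 0.64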
Fig. 9b shows the situation where there are three mutually exclusive galaxies of key colour points 70n, 80n, 90n and a single galaxy of acceptance points 100n. The locations of the respective k = 0 and k = 1 boundaries, for rejection and acceptance clips rn, an of approximately 0.1, also are shown. Such key colour galaxies are common in images filmed in poor lighting conditions. The "k" value for any point in the roll-off 60 is calculated as before after determining the closest acceptance point and the closest rejection key colour point. Two such points 66, 68 are shown together with the respective lengths to be calculated.
In one embodiment of the invention, a matte generated by the foregoing methodology can be applied to the colour prediction and recovery methodologies that hereafter will be described. However it is equally the case that a matte generated by other methodologies, including those in the prior art, can be subjected to the colour prediction and colour recovery methodologies. This can be achieved utilising the matte data values and the knowledge of where the key colour volume is (in the recovery-by-extrapolation method) and where the area outside the roll-off (acceptance) is (in the recovery-by-interpolation method).
Colour Prediction and Recovery
Colour prediction and recovery techniques will now be described with reference to Figs. 7a, 7b, 10a, 10b and 10c. Fig. 10a illustrates an extrapolation technique for colour prediction relevant to the rejection area 32 shown in Fig. 9a. This technique embodies the underlying assumption that the amount of key colour blended with the observed colour value is approximately inversely proportional to the key value, and that all recovered colours will lie along the ray defined by the observed colour point under consideration and the closest key colour: a key value of 0 implies that the observed value is all key colour and so the recovered colour is infinitely far along the prediction ray, xₑ = ∞; a key value of 0.5 implies that the observed value is 50% key colour and 50% original colour, xₑ = 2x₁; and a key value of 1 implies that there is no key colour present in the observed image.
Fig. 10b illustrates an interpolation technique for colour prediction which embodies the assumption that the original colour lies along the line segment connecting the closest accept colour 40₁ and the observed colour 62: a key value of 0 implies that the sampled pixel 62 is replaced by the original colour 40₁; a key value of 0.5 implies that the original colour lies half way between 62 and 40₁; and a key value of 1 implies that the original colour and the observed colour are coincident.
For the colour point 62, as previously shown in Fig. 7a, the distance x₁ is known to be 0.55. The extrapolated predicted original colour value, xₑ, is determined (step 138) by the following formula:
xₑₙ = rₙ + (xₙ - rₙ)/kₙ
xₑ₁ = 0.2 + (0.55 - 0.2)/0.64 = 0.75
This then locates the extrapolated predicted original colour value 62' at a distance away from the observed colour value 62, as shown in Fig. 10a. The predicted original colour interpolation value, yᵢₙ, is determined (step 158) from the following formula:
yᵢₙ = aₙ + kₙ(yₙ - aₙ)
yᵢ₁ = 0.25 + 0.64(0.45 - 0.25) = 0.38
The value yᵢ₁ therefore locates the predicted original colour 62'' at the distance 0.38 from the nearest acceptance point 40₁, as shown in Fig. 10b.
As is apparent from Figs. 10a and 10b, there are two separate predicted original colours 62', 62''. It has been determined in the course of experimentation that both techniques provide equally satisfactory results.
Whilst the extrapolation and interpolation methodologies can stand alone, it is equally the case that they can be used in concert, and a further calculation (step 190) performed by way of a linear interpolation to finally locate the predicted original colour point along a line intersecting the predicted colour points 62' and 62''. This is shown in Fig. 10c.
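Both prediction formulae, together with the optional linear blend between them, might be sketched as follows (illustrative only, not part of the patent text; the equal-weight default for the blend is an assumption, since the patent does not fix the weighting of the linear interpolation):

import numpy as np

def predict_original_colour(pixel, nearest_reject, nearest_accept,
                            x, y, r, a, k, blend=0.5):
    """Predicted original colour for a roll-off pixel, per the formulae above.
    The extrapolated estimate (62') lies on the ray from the nearest rejection
    point through the observed colour, at distance x_e from the rejection point;
    the interpolated estimate (62'') lies at distance y_i from the nearest
    acceptance point towards the observed colour. The two are then linearly
    blended as in Fig. 10c."""
    pixel = np.asarray(pixel, dtype=float)
    nearest_reject = np.asarray(nearest_reject, dtype=float)
    nearest_accept = np.asarray(nearest_accept, dtype=float)

    x_e = r + (x - r) / k            # extrapolation distance from the rejection point
    y_i = a + k * (y - a)            # interpolation distance from the acceptance point

    extrapolated = nearest_reject + x_e * (pixel - nearest_reject) / x
    interpolated = nearest_accept + y_i * (pixel - nearest_accept) / y
    return (1.0 - blend) * extrapolated + blend * interpolated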
Once the predicted colour value for the point 62', 62'' or the combined point is known, then in forming the composite image (steps 196, 198), the colour value is adjusted to recover the predicted original colour as shown in Fig. 8. The colour recovery process can further include compensation of the luminance of the predicted original colour to be equal to the observed luminance value. Luminance is defined as being the value (0.299 Red + 0.587 Green + 0.114 Blue) of an RGB colour value. Therefore, a calculation can be performed (step 194) to determine the luminance value of the observed colour point 62, and the colour value of the predicted original colour compensated to be the same.
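The luminance compensation mentioned above might be realised as below (an illustrative sketch, not part of the patent text; uniform scaling of the predicted colour is one possible way to compensate, and the exact operation is not specified here):

import numpy as np

LUMA_WEIGHTS = np.array([0.299, 0.587, 0.114])

def luminance(rgb):
    """Luminance as defined above: 0.299 Red + 0.587 Green + 0.114 Blue."""
    return float(np.dot(LUMA_WEIGHTS, np.asarray(rgb, dtype=float)))

def match_luminance(predicted_rgb, observed_rgb):
    """Scale the predicted original colour so that its luminance equals the
    luminance of the observed colour point."""
    predicted = np.asarray(predicted_rgb, dtype=float)
    lp, lo = luminance(predicted), luminance(observed_rgb)
    if lp == 0.0:
        return predicted
    return np.clip(predicted * (lo / lp), 0.0, 1.0)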

Claims

CLAIMS:
1. A method for determining a key colour value, k, of image pixels for a matte, said method comprising the steps of: defining a plurality of key colour rejection points in a colour space for which k = 0; defining a plurality of acceptance colour points in said colour space for which k = 1; and for each said image pixel: determining the nearest said rejection point and the distance, x, therefrom to said image pixel; determining the nearest said acceptance point and the distance, y, therefrom to said image pixel; and calculating the key colour value for said pixel as x/(x + y) or 1 - y/(x + y).
2. A method as claimed in claim 1, comprising further the steps of: defining a rejection clip indice; defining an acceptance clip indice; and for each said image pixel: determining the location, r, of a boundary of a rejection region for the nearest said rejection point as a function of said rejection clip indice; determining the location, a, of a boundary of an acceptance region from the nearest said acceptance point as a function of said acceptance clip indice; and calculating the key colour value, k, as (x - r)/(x + y - a - r) or 1 - (y - a)/(x + y - a - r).
3. A method as claimed in claim 2, whereby said clip indice and said rejection indice are a fraction of 1, and said rejection region boundary location is (clip indice) (x + y) from said nearest rejection point, and said acceptance region boundary location is (rejection indice) (x + y) from said nearest acceptance point.
4. A method as claimed in any one of claims 1 to 3, comprising the further step of predicting the original key colour value, kp, of an image pixel under consideration to be: kp = r + (x - r)/k.
5. A method as claimed in any one of claims 1 to 3, comprising the further step of predicting the original key colour value, kp, of an image pixel under consideration to be: kp = a + k(y - a).
6. A method for arranging a colour space for the generation of a matte from image pixels to have one or more rejection regions, one or more acceptance regions and a roll-off region, said method comprising the steps of: defining a plurality of key colour rejection points in a colour space; defining a plurality of acceptance colour points in said colour space; and for each said image pixel: determining a boundary of a rejection region for the rejection point nearest to an image pixel under consideration; and determining a boundary of an acceptance region of the acceptance point nearest to an image pixel under consideration, the roll-off region being anywhere in the colour space both beyond all acceptance region boundaries and all rejection region boundaries.
7. A method for forming a composited image of a pixel-based background image and an observed pixel-based foreground image using a matte, the method comprising the steps of: obtaining the matte by separating the foreground image from a key-coloured backdrop, and whereby observed pixel colour values falling in one of one or more rejection regions of a colour space, within each of which are a plurality of rejection key colour points, are rejected from the matte, observed pixel colour values falling in one of one or more acceptance regions of the colour space, within each of which are a plurality of acceptance points, are accepted in the matte for their observed colour value, and observed pixel colour values falling in a roll-off region between all rejection regions and all acceptance regions are conditionally accepted in the matte for their observed colour value; calculating for each pixel in said roll-off region, a predicted original colour value to replace the observed colour value in the matte; and forming a composited image of said background image and said matte.
8. A method as claimed in claim 7, whereby said step of calculating includes determining which is the nearest key colour point in any rejection region and which is the nearest acceptance colour point in any acceptance region, and said observed colour value is determined as a proportion of the distance to said nearest rejection point and the distance to said nearest acceptance point.
9. A method for generating a matte from a pixel-based image of a foreground over a key-coloured backdrop for compositing with a background image, the method comprising the steps of: defining one or more rejection regions each located about a plurality of rejection key colour points in a colour space; defining one or more acceptance areas each located about a plurality of acceptance colour points in the colour space; and determining, on a pixel-by-pixel basis, whether an image pixel colour value lies within a rejection region to be rejected from the matte, in an acceptance region to be accepted in the matte for the observed colour value, or in the roll-off region between all rejection regions and all acceptance regions to be conditionally accepted in the matte for the observed colour value.
10. A method as claimed in claim 9, whereby said step of determining includes determining the nearest rejection point in any rejection region and the nearest acceptance point in any acceptance region, and said observed colour value is determined as a proportion of the distance to said nearest rejection point and the distance to said nearest acceptance colour point.
11. Apparatus for determining a key colour value, k, of image pixels for a matte, said apparatus comprising: input means; memory means for receiving from said input means and storing a plurality of key colour rejection points mapped in a colour space, and for receiving from said input means and storing a plurality of acceptance colour points mapped in said colour space; and data processing means for processing, on a pixel-by-pixel basis, said image pixels to determine the nearest said rejection point and the distance, x, therefrom to said image pixel, to determine the nearest said acceptance point and the distance, y, therefrom to said image pixel, and to calculate the key colour value for said image pixel as x/(x + y) or 1 - y/(x + y).
12. Apparatus as claimed in claim 11, wherein said input means is further operable to define a rejection clip indice and to define an acceptance clip indice, and said data processing means is further operable, on a pixel-by-pixel basis, to determine the location, r, of a boundary of a rejection region for the nearest said rejection point as a function of said rejection clip indice, and to determine the location, a, of a boundary of an acceptance region from the nearest said acceptance point as a function of said acceptance clip indice, and to calculate the key colour value, k, as (x - r)/(x + y - a - r) or 1 - (y - a)/(x + y - a - r).
13. Apparatus as claimed in claim 12, wherein said clip indice and said rejection indice are a fraction of 1, and said data processing means determines the location, r, of the rejection region boundary as (clip indice) (x + y) from said nearest rejection point, and determines the location, a, of the acceptance region boundary as (rejection indice) (x + y) from said nearest acceptance point.
14. Apparatus as claimed in any one of claims 11 to 13, wherein said data processing means further predicts the original key colour value, kp, of an image pixel under consideration to be: kp = r + (x - r)/k.
15. Apparatus as claimed in any one of claims 11 to 13, wherein said data processing means is further operable to predict the original key colour value, kp, of an image pixel under consideration to be: kp = a + k(y - a).
16. Apparatus for arranging a colour space for the generation of a matte from image pixels to have one or more rejection regions, one or more acceptance regions and a roll-off region, said apparatus comprising: input means; memory means for receiving from said input means and storing a plurality of key colour rejection points mapped in a colour space and for receiving from said input means and storing a plurality of acceptance colour points mapped in said colour space; and data processing means for processing, on a pixel-by-pixel basis, said image pixels to determine a boundary of a rejection region for the rejection point nearest to an image pixel under consideration and to determine a boundary of an acceptance region of the acceptance point nearest to an image pixel under consideration, the roll-off region being anywhere in the colour space both beyond all acceptance region boundaries and all rejection region boundaries.
17. An image compositing system for forming a composited image of a pixel-based background image and an observed pixel-based foreground image using a matte, said system comprising: a memory for storing an image having said foreground image over a key-coloured backdrop and for storing said background image; data processing means for creating said matte by separating said foreground image from said backdrop, and such that observed image pixels of said foreground and backdrop falling in one of one or more predetermined rejection regions of a colour space, within each of which are a plurality of rejection key colour points, are rejected from the matte, observed pixel colour values falling in one of one or more predetermined acceptance regions of the colour space, within each of which are a plurality of acceptance points, are accepted in the matte for their observed colour value, and observed pixel colour values falling in a roll-off region between all rejection regions and all acceptance regions are conditionally accepted in the matte for their observed colour value, for calculating, for each pixel in said roll-off region, a predicted original colour value to replace the observed colour value in said matte, and for forming a composited image of said background image and said matte, and wherein said composited image is stored in said memory.
18. An image compositing system as claimed in claim 17, wherein said data processing means, in calculating said predicted original colour value, determines which is the nearest key colour point in any rejection region and which is the nearest acceptance colour point in any acceptance region, and determines said observed colour value as a proportion of the distance to said nearest rejection point and the distance to said nearest acceptance point.
19. Apparatus for generating a matte from a pixel-based image of a foreground over a key-coloured backdrop for compositing with a background image, the apparatus comprising: input means; memory means for receiving from said input means and storing one or more rejection regions each located about a plurality of rejection key colour points in a colour space, and for receiving from said input means and storing one or more acceptance areas each located about a plurality of acceptance colour points in said colour space; and data processing means for determining, on a pixel-by-pixel basis, whether an image pixel colour value lies within a rejection region to be rejected from the matte, in an acceptance region to be accepted in the matte for the observed colour value or in said roll-off region between all rejection regions and all acceptance regions to be conditionally accepted in said matte for its observed colour value.
20. Apparatus as claimed in claim 19, wherein said data processing means is further operable, for each image pixel in said roll-off region, to determine the nearest rejection point in any rejection region and the nearest acceptance point in any acceptance region, and to determine the observed colour value to be a proportion of the distance to said nearest rejection point and the distance to said nearest acceptance colour point.
PCT/AU1996/000156 1995-03-21 1996-03-21 Improvements in image compositing WO1996029829A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AUPN1845A AUPN184595A0 (en) 1995-03-21 1995-03-21 Improvements in image compositing
AUPN1845 1995-03-21

Publications (1)

Publication Number Publication Date
WO1996029829A1 true WO1996029829A1 (en) 1996-09-26

Family

ID=3786201

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU1996/000156 WO1996029829A1 (en) 1995-03-21 1996-03-21 Improvements in image compositing

Country Status (2)

Country Link
AU (1) AUPN184595A0 (en)
WO (1) WO1996029829A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1988004510A1 (en) * 1986-12-09 1988-06-16 Corporate Communications Consultants, Inc. Video color detector and chroma key device and method
WO1992005664A1 (en) * 1990-09-20 1992-04-02 Spaceward Holdings Limited Video image composition
US5381184A (en) * 1991-12-30 1995-01-10 U.S. Philips Corporation Method of and arrangement for inserting a background signal into parts of a foreground signal fixed by a predetermined key color
AU6554794A (en) * 1993-04-15 1994-11-08 Ultimatte Corporation Screen filtering boundary detection for image compositing

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6496599B1 (en) * 1998-04-01 2002-12-17 Autodesk Canada Inc. Facilitating the compositing of video images
US6894806B1 (en) * 2000-03-31 2005-05-17 Eastman Kodak Company Color transform method for the mapping of colors in images
EP1251692A2 (en) * 2001-04-18 2002-10-23 Qauntel Limited Improvements in electronic image keying systems
EP1251692A3 (en) * 2001-04-18 2004-07-14 Qauntel Limited Improvements in electronic image keying systems

Also Published As

Publication number Publication date
AUPN184595A0 (en) 1995-04-13

Similar Documents

Publication Publication Date Title
US8953905B2 (en) Rapid workflow system and method for image sequence depth enhancement
US8897596B1 (en) System and method for rapid image sequence depth enhancement with translucent elements
US8396328B2 (en) Minimal artifact image sequence depth enhancement system and method
US8073247B1 (en) Minimal artifact image sequence depth enhancement system and method
US6909806B2 (en) Image background replacement method
US7181081B2 (en) Image sequence enhancement system and method
CA1180438A (en) Method and apparatus for lightness imaging
US7024053B2 (en) Method of image processing and electronic camera
JP4698831B2 (en) Image conversion and coding technology
JP2935459B2 (en) System and method for color image enhancement
JPH06225329A (en) Method and device for chromakey processing
US6525741B1 (en) Chroma key of antialiased images
AU2015213286B2 (en) System and method for minimal iteration workflow for image sequence depth enhancement
US7139439B2 (en) Method and apparatus for generating texture for 3D facial model
JP2001503574A (en) Backing luminance non-uniformity compensation in real-time synthesis systems
WO1996029829A1 (en) Improvements in image compositing
JP2882754B2 (en) Soft chroma key processing method
JPH0341570A (en) Color image processing method
JPH11308525A (en) Still image processing unit

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA