GB2330265A - Image compositing using camera data - Google Patents

Image compositing using camera data

Info

Publication number
GB2330265A
GB2330265A (application GB9721591A)
Authority
GB
United Kingdom
Prior art keywords
image
images
data
computer program
location
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9721591A
Other versions
GB9721591D0 (en)
Inventor
Kenneth Philip Appleby
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harlequin Group PLC
Harlequin Group Ltd
Original Assignee
Harlequin Group PLC
Harlequin Group Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harlequin Group PLC, Harlequin Group Ltd filed Critical Harlequin Group PLC
Priority to GB9721591A (GB2330265A)
Publication of GB9721591D0
Priority to GB9815695A (GB2330974A)
Priority to PCT/GB1998/003038 (WO1999019836A1)
Priority to AU94496/98A (AU9449698A)
Publication of GB2330265A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

An image 200 with focal length λk and location data (αk, βk, θk) is to be generated from a sequence of images 1 to N, each having its own associated location and focal length data (αi, βi, θi, λi), (αj, βj, θj, λj), etc. A portion 360 of the image is transformed into a world co-ordinate system (see figure 5), and each of the source images 1 to N is then transformed into world co-ordinates by its associated transform Ri, to determine from which images the image portion 360 can be derived. This portion is then transformed back to the frame co-ordinate system using the inverse transform Ri⁻¹.

Description

Image Processing System, Method and Computer Program Product

The present invention relates to an image data processing system, method, and computer program product.
There exists a significant quantity of recorded image data such as, for example, video, television, films, CD-ROM images and computer-generated images.
Typically, once the images have been recorded, or generated and then recorded, it is very time consuming and very difficult to give effect to any changes to the images.
It is often the case that a recorded sequence of images may contain artefacts which were not intended to be included within the sequence of images.
Alternatively, artefacts may be added to a sequence of images. For example, it may be desirable to add special effects to the images. The removal or addition of such artefacts is referred to generically as postproduction processing and involves a significant degree of skill and time.
The quality of images, in terms of both noise and resolution, recorded upon some form of image carrier, such as a film, video tape, CD-ROM, video disc, mass storage medium or the like, depends firstly upon the quality of the camera used to capture the image and of the recording process, and secondly upon the quality of the image carrier. In order to improve the quality of the images derived from such carriers, complex digital filtering is typically required, which is, again, both time consuming and expensive in terms of supporting hardware.
It is therefore an object of the present invention to at least mitigate some of the problems of the prior art.
Accordingly, the present invention provides a method for producing a first image from a second image and a third image within an image processing system comprising storage means for storing the first image and first image camera data governing the orientation (αk, βk, θk) and the first focal length (λk) of the first image, and for storing the second and third images with second and third image camera data corresponding to the second and third orientations (αi, βi, θi) and (αj, βj, θj) and second and third focal lengths (λi, λj) of a camera when the second and third images were captured, the method comprising the steps of setting the first image camera data; and deriving first image data for the first image selectively from at least the second and third images using the first, second and third image camera data.
A second aspect of the present invention provides an image processing system for producing a first image from a second image and a third image, the system comprising means for storing the first image and first image camera data governing the first orientation (αk, βk, θk) of the first image, and for storing the second and third images with second and third image camera data corresponding to the second and third orientations (αi, βi, θi) and (αj, βj, θj) and second and third focal lengths (λi, λj) of a camera when the second and third images were captured; the system comprising means for setting the first image camera data; and means for deriving first image data for the first image selectively from at least the second and third images using the first, second and third image camera data.
A third aspect of the present invention provides a computer program product for producing a first image from a second image and a third image within an image processing system comprising storage means for storing the first image and first image camera data governing the first orientation (αk, βk, θk) and first focal length (λk) of the first image, and for storing the second and third images with second and third image camera data corresponding to the second and third orientations (αi, βi, θi) and (αj, βj, θj) and second and third focal lengths (λi, λj) of a camera when the second and third images were captured, the computer program product comprising computer program code means for setting the first image camera data; and computer program code means for deriving first image data for the first image selectively from at least the second and third images using the first, second and third image camera data.
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
figure 1 illustrates a computer suitable for implementing an image processing system or method according to the embodiments of the present invention;
figure 2 depicts a sequence of source images from which at least one generated image can be produced;
figure 3 shows the production of a first image using image data derived from two source image frames i and j;
figure 4 illustrates a frame co-ordinate system for an image;
figure 5 shows the mapping of the frame co-ordinate system of figure 4 into a world co-ordinate system;
figure 6 shows, in terms of a world co-ordinate frame of reference, how a vector may intersect several image frames;
figure 7 illustrates from another perspective the intersection of a vector with several image frames;
figure 8 depicts a frame co-ordinate system and image frame of a virtual image, or image to be generated;
figure 9 shows the extrapolation of a portion of the image frame of figure 8 onto a source image;
figure 10 illustrates a flowchart for mapping the image portion of an image to be generated onto a given source image;
figure 11 illustrates the generation of a clean plate;
figures 12 to 15 show the source image frames from which a clean plate can be generated;
figure 16 illustrates a partially completed clean plate;
figure 17 shows a completed clean plate;
figure 18 illustrates the propagation of an edit throughout a sequence of generated images;
figure 19 shows a generated image having a different aspect ratio to that of its source images;
figure 20 illustrates the process of removing a scratch from a source image; and
figure 21 shows the generation of a clean plate from several source images.
Referring to figure 1 there is shown a computer 100 suitable for implementing embodiments of the present invention. The computer 100 comprises at least one microprocessor 102, for executing computer instructions to process data, a memory 104, such as ROM and RAM, accessible by the microprocessor via a system bus 106. Mass storage devices 108 are also accessible via the system bus 106 and are used to store, off-line, data used by and instructions for execution by the microprocessor 102. Information is output and displayed to a user via a display device 110 which typically comprises a VDU together with an appropriate graphics controller. Data is input to the computer using at least one of either of the keyboard 112 and associated controller and the mass storage devices 108.
Referring to Figure 2, there is illustrated schematically the concept of the present invention.
Data for a first image 200, that is an image to be generated, is produced by selecting portions of image data from at least two images 202 and 204 of a sequence of images 206. Each image 1 to N of the sequence of images 206, and the first image to be generated 200, has associated therewith image camera data 208 to 218. The image camera data 208 to 218 governs the position, with reference to, for example, a world co-ordinate frame of reference, of the images 1 to N and the first image 200. The image camera data represents or is related to the focal length of a camera which was used to capture the images 1 to N, as well as the orientation of the camera when the images were captured. For any given image, for example image i 202, the associated image camera data comprises αi and βi, which determine the orientation within the world co-ordinate reference of a frame, and hence indirectly the camera, by way of an axis of rotation. θi represents a degree of rotation about that axis relative to a reference within the world co-ordinate system. The focal length λi of image i 202 governs the distance from the origin, which coincides with the optical centre of the camera, to the centre of the image frame i 202.
Referring to Figure 3, there is shown, in greater detail, the derivation of the first image, image k 200, from second and third images, image i 202 and image j 204. The majority of image k 200 is derived from image i; that is to say, image k is identical to image i but for a first image portion 300. The first image portion 300 is derived from a corresponding portion 302 of the third image 204. Therefore, rather than the first image 200 containing the image data contained within a correspondingly located portion 304 of the second image 202, the image data for the first image portion 300 is derived from the image data contained within portion 302 of the third image.
Each image 1 to N of the sequence of images 206 is stored with reference to a corresponding image frame co-ordinate system such as that shown in Figure 4. The image frame co-ordinate system 400, in the present embodiment, comprises three mutually orthogonal axes 402, 404 and 406 which correspond to the x, y and z axes in a right-handed Cartesian co-ordinate system respectively. An image 408 is centred on the z-axis 406 at a distance λi, which represents the focal length of the camera at the time when the image 408 was captured.
With reference to Figure 5, there is shown the image 408 of Figure 4 and the corresponding image frame co-ordinate system when that co-ordinate system has been mapped into, for example, a world co-ordinate system 500 which also comprises three mutually orthogonal axes 502, 504 and 506 forming a right handed Cartesian co-ordinate system. A pixel Vp within image 408 is mapped from its position within the image frame co-ordinate system to a corresponding position, Vp', within the world co-ordinate system by a suitable matrix S. Therefore, the position, in terms of the world co-ordinate system 500, of any pixel within the image 408 can be determined by multiplying the image frame reference co-ordinates of that pixel by the matrix S.
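The mapping of frame co-ordinates into world co-ordinates can be sketched in code. The following is an illustrative sketch only, not taken from the patent: the function names are hypothetical, the convention by which the axis of rotation is derived from (α, β) is an assumption, and the matrix S is realised here as a rotation matrix built from that axis and the rotation angle θ via Rodrigues' formula.

```python
import numpy as np

def rotation_matrix(alpha, beta, theta):
    # Unit rotation axis from the two orientation angles (assumed
    # spherical convention), rotated through theta radians about it
    # using Rodrigues' formula: R = I + sin(t)K + (1 - cos(t))K^2.
    axis = np.array([np.cos(alpha) * np.cos(beta),
                     np.sin(alpha) * np.cos(beta),
                     np.sin(beta)])
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def frame_to_world(pixel_xy, focal_length, R):
    # A pixel at frame co-ordinates (x, y) lies on the image plane
    # z = focal_length; multiplying by R maps it into world co-ordinates.
    v = np.array([pixel_xy[0], pixel_xy[1], focal_length])
    return R @ v
```

With R equal to the identity, a pixel keeps its frame co-ordinates, which is a convenient sanity check on the mapping.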
There is shown schematically in Figure 6 the sequence of images 1 to N of Figure 2 when positioned within the world co-ordinate system 500. It can be seen that a vector 602, which may represent the line of sight of a virtual camera (not shown), passes through several images of the sequence of images 206. The point of intersection 604 is illustrated as a single pixel on image frame number 74. It will be appreciated that the frame co-ordinates of the point of intersection of the line of sight 602 of the camera within each of the image frames will vary between image frames. Figure 7 shows a sequence of consecutive but non-contiguous images 700, having identified within each of the images 702 to 708 a corresponding portion 710 of the images, as the camera which captured the images 702 to 708 panned about a vertical axis from left to right. It can be seen from Figure 7 that the position of the point of intersection within the images varies as between image frames, and that the location of the corresponding portion 710 traverses the images. Therefore, the position, in terms of frame co-ordinates, of the corresponding portion 710 will also change.
The procedure for determining or selectively obtaining image data of a first image to be generated will now be described. Referring to Figure 8, there is shown a first image to be generated 800 as positioned within a corresponding frame co-ordinate system 802 comprising three mutually orthogonal axes 804, 806 and 808 which form a right-handed Cartesian co-ordinate system. The focal length λk of the image 800 is shown along the z axis. A portion 810 of the image 800 to be generated is selected. The portion 810 may correspond to a single pixel of the image 800 or to a group of pixels. The location, or co-ordinates, of the portion 810 with respect to the frame co-ordinate system 802 is determined. The co-ordinates of the portion 810 are mapped, using an appropriate transformation, into corresponding world co-ordinates of the world co-ordinate system illustrated in Figure 5.
A determination or selection from the sequence of images 206 is then made in order to establish those images from which image data for the first portion 810 can be derived. Each of the source images 1 to N also has associated therewith a matrix R1 to RN which transforms that source image from corresponding frame co-ordinates into world co-ordinates, as well as the inverse of such a matrix, R1⁻¹ to RN⁻¹. The inverse matrices map the images 1 to N from the world co-ordinate system 500 into the corresponding frame co-ordinate systems.
Assume that image i has been selected as a source image from which image data for the first image portion can be derived. The location, Vp′, of the first image portion of the first image within the world co-ordinate system is transformed into the frame co-ordinate system for frame i using the inverse matrix Ri⁻¹ as follows: Vp″ = Ri⁻¹Vp′. The result of the mapping is illustrated in Figure 9. It can be seen from Figure 9 that the location Vp″ of the first portion 810 has fallen short of image i. It is therefore necessary to project the first portion 810 onto image i; that is, it is necessary to determine the corresponding location within image i of the first portion 810. The projection of the first portion 810 onto image i identifies image data 900 of image i which can be used in order to derive the image data of the first image portion 810.
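A sketch of this inverse mapping and projection, with hypothetical function and parameter names, might look as follows; a frame is treated as ineligible when the transformed point lies behind the optical centre.

```python
import numpy as np

def project_onto_source(vp_world, R_inv, focal_length):
    # Map the world-space location of the image portion into the
    # source frame's co-ordinate system.
    x, y, z = R_inv @ vp_world
    if z <= 0:
        return None  # behind the camera: this source frame is ineligible
    # Project along the ray through the origin (the optical centre)
    # onto the image plane z = focal_length.
    return np.array([x * focal_length / z,
                     y * focal_length / z,
                     focal_length])
```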
Referring to Figure 10, there is shown schematically a flow chart which implements the mapping of a first image portion 810 of a first image onto a second image portion 900 of a second image, for example image i, in order to determine second image data of the second image portion 900 from which first image data can be derived for the first image.
It will be appreciated from Figures 6 and 7 that any orientation of the co-ordinate system shown in Figure 8 within the world co-ordinate system may map the first image portion onto several images from which image data can be derived in order to produce the first image data for the first portion 810. Therefore, there will be several projections of the first image portion onto prospective images from which data can be derived.
In such a case, the steps 1006 to 1004 of Figure 10 are repeated for all or a selectable portion of the eligible source images 206. An eligible source image is an image which is intersected by the vector, or the extrapolation thereof, produced by the co-ordinates of the first image portion 810 in terms of corresponding frame co-ordinates.
There are various embodiments or combinations of embodiments which can be used in order to determine or derive the image data for the first image portion 810 from the eligible source images. One embodiment determines the distance of each of the projections 900 of the first portion 810 onto the eligible source frames 206 and selects image data according to the distances.
For any given point or portion having co-ordinates p″ = (x″, y″, z″), given in terms of a frame co-ordinate system, the location of the point of intersection, or the projection of that point onto the image, is given by (x″λi/z″, y″λi/z″, λi). The distance of the projections 900 of the first image portion 810 onto the eligible source images from the corresponding centres of the eligible source images is determined. The image data of the projection 900 which is closest to its corresponding image frame centre is used as the basis for deriving first image data for the first image portion 810. In an alternative embodiment, the image data of the projections 900 of the first image portion 810 onto the eligible source frames 206 is averaged, and that average value is used to determine or derive the first image data for the first image portion 810. In a preferred embodiment, the above calculated average is weighted according to the distance of any given projection from its corresponding image frame centre.
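The distance-weighted blending just described might be sketched as follows. This is an assumption-laden illustration: the text says only that the average is weighted according to distance from the image frame centre, so the inverse-distance weighting chosen here is one plausible choice, and the function name is hypothetical.

```python
import numpy as np

def blend_by_centre_distance(samples):
    # samples: list of (projection_xy, rgb) pairs, one per eligible
    # source image, with projection_xy measured from that frame's centre.
    weights, colours = [], []
    for xy, rgb in samples:
        d = np.hypot(xy[0], xy[1])       # distance from the frame centre
        weights.append(1.0 / (1.0 + d))  # closer to centre -> heavier weight
        colours.append(np.asarray(rgb, dtype=float))
    w = np.array(weights)
    return (w[:, None] * np.array(colours)).sum(axis=0) / w.sum()
```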
The above process of selecting an image portion 810 of a first image to be generated and the determination of appropriate image data from eligible or given source images is repeated until a complete first image is formed.
Typically, the image data retrieved from the projections 900 represents the colour or RGB components at the location of the projections within the eligible source images. The colour data may be derived or sampled from an eligible source image using, for example, a bilinear interpolation technique or some other more sophisticated sampling technique. By sampling images and interpolating, the resolution of the generated image can be increased as compared to the eligible source images, with a consequent improvement in image quality.
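A minimal bilinear sampler of the kind mentioned might look like the sketch below (illustrative only; the patent does not give an implementation). It samples an image at a non-integer projected location, clamping to the image border.

```python
import numpy as np

def bilinear_sample(image, x, y):
    # image: H x W x C array; (x, y) a non-integer sample location.
    h, w = image.shape[:2]
    x0 = int(np.clip(np.floor(x), 0, w - 2))
    y0 = int(np.clip(np.floor(y), 0, h - 2))
    fx, fy = x - x0, y - y0
    # Blend the four neighbouring pixels horizontally, then vertically.
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x0 + 1]
    bot = (1 - fx) * image[y0 + 1, x0] + fx * image[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot
```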
Furthermore, the noise of images which is attributable to the grain of, for example, a film medium or a video tape used to record the source images can be reduced by generating images using the present invention. Still further, the noise introduced during image capture can be reduced if the data for any given pixel, or portion of the image to be generated, is derived from corresponding portions of several images.
Applications of the present invention to the processing of a sequence of images will now be described.
Clean plate generation

As mentioned above, it is often the case that a scene which contains only background information, that is, a scene completely free of actors, is required in order to produce special effects.
Referring to Figure 11 there is shown an image 1100 which has been generated from a plurality of eligible source images in which the image data has been selected from the projections which were closest to the centre of corresponding eligible source image. It can be seen from within the defined area 1102 that this image data selection strategy results in a smearing or blurring of foreground or moving images. It will be appreciated that the source image sequence from which the image 1100 was derived illustrates a person walking past the steps of an entrance to a house. These individual eligible source images or at least a selection of the individual eligible source images from which the image 1100 in Figure 11 can be derived are shown in Figure 12, 13, 14 and 15. Referring more particularly to Figure 12, there is shown a region 1202 from which background image data can be derived in order to remove the smearing which is depicted in Figure 11 within box 1102. The image data derived from the portion of Figure 12 defined by the region 1202 will be sufficient to remove the smearing defined within box 1104 of image 1100, that is the three right most depictions of the person can be removed by incorporating into the image 1100 the background image data contained within or derived from the identified region 1202 of Figure 12. Similarly, the left most person depicted in Figure 11 can be removed by copying into that region of Figure 11 the image data defined or contained within, for example, the region 1502 of Figure 15. Therefore, by firstly combining the image data of Figures 12, 13, 14 and 15 and then by combining the resultant image 1100 shown in Figure 11 with selected portions of the source images an image 1600 as shown in Figure 16 can be produced.
The image 1600 shown in Figure 16 represents a clean plate, that is, an image which contains only background image data. Figure 17 represents the clean plate 1600 which results from the above processing without the defining box 1102.
Object removal, object placement

Referring to Figure 18 there is shown a sequence of four consecutive but non-contiguous source images 1800 to 1806. A selected portion of one frame 1800 is edited in order to remove, for example, the eagle 1808 using either an appropriate tool for editing or a combination of selectable portions of source image 1800 and, for example, source image 1806 in substantially the same manner as defined above with reference to the clean plate generation process. The edited frame 1810 is utilised in conjunction with the remaining source images 1802 to 1806 in order to generate new images 1812 to 1818. For each of the generated images 1812 to 1818 there is a defined region 1820 to 1826 in which the enclosed image data is derived from a corresponding portion of the edited source frame 1810. It can therefore be seen that the eagle 1808 has been removed from the source frame 1800 and that removal has been propagated throughout subsequently generated images 1812 to 1818. Similarly, a source frame could be edited to include some new artefact and that new artefact can be propagated readily throughout subsequently generated images in substantially the same manner as described above with reference to object removal.
Different camera formats

Using the present invention, images can be generated which have a different camera format as compared to the format in which the source images were captured. Referring again to Figure 18, there are shown four consecutive but non-contiguous source images 1800 to 1806 which have been recorded in a particular format, for example, having a particular aspect ratio.
Referring to Figure 19, there is shown a single generated image 1900 having a significantly different aspect ratio as compared to that of the source images of Figure 18. The generated image 1900 of Figure 19 has been derived from several source images. In this way the aspect ratio of a generated image can be set.
The smearing depicted in Figure 11 is also apparent in Figure 19. The image data for the area of the larger aspect ratio image 1900 contained within the box 1904 can be derived from at least one of any of the source images 1800 to 1806. Therefore, the larger aspect ratio image comprises the background of several source images together with the foreground or action derived from at least one of the source images. The above derivation can be repeated in order to produce a plurality of larger aspect ratio source images or a sequence of larger aspect ratio images.
Scratch/blemish removal

Referring to Figure 20 there is shown a sequence of consecutive but non-contiguous source frames 2002 to 2006 in which one of the source images 2004 comprises a scratch 2008. An image frame 2010 is generated in which the image data within a pre-defined area 2012 of the source frame is derived from a corresponding area of, for example, a source image 2006 in substantially the same manner as described above. Therefore, the scratch 2008 can be removed from the image, thereby improving the quality of the images.
Clearing the streets

Referring to Figure 21, there is shown four consecutive but non-contiguous frames 2100 to 2106 each containing a defined region 2108 which, over a significant number of source images, would have substantially constant image data. The exception to the substantially constant image data arises when a moving object passes through the region 2108. An image or clean plate 2110 can be generated from all or a selectable subset of the source images as follows. For each pixel within the image 2110 to be generated, corresponding image data is derived or extracted from all or a selectable subset of the eligible source images. It will be appreciated that the derived image data for those regions of each source image which do not comprise moving images will remain substantially constant. The image data derived from the source images for any given pixel or portion of the image to be generated is arranged within, for example, a histogram according to, for example, predeterminable bands of the colour information. The histogram is used to determine the most frequently occurring colour or pre-determinable range of colours within the corresponding portions of the source frames. That most frequently occurring colour is then used for the appropriate portion of the generated image 2110. In this way an image or clean plate 2110 is generated which is completely free of any moving or changing image data.
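The per-pixel histogram strategy above can be sketched as follows. The band size (sixteen colour bands per channel) and the choice of a representative pixel from the modal band are assumptions; the text specifies only pre-determinable bands and selection of the most frequently occurring colour.

```python
import numpy as np

def clean_plate_mode(frames, bins=16):
    # frames: N x H x W x 3 uint8 stack of aligned source images.
    frames = np.asarray(frames)
    n, h, w, _ = frames.shape
    band = 256 // bins
    q = frames // band  # quantise each channel into colour bands
    # One integer code per pixel per frame, combining the three bands.
    codes = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    out = np.zeros((h, w, 3), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            vals, counts = np.unique(codes[:, y, x], return_counts=True)
            mode = vals[np.argmax(counts)]
            # Use the first frame whose pixel falls in the modal band.
            idx = int(np.argmax(codes[:, y, x] == mode))
            out[y, x] = frames[idx, y, x]
    return out
```

Moving objects, which occupy any given pixel in only a minority of frames, fall outside the modal band and are thereby excluded from the generated plate.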
Clean sequence generation

It will be appreciated that repeated application of the above clean plate generation process or the "Clearing the streets" process can be utilised in order to produce a sequence of generated images which contain only background information. Such a clean sequence is again very useful for compositing and rotoscoping techniques when producing special effects for a film.
Although specific instances of the application of the above invention have been given, it will be appreciated that the present invention is not limited thereto. An image processing system can be realised in which any combination or all of the above features are provided.
The term captured refers to the generation or recording of digital images. It will therefore be appreciated that a CCD camcorder captures digital images and stores the captured images on a suitable tape.
It will also be appreciated that capturing an image includes the generation of a digital image within or by a computer or other image/data processing environment.
The images which are generated as a consequence of the present invention may be output together with the image camera data for further processing such as, for example, incorporation within a virtual reality environment or to an environment which combines computer generated images with the generated images.
Similarly, camera data taken from a computer animation environment may, together with the corresponding images, be utilised in order to combine the animated images with the captured images.

Claims (25)

  1. A method for producing a first image from a second image and a third image within an image processing system comprising storage means for storing the first image and first image camera data governing the orientation (αk, βk, θk) and a first focal length (λk) of the first image, and for storing the second and third images with second and third image camera data corresponding to the second and third orientations (αi, βi, θi) and (αj, βj, θj) and second and third focal lengths (λi, λj) of a camera when the second and third images were captured, the method comprising the steps of setting the first image camera data; and deriving first image data for the first image selectively from at least the second and third images using the first, second and third image camera data.
  2. A method as claimed in claim 1, wherein the step of deriving comprises the steps of calculating, using the first image camera data, the location of a first portion of the first image; calculating, using the second image camera data and the location of the first portion, the location of a second image portion of the second image which corresponds to the location of the first portion within the second image; calculating, using the third image camera data and the location of the first portion, the location of a third image portion of the third image which corresponds to the location of the first portion within the third image; identifying second and third image data of the second and third image portions respectively; and establishing the first image data from at least the second and third image data.
  3. A method as claimed in claim 2, wherein the step of establishing comprises the steps of determining the centres of the second and third images; determining the distance of the second image portion from the centre of the second image; and determining the distance of the third image portion from the centre of the third image.
  4. A method as claimed in claim 3, wherein the step of establishing further comprises the steps of calculating an average, preferably a weighted average according to the distance of the second and third image portions from the respective centres of the second and third images, of the second and third image data; and setting the first image data to equal the calculated average.
  5. A method as claimed in claim 3, wherein the step of establishing further comprises the step of setting the first image data to equal the second image data if the second image portion is closer to the centre of the second image than the third image portion is to the centre of the third image; or setting the first image data to equal the third image data if the third image portion is closer to the centre of the third image than the second image portion is to the centre of the second image.
  6. A method as claimed in any preceding claim, comprising the step of setting the size of the first image such that it is different to, preferably greater than, the size of the second image or the size of the third image.
  7. A method as claimed in any preceding claim, wherein the second and third images form part of a sequence of images and the method further comprises the step of selecting the second and third images from the sequence of images according to predeterminable criteria.
  8. A method substantially as described herein with reference to and/or as illustrated by the accompanying drawings.
  9. A method for manufacturing an image carrier, preferably a film, video tape, video disc, CD-ROM, magnetic storage medium or the like, from a sequence of images, comprising the steps of: producing a first image from at least selected second and third images using a method as claimed in any preceding claim; and putting the first image onto said image carrier.
  10. An image processing system for producing a first image from a second image and a third image, the system comprising means for storing the first image and first image camera data governing the orientation (αk, βk, θk) of the first image and for storing the second and third images with second and third image camera data corresponding to the second and third orientations (αi, βi, θi; αj, βj, θj) and second and third focal lengths (λi, λj) of a camera when the second and third images were captured; means for setting the first image camera data; and means for deriving the first image data for the first image selectively from at least the second and third images using the first, second and third image camera data.
  11. A system as claimed in claim 10, wherein the means for deriving comprises means for calculating, using the first image camera data, the location of a first portion of the first image; means for calculating, using the second image camera data and the location of the first portion, the location of a second image portion of the second image which corresponds to the location of the first portion within the second image; means for calculating, using the third image camera data and the location of the first portion, the location of a third image portion of the third image which corresponds to the location of the first portion within the third image; means for identifying second and third image data of the second and third image portions respectively; and means for establishing the first image data from at least the second and third image data.
  12. A system as claimed in claim 11, wherein the means for establishing comprises means for determining the centres of the second and third images; means for determining the distance of the second image portion from the centre of the second image; and means for determining the distance of the third image portion from the centre of the third image.
  13. A system as claimed in claim 12, wherein the means for establishing further comprises means for calculating an average, preferably a weighted average according to the distance of the second and third image portions from the respective centres of the second and third images, of the second and third image data; and means for setting the first image data to equal the calculated average.
  14. A system as claimed in claim 12, wherein the means for establishing further comprises means for setting the first image data to equal the second image data if the second image portion is closer to the centre of the second image than the third image portion is to the centre of the third image; and means for setting the first image data to equal the third image data if the third image portion is closer to the centre of the third image than the second image portion is to the centre of the second image.
  15. A system as claimed in any of claims 10 to 14, comprising means for setting the size of the first image such that it is different to, preferably greater than, the size of the second image or the size of the third image.
  16. A system as claimed in any of claims 10 to 15, wherein the second and third images form part of a sequence of images and the system further comprises means for selecting the second and third images from the sequence of images according to predeterminable criteria.
  17. An image processing system substantially as described herein with reference to and/or as illustrated in the accompanying drawings.
  18. A computer program product for producing a first image from a second image and a third image within an image processing system comprising storage means for storing the first image and first image camera data governing the first orientation (αk, βk, θk) and first focal length (λk) of the first image and for storing the second and third images with second and third image camera data corresponding to the second and third orientations (αi, βi, θi; αj, βj, θj) and second and third focal lengths (λi, λj) of a camera when the second and third images were captured, the computer program product comprising computer program code means for setting the first image camera data; and computer program code means for deriving first image data for the first image selectively from at least the second and third images using the first, second and third image camera data.
  19. A product as claimed in claim 18, wherein the computer program code means for deriving comprises computer program code means for calculating, using the first image camera data, the location of a first portion of the first image; computer program code means for calculating, using the second image camera data and the location of the first portion, the location of a second image portion of the second image which corresponds to the location of the first portion within the second image; computer program code means for calculating, using the third image camera data and the location of the first portion, the location of a third image portion of the third image which corresponds to the location of the first portion within the third image; computer program code means for identifying second and third image data of the second and third image portions respectively; and computer program code means for establishing said first image data from at least the second and third image data.
  20. A product as claimed in claim 19, wherein the computer program code means for establishing comprises computer program code means for determining the centres of the second and third images; computer program code means for determining the distance of the second image portion from the centre of the second image; and computer program code means for determining the distance of the third image portion from the centre of the third image.
  21. A product as claimed in claim 20, wherein the computer program code means for establishing further comprises computer program code means for calculating an average, preferably a weighted average according to the distance of the second and third image portions from the respective centres of the second and third images, of the second and third image data; and computer program code means for setting the first image data to equal the calculated average.
  22. A product as claimed in claim 20, wherein the computer program code means for establishing further comprises computer program code means for setting the first image data to equal the second image data if the second image portion is closer to the centre of the second image than the third image portion is to the centre of the third image; and computer program code means for setting the first image data to equal the third image data if the third image portion is closer to the centre of the third image than the second image portion is to the centre of the second image.
  23. A product as claimed in any of claims 18 to 22, further comprising computer program code means for setting the size of the first image such that it is different to, preferably greater than, the size of the second image or the size of the third image.
  24. A product as claimed in any of claims 18 to 23, wherein the second and third images form part of a sequence of images and the product further comprises: computer program code means for selecting the second and third images from the sequence of images according to predeterminable criteria.
  25. A computer program product substantially as described herein with reference to and/or as illustrated by the accompanying drawings.
GB9721591A 1997-10-10 1997-10-10 Image compositing using camera data Withdrawn GB2330265A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
GB9721591A GB2330265A (en) 1997-10-10 1997-10-10 Image compositing using camera data
GB9815695A GB2330974A (en) 1997-10-10 1998-07-20 Image matte production without blue screen
PCT/GB1998/003038 WO1999019836A1 (en) 1997-10-10 1998-10-12 Image processing system, method and computer program product
AU94496/98A AU9449698A (en) 1997-10-10 1998-10-12 Image processing system, method and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9721591A GB2330265A (en) 1997-10-10 1997-10-10 Image compositing using camera data

Publications (2)

Publication Number Publication Date
GB9721591D0 GB9721591D0 (en) 1997-12-10
GB2330265A true GB2330265A (en) 1999-04-14

Family

ID=10820411

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9721591A Withdrawn GB2330265A (en) 1997-10-10 1997-10-10 Image compositing using camera data

Country Status (1)

Country Link
GB (1) GB2330265A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3914540A (en) * 1974-10-03 1975-10-21 Magicam Inc Optical node correcting circuit
WO1994017636A1 (en) * 1993-01-29 1994-08-04 Bell Communications Research, Inc. Automatic tracking camera control system
US5424733A (en) * 1993-02-17 1995-06-13 Zenith Electronics Corp. Parallel path variable length decoding for video signals
WO1997003517A1 (en) * 1995-07-13 1997-01-30 Robert John Beattie Methods and apparatus for producing composite video images

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014088809A1 (en) * 2012-12-04 2014-06-12 Motorola Solutions, Inc. Transmission of images for inventory monitoring
US9380222B2 (en) 2012-12-04 2016-06-28 Symbol Technologies, Llc Transmission of images for inventory monitoring
US9747677B2 (en) 2012-12-04 2017-08-29 Symbol Technologies, Llc Transmission of images for inventory monitoring
US10352689B2 (en) 2016-01-28 2019-07-16 Symbol Technologies, Llc Methods and systems for high precision locationing with depth values
US11042161B2 (en) 2016-11-16 2021-06-22 Symbol Technologies, Llc Navigation control method and apparatus in a mobile automation system
US10505057B2 (en) 2017-05-01 2019-12-10 Symbol Technologies, Llc Device and method for operating cameras and light sources wherein parasitic reflections from a paired light source are not reflected into the paired camera
US10591918B2 (en) 2017-05-01 2020-03-17 Symbol Technologies, Llc Fixed segmented lattice planning for a mobile automation apparatus
US10663590B2 (en) 2017-05-01 2020-05-26 Symbol Technologies, Llc Device and method for merging lidar data
US10726273B2 (en) 2017-05-01 2020-07-28 Symbol Technologies, Llc Method and apparatus for shelf feature and object placement detection from shelf images
US11978011B2 (en) 2017-05-01 2024-05-07 Symbol Technologies, Llc Method and apparatus for object status detection
US11093896B2 (en) 2017-05-01 2021-08-17 Symbol Technologies, Llc Product status detection system
US11367092B2 (en) 2017-05-01 2022-06-21 Symbol Technologies, Llc Method and apparatus for extracting and processing price text from an image set
US11449059B2 (en) 2017-05-01 2022-09-20 Symbol Technologies, Llc Obstacle detection for a mobile automation apparatus
US10949798B2 (en) 2017-05-01 2021-03-16 Symbol Technologies, Llc Multimodal localization and mapping for a mobile automation apparatus
US11600084B2 (en) 2017-05-05 2023-03-07 Symbol Technologies, Llc Method and apparatus for detecting and interpreting price label text
US10572763B2 (en) 2017-09-07 2020-02-25 Symbol Technologies, Llc Method and apparatus for support surface edge detection
US10521914B2 (en) 2017-09-07 2019-12-31 Symbol Technologies, Llc Multi-sensor object recognition system and method
US10809078B2 (en) 2018-04-05 2020-10-20 Symbol Technologies, Llc Method, system and apparatus for dynamic path generation
US10832436B2 (en) 2018-04-05 2020-11-10 Symbol Technologies, Llc Method, system and apparatus for recovering label positions
US10823572B2 (en) 2018-04-05 2020-11-03 Symbol Technologies, Llc Method, system and apparatus for generating navigational data
US11327504B2 (en) 2018-04-05 2022-05-10 Symbol Technologies, Llc Method, system and apparatus for mobile automation apparatus localization
US10740911B2 (en) 2018-04-05 2020-08-11 Symbol Technologies, Llc Method, system and apparatus for correcting translucency artifacts in data representing a support structure
US11506483B2 (en) 2018-10-05 2022-11-22 Zebra Technologies Corporation Method, system and apparatus for support structure depth determination
US11010920B2 (en) 2018-10-05 2021-05-18 Zebra Technologies Corporation Method, system and apparatus for object detection in point clouds
US11090811B2 (en) 2018-11-13 2021-08-17 Zebra Technologies Corporation Method and apparatus for labeling of support structures
US11003188B2 (en) 2018-11-13 2021-05-11 Zebra Technologies Corporation Method, system and apparatus for obstacle handling in navigational path generation
US11079240B2 (en) 2018-12-07 2021-08-03 Zebra Technologies Corporation Method, system and apparatus for adaptive particle filter localization
US11416000B2 (en) 2018-12-07 2022-08-16 Zebra Technologies Corporation Method and apparatus for navigational ray tracing
US11100303B2 (en) 2018-12-10 2021-08-24 Zebra Technologies Corporation Method, system and apparatus for auxiliary label detection and association
US11015938B2 (en) 2018-12-12 2021-05-25 Zebra Technologies Corporation Method, system and apparatus for navigational assistance
US10731970B2 (en) 2018-12-13 2020-08-04 Zebra Technologies Corporation Method, system and apparatus for support structure detection
US11592826B2 (en) 2018-12-28 2023-02-28 Zebra Technologies Corporation Method, system and apparatus for dynamic loop closure in mapping trajectories
US11402846B2 (en) 2019-06-03 2022-08-02 Zebra Technologies Corporation Method, system and apparatus for mitigating data capture light leakage
US11080566B2 (en) 2019-06-03 2021-08-03 Zebra Technologies Corporation Method, system and apparatus for gap detection in support structures with peg regions
US11960286B2 (en) 2019-06-03 2024-04-16 Zebra Technologies Corporation Method, system and apparatus for dynamic task sequencing
US11151743B2 (en) 2019-06-03 2021-10-19 Zebra Technologies Corporation Method, system and apparatus for end of aisle detection
US11200677B2 (en) 2019-06-03 2021-12-14 Zebra Technologies Corporation Method, system and apparatus for shelf edge detection
US11341663B2 (en) 2019-06-03 2022-05-24 Zebra Technologies Corporation Method, system and apparatus for detecting support structure obstructions
US11662739B2 (en) 2019-06-03 2023-05-30 Zebra Technologies Corporation Method, system and apparatus for adaptive ceiling-based localization
US11507103B2 (en) 2019-12-04 2022-11-22 Zebra Technologies Corporation Method, system and apparatus for localization-based historical obstacle handling
US11107238B2 (en) 2019-12-13 2021-08-31 Zebra Technologies Corporation Method, system and apparatus for detecting item facings
US11822333B2 (en) 2020-03-30 2023-11-21 Zebra Technologies Corporation Method, system and apparatus for data capture illumination control
US11450024B2 (en) 2020-07-17 2022-09-20 Zebra Technologies Corporation Mixed depth object detection
US11593915B2 (en) 2020-10-21 2023-02-28 Zebra Technologies Corporation Parallax-tolerant panoramic image generation
US11392891B2 (en) 2020-11-03 2022-07-19 Zebra Technologies Corporation Item placement detection and optimization in material handling systems
US11847832B2 (en) 2020-11-11 2023-12-19 Zebra Technologies Corporation Object classification for autonomous navigation systems
US11954882B2 (en) 2021-06-17 2024-04-09 Zebra Technologies Corporation Feature-based georegistration for mobile computing devices

Also Published As

Publication number Publication date
GB9721591D0 (en) 1997-12-10

Similar Documents

Publication Publication Date Title
GB2330265A (en) Image compositing using camera data
US5577175A (en) 3-dimensional animation generating apparatus and a method for generating a 3-dimensional animation
US5613048A (en) Three-dimensional image synthesis using view interpolation
EP1148443B1 (en) Apparatus and method for video signal processing
US7084875B2 (en) Processing scene objects
EP0358498B1 (en) Method and apparatus for generating animated images
US6466205B2 (en) System and method for creating 3D models from 2D sequential image data
US8953905B2 (en) Rapid workflow system and method for image sequence depth enhancement
US5077608A (en) Video effects system able to intersect a 3-D image with a 2-D image
US8897596B1 (en) System and method for rapid image sequence depth enhancement with translucent elements
US7511730B2 (en) Image processing apparatus and method and image pickup apparatus
US6243488B1 (en) Method and apparatus for rendering a two dimensional image from three dimensional image data
KR101168384B1 (en) Method of generating a depth map, depth map generating unit, image processing apparatus and computer program product
US6429875B1 (en) Processing image data
US20020118217A1 (en) Apparatus, method, program code, and storage medium for image processing
WO2005083630A2 (en) Creating a depth map
JP2000251090A (en) Drawing device, and method for representing depth of field by the drawing device
KR20030036747A (en) Method and apparatus for superimposing a user image in an original image
EP0903695B1 (en) Image processing apparatus
EP0656609B1 (en) Image processing
EP1111910A2 (en) Media pipeline with multichannel video processing and playback
JP2973432B2 (en) Image processing method and apparatus
JP2022029730A (en) Three-dimensional (3d) model generation apparatus, virtual viewpoint video generation apparatus, method, and program
WO1999019836A1 (en) Image processing system, method and computer program product
JP3727768B2 (en) Computer image display method and display device

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)