GB2369260A - Generation of composite images by weighted summation of pixel values of the separate images

Info

Publication number: GB2369260A
Application number: GB0026347A
Other versions: GB0026347D0, GB2369260B
Authority: GB (United Kingdom)
Inventor: Richard Antony Kirk
Assignee: Canon Inc
Application filed by Canon Inc
Legal status: Granted; Expired - Fee Related
(The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; studio devices; studio equipment; cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules for embedding in other devices
    • H04N5/225 Television cameras; cameras comprising an electronic image sensor specially adapted for being embedded in other devices
    • H04N5/232 Devices for controlling television cameras, e.g. remote control; control of cameras comprising an electronic image sensor
    • H04N5/23238 Control of image capture or reproduction to achieve a very large field of view, e.g. panorama
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
    • H04N5/2622 Signal amplitude transition in the zone between image portions, e.g. soft edges
    • H04N5/2624 ... for obtaining an image which is composed of whole input images, e.g. splitscreen

Abstract

Pixel data are received for two images of different views wherein at least a portion of the first and second views overlaps. Preference data identifying the relative preference for utilising pixel data from the first set, and difference data indicative of the difference in pixel data between the first set and the second set of pixel data in the overlap region of the images are utilised to form a composite image. For each overlap pixel in the composite image a composite pixel value is determined by summing the pixel data from the two sets of pixel data by applying weighting factors selected on the basis of the preference data and the difference data. The invention is described as being utilised in a system for texture rendering a generated three-dimensional computer model of objects within images.
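The weighting scheme summarised in the abstract can be sketched as follows. This is an illustrative sketch only, not the blend function claimed by the patent: the particular rule (blend by preference where the sources agree, weight entirely towards the preferred source where they differ strongly) and the threshold value are assumptions chosen to show the idea.

```python
def composite_pixel(p1, p2, preference, difference_threshold=30.0):
    """Blend two overlapping pixel values into one composite value.

    p1, p2:     pixel values from the first and second source images.
    preference: relative preference (0..1) for using p1 over p2.

    Where the two sources disagree strongly, the weights are pushed
    towards the preferred source (avoiding ghost features); where they
    agree, an ordinary preference-weighted average is used.  NOTE: this
    rule and the threshold are illustrative assumptions, not the blend
    function defined in the patent.
    """
    difference = abs(p1 - p2)
    if difference > difference_threshold:
        # Large difference: weight entirely towards the preferred source.
        w1 = 1.0 if preference >= 0.5 else 0.0
    else:
        # Small difference: blend smoothly according to preference.
        w1 = preference
    return w1 * p1 + (1.0 - w1) * p2
```

Summing with weights that collapse to the preferred source where the images disagree avoids the ghosting and contrast loss that plain averaging produces, while still blending smoothly where the images agree.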

Description

METHOD AND APPARATUS FOR THE GENERATION OF COMPOSITE IMAGES

The present application concerns a method and apparatus for the generation of composite images from multiple sources of image data. In particular, the present application concerns a method and apparatus for the generation of composite images where data indicative of the relative preference for utilizing particular sources of image data can be obtained.

Frequently it is desirable to merge one or more images into a composite image. For example, a number of overlapping views, where parts of the same scene appear in different images, may be combined with one another to obtain a single large image.

It is often possible when combining images to determine a particular source image, or portion of a source image, to be preferred for the generation of a particular part of a composite image. For example, image data from the centre of an image might be preferred due to distortions which are most noticeable at the edge of an image, such as arise in images obtained by wide-angle lenses. Alternatively, where image data of a modelled object is utilized to texture render the computer model of the object, it can be possible to determine the extent to which different portions of the surface of a model are visible in different images, and this information can be utilized to select images for generating texture map data for texture rendering the model.

When this can be achieved, composite images can be generated by determining, for each part of a composite image to be formed, a preferred image source to generate that part of the composite image. Composite images can then be generated by combining selected portions of different preferred source images. Thus, for example, where the centres of images are preferred, a composite image might be generated as a patchwork of the central parts of the source images.
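The patchwork approach described above amounts to a per-pixel hard selection of the most preferred source. A minimal sketch, assuming pre-aligned greyscale sources stacked in an array (function name and array layout are illustrative, not from the patent):

```python
import numpy as np

def select_composite(images, preferences):
    """Form a composite by taking, at each pixel, the value from the
    source image with the highest preference score.

    images:      (n, h, w) array of aligned greyscale source images.
    preferences: (n, h, w) array of per-pixel preference scores.
    """
    best = np.argmax(preferences, axis=0)   # (h, w): index of preferred source
    rows, cols = np.indices(best.shape)
    return images[best, rows, cols]
```

This uses the 'best' data everywhere, but, as the text notes, the hard switch between sources is exactly what produces noticeable boundaries between regions drawn from different images.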

This method has the advantage that the preferred or 'best' image data is used to generate every part of the composite image. However, where a composite image is generated in this way, the selection of different images for different parts of the composite image can result in noticeable boundaries between regions where different source images have been used.

An alternative approach for generating composite image data is to take an average of the available image data for areas of overlap, weighted by how far the different sources of image data are to be preferred. By calculating an average value for portions of overlap, a means is provided by which the joins between portions of a composite image obtained from different sources of image data can be blended into one another. The inventors have realised that averaging image data can itself generate a number of problems. In particular, where the available sets of image data are not perfectly aligned, averaging image data can result in the generation of 'ghost' features. Furthermore, where the images may have different highlights and shadows, the effect of averaging image data tends to remove the contrast within the image. As a result, although joins are not noticeable in such an averaged image, the image tends to lack the realistic texture of the original image data. An alternative method and apparatus for generating composite images is therefore required in which boundaries between portions of a composite image generated from different sets of image data are less

noticeable and in which original image quality is maintained.

In accordance with one aspect of the present invention there is provided an apparatus for generating composite images comprising: receiving means for receiving sets of pixel value image data corresponding to an area of overlap of at least two images, and preference data indicative of the relative preference for utilizing pixel value image data from different images for generating portions of an overlapping image; means for determining the difference between pixel values of image data from different sets of image data corresponding to the same portion of an image; and means for generating composite image data by determining a weighted average of pixel values of image data from different sets of image data corresponding to the same portion of an overlapping image received by said receiving means, utilizing weighting factors determined on the basis of the difference between pixel values of image data determined by said determining means and received preference data.

In accordance with a further aspect of the present

invention there is provided a method of generating composite image data from a plurality of items of image data each comprising a plurality of pixels, said method comprising the steps of: receiving a plurality of items of image data corresponding to the same portion of a composite image to be generated; receiving, for each pixel of said items of image data, confidence data indicative of the relative preference for utilizing said pixel to generate said composite image data relative to pixels in said other items of image data corresponding to the same portion of said image to be generated; determining, for each pixel of said items of image data, difference data indicative of the difference between said pixel and pixels in said other items of image data corresponding to the same portion of said image to be generated; and generating composite image data for said composite image, wherein data for each pixel in said composite image comprises a determined weighted average of pixel data for pixels in said plurality of items of image data corresponding to said pixel in said composite image, the weights utilized to determine said average being selected utilizing said difference data and confidence data for said pixels in said plurality of items of image data.

Further aspects and embodiments of the present invention will become apparent with reference to the following description and drawings, in which:

Figure 1 is a schematic block diagram of a first embodiment of the present invention;

Figure 2 is an exemplary illustration of the position and orientation of six texture maps bounding an exemplary subject object;

Figure 3 is a schematic block diagram of the surface texturer of Figure 1;

Figure 4 is a flow diagram of the processing of the weight determination module of Figure 3;

Figure 5 is a flow diagram of processing to determine the visibility of portions of the surface of an object in input images;

Figure 6 is a flow diagram of the processing of visibility scores to ensure relatively smooth variation of the scores;

Figures 7 and 8 are an exemplary illustration of the processing of an area of a model ensuring smooth variation in scores;

Figure 9 is a flow diagram of the generation of weight function data;

Figures 10A and 10B are an illustrative example of a selection of triangles with associated visibility scores and generated weight function data;

Figure 11 is a schematic block diagram of a texture map determination module;

Figure 12 is a flow diagram of the processing of low frequency canonical projections;

Figure 13 is a flow diagram of the processing of high frequency canonical projections;

Figure 14 is a graph illustrating generated blend functions;

Figure 15 is a schematic block diagram of a surface texturer in accordance with a second embodiment of the present invention;

Figure 16 is a flow diagram of the processing of the canonical view determination module of the surface texturer of Figure 15;

Figure 17 is a schematic illustrative example of the manner in which two overlapping images may be combined to form a composite image;

Figure 18 is a schematic block diagram of an embodiment of the present invention;

Figure 19 is a schematic illustrative diagram of the manner in which confidence in an image might vary with respect to position within a single image;

Figure 20 is a schematic illustration of how the relative confidence in image data of a first image within the overlap of the two images in Figure 17 varies in accordance with the confidence variations illustrated in Figure 19; and

Figure 21 is a flow diagram of the processing of an image combination module in accordance with this embodiment of the present invention.

A first embodiment will now be described in which a number of generated projected images corresponding to the same viewpoints are combined to generate composite texture map data for texture rendering a 3D computer model of object(s) appearing in the images.

First Embodiment

Referring to Figure 1, an embodiment of the invention comprises a processing apparatus 2, such as a personal computer, containing, in a conventional manner, one or more processors, memories, graphics cards etc., together with a display device 4, such as a conventional personal computer monitor, user input devices 6, such as a keyboard, mouse etc., and a printer 8.

The processing apparatus 2 is programmed to operate in accordance with programming instructions input, for example, as data stored on a data storage medium, such as disk 12, and/or as a signal 14 input to the processing apparatus 2, for example from a remote database, by transmission over a communication network (not shown) such as the Internet or by transmission through the atmosphere, and/or entered by a user via a user input device 6 such as a keyboard.

As will be described in more detail below, the programming instructions comprise instructions to cause the processing apparatus 2 to become configured to process input data defining a plurality of images of a subject object recorded from different viewpoints. The input data is then processed to generate data identifying the positions and orientations at which the input images were recorded. These calculated positions and orientations and the image data are then used to generate data defining a three-dimensional computer model of the subject object.

When programmed by the programming instructions, processing apparatus 2 effectively becomes configured into a number of functional units for performing processing operations. Examples of such functional units and their interconnections are shown in Figure 1. The units and interconnections illustrated in Figure 1 are, however, notional and are shown for illustration purposes only to assist understanding; they do not necessarily represent the exact units and connections into which the processor, memory etc. of the processing apparatus 2 become configured.

Referring to the functional units shown in Figure 1, a central controller 20 processes inputs from the user input devices 6, and also provides control and processing for the other functional units. Memory 24 is provided for use by central controller 20 and the other functional units. Data store 26 stores input data input to the processing apparatus 2, for example as data stored on a storage device, such as disk 28, as a signal 30 transmitted to the processing apparatus 2, or using a user input device 6. The input data defines a plurality of colour images of one or more objects recorded at different positions and orientations. In addition, in this embodiment, the input data also includes data defining the intrinsic parameters of the camera which recorded the images, that is, the aspect ratio, focal length, principal point (the point at which the optical axis intersects the imaging plane), first order radial distortion coefficient, and skew angle (the angle between the axes of the pixel grid; because the axes may not be exactly orthogonal).

The input data defining the input images may be generated for example by downloading pixel data from a digital camera which recorded the images, or by scanning photographs using a scanner (not shown). The input data defining the intrinsic camera parameters may be input by a user using a user input device 6.
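For illustration, the intrinsic parameters listed above can be assembled into a conventional pinhole calibration matrix. The exact convention is an assumption (for instance, whether the aspect ratio multiplies or divides the focal length varies between systems), and the first order radial distortion coefficient is applied as a separate per-pixel warp rather than appearing in the matrix:

```python
import numpy as np

def intrinsic_matrix(focal_length, aspect_ratio, principal_point, skew=0.0):
    """Assemble a pinhole-camera calibration matrix from intrinsic
    parameters (radial distortion is handled separately as a per-pixel
    warp and does not appear in the matrix).

    The convention used here -- fy = focal_length * aspect_ratio, skew
    stored directly as the (0, 1) entry -- is one common choice, not
    the one necessarily used by the apparatus described.
    """
    fx = focal_length
    fy = focal_length * aspect_ratio
    cx, cy = principal_point
    return np.array([
        [fx, skew, cx],
        [0.0, fy,  cy],
        [0.0, 0.0, 1.0],
    ])
```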

Position determination module 32 processes the input images received by the input data store 26 to determine the relative positions and orientations of the camera viewpoints from which image data of an object represented by the image data have been obtained. In this embodiment, this is achieved in a conventional manner by identifying and matching features present in the input images and calculating relative positions of camera views utilising these matches.

Surface modeller 34 processes the data defining the input images and the data defining the positions and orientations at which the images were recorded to generate data defining a 3D computer wire mesh model representing the actual surface(s) of the object(s) in the images. In this embodiment this 3D model defines a plurality of triangles representing the surface of the subject object modelled.

Surface texturer 36 generates texture data from the input image data for rendering onto the surface model produced by surface modeller 34. In particular, in this embodiment the surface texturer 36 processes the input image data to generate six texture maps comprising six views of a subject object as viewed from a box bounding the subject object. These generated texture maps are then utilized to texture render the surface model so that images of a modelled subject object from any viewpoint may be generated. The processing of the surface texturer 36 to generate these texture maps from the input image data will be described in detail later.

Display processor 40, under the control of central controller 20, displays instructions to a user via display device 4. In addition, under the control of central controller 20, display processor 40 also displays images of the 3D computer model of the object from a user-selected viewpoint by processing the surface model data generated by surface modeller 34 and rendering texture data produced by surface texturer 36 onto the surface model.

Printer controller 42, under the control of central controller 20, causes hard copies of images of the 3D computer model of the object selected and displayed on the display device 4 to be printed by the printer 8.

Output data store 44 stores the surface model and texture data therefor generated by surface modeller 34 and surface texturer 36. Central controller 20 controls the output of data from output data store 44, for example as data on a storage device, such as disk 46, or as a signal 48.

The structure and processing of the surface texturer 36 for generating texture data for rendering onto a surface model produced by the surface modeller 34 will now be described in detail.

Canonical Texture Maps

When a plurality of images of a subject object recorded from different viewpoints are available, this provides a large amount of data about the outward appearance of the subject object. Where images are recorded from different viewpoints, these images provide varying amounts of data for the different portions of the subject object, as those portions are visible to a lesser or greater extent within the images. In order to create a model of the appearance of an object, it is necessary to process these images to generate texture data so that a consistent texture model of a subject object can be created. In this embodiment this is achieved by the surface texturer 36, which processes the input image data of a subject object recorded from different viewpoints in order to generate texture data for rendering the surface model produced by the surface modeller 34. In this embodiment this texture data comprises six texture maps, the six texture maps comprising views of the subject object from the six faces of a cuboid centred on the subject object.

Figure 2 is an exemplary illustration of the position and orientation of six texture maps 50-55 bounding an exemplary subject object 56. In this embodiment the six texture maps comprise texture maps for six canonical views of an object, being views of the object from the top 50, bottom 51, front 52, back 53, left 54 and right 55. The six canonical views 50-55 comprise three pairs 50, 51; 52, 53; 54, 55 of parallel image planes, centred on the origin of the coordinate system of the model, with each of the three pairs of image planes aligned along one of the three coordinate axes of the coordinate system respectively. The relative positions of the viewpoints of the canonical views 50-55 are then selected so that, relative to the size of the model 56 of the subject object created by the surface modeller 34, the canonical views are equally spaced from the centre of the object and the extent of the object as viewed from each image plane is no more than a threshold number of pixels. In this embodiment this threshold is set to be 512 pixels. Each of the texture maps for the model is then defined by a weak perspective projection of the subject object 56 onto the defined image planes.
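The geometry just described might be set up along the following lines. Which coordinate axis corresponds to 'top' versus 'front' is an assumption, as is the use of the model's bounding radius to fix the pixel scale; the 512-pixel threshold is the one given in the text:

```python
import numpy as np

# The six canonical viewing directions (top, bottom, front, back, left,
# right), as three pairs of opposed views aligned with the coordinate
# axes.  The assignment of axes to names is an illustrative assumption.
CANONICAL_DIRECTIONS = {
    "top":    np.array([0.0, 0.0, -1.0]),
    "bottom": np.array([0.0, 0.0,  1.0]),
    "front":  np.array([0.0, -1.0, 0.0]),
    "back":   np.array([0.0,  1.0, 0.0]),
    "left":   np.array([ 1.0, 0.0, 0.0]),
    "right":  np.array([-1.0, 0.0, 0.0]),
}

def canonical_scale(model_radius, max_extent_pixels=512):
    """Pixels per model unit so that the projected object (diameter
    2 * model_radius under weak perspective) spans at most
    max_extent_pixels in each canonical texture map."""
    return max_extent_pixels / (2.0 * model_radius)
```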

After image data for each of the six canonical views 50-55 has been determined, image data of the model 56 of the subject object from any viewpoint can then be generated using conventional texture rendering techniques, where texture rendering data for each portion of the surface of the model 56 is generated utilizing selected portions of the canonical views 50-55.

By generating texture data for a model of a subject object in this way, the total number of required texture maps is limited to the six canonical views 50-55.

Furthermore, since each of the canonical views corresponds to a projection of a real world object, the canonical views 50-55 should be representative of realistic views of the subject object and hence be suited for compression by standard image compression algorithms such as JPEG, which are optimised for compressing real world images. As generated images of a model of a subject object may obtain texture rendering data from any of the six canonical texture maps, it is necessary to generate all of the texture maps in such a way that they are all consistent with one another. In this way, where different portions of an image of a subject object are rendered utilizing texture maps from different views, no noticeable boundaries arise. As, for any 3D object, not all of the surface of a subject object will be visible from all six canonical views 50-55 in any single input image, it is necessary for the surface texturer 36 to combine image data from the plurality of images available to generate texture maps for these six consistent views 50-55.

Prior to describing in detail the processing by the surface texturer 36 which enables a set of six consistent texture maps to be generated from the available input images, the structure of the surface texturer 36 in terms of notional functional processing units will now be described.

Structure of Surface Texturer

Figure 3 is a schematic block diagram of the surface texturer 36 in accordance with this embodiment of the present invention. In this embodiment, the surface texturer 36 comprises a weight determination module 58 and a texture map determination module 59.

In order to select portions of the available image data to be utilized to generate the six canonical texture maps 50-55, the surface texturer 36 utilizes position data generated by the position determination module 32 identifying the viewpoints from which data has been obtained, and 3D model data output by the surface modeller 34. This position data and 3D model data is processed by the weight determination module 58, which initially determines the extent to which portions of the surface of the subject object being modelled are visible in each of the available images.

The weight determination module 58 then utilizes this determination to generate weight function data identifying a relative preference for utilizing the available input images for generating texture data for each portion of the surface of the model of the subject object. This weight function data is then passed to the texture map determination module 59, which processes the weight function data together with the position data generated by the position determination module 32, the model data generated by the surface modeller 34 and the available image data stored within the input data store 26 to generate a set of six consistent texture maps for the canonical views 50-55.

Generation of Weight Function Data

The processing of the weight determination module 58 to generate weight function data indicative of relative preferences for utilizing different input images for generating texture data for different portions of a model of a subject object will now be described in detail with reference to Figures 4-9, 10A and 10B. Figure 4 is a flow diagram of the processing of the weight determination module 58.

Initially, the weight determination module 58 determines (S4-1) data indicative of the extent to which each of the triangles of the 3D model generated by the surface modeller 34 is visible in each of the images in the input store 26.

It is inevitable that for any particular input image, some portions of the subject object represented by triangles within the three-dimensional model generated by the surface modeller 34 will not be visible. However, the same triangles may be visible in other images. Realistic texture data for each part of the surface of the 3D model can therefore only be generated by utilizing the portions of different items of image data where corresponding portions of the surface of a subject object are visible.

Thus, by determining which triangles are visible within each image, potential sources of information for generating texture map data can be identified. Additionally, it is also useful to determine the extent to which each triangle is visible in the available images. In this way it is possible to select, as preferred sources of image data for generating portions of texture maps, images where particular triangles are clearly visible, for example close-up images, rather than images where a triangle, whilst visible, is viewed only at an acute angle, or from a great distance.

Figure 5 is a flow diagram illustrating in detail the processing of the weight determination module 58 to determine the visibility of triangles in input images, utilizing the 3D model data generated by the surface modeller 34 and the position data for the input images determined by the position determination module 32.

Initially (S5-1) the weight determination module 58 selects the first view for which position data has been generated by the position determination module 32.

The weight determination module 58 then selects (S5-2) the first triangle of the 3D model generated by the surface modeller 34. The weight determination module 58 then determines (S5-3) a visibility value for the triangle being processed as seen from the perspective defined by the position data being processed. In this embodiment this is achieved by utilizing conventional OpenGL calls to render the 3D model generated by the surface modeller 34 as seen from the perspective defined by the position data of the image being processed. The generated image data is then utilized to calculate the visibility value. Specifically, initially all of the triangles of the 3D model viewed from the viewpoint defined by the selected position data are rendered using a single colour with Z buffering enabled. This Z buffer data is then equivalent to a depth map. The triangle being processed is then re-rendered in a second colour with the Z buffering disabled. The selected triangle is then once again re-rendered utilizing a third colour with the Z buffering re-enabled so as to utilize the depth values already present in the Z buffer. When re-rendering, the glPolygonOffset OpenGL function call is utilized to shift the triangle slightly towards the camera viewpoint defined by the position data to avoid aliasing effects.

A visibility value for the triangle being processed as viewed from the viewpoint defined by the currently selected position data is then determined by calculating the number of pixels in the image rendered in the second and third colours.

Specifically, if there are any pixels corresponding to the second colour in the generated image data, these are parts of the currently selected triangle that are occluded by other triangles, and hence the selected triangle is partially hidden from the viewpoint defined by the position data. A visibility value for the triangle as perceived from the defined viewpoint is then set for the triangle where:

   visibility = f, if f is greater than the threshold
   visibility = 0, otherwise

in which f is the fraction of pixels rendered in the third colour relative to the total number of pixels rendered in either the second or third colour, and

where the threshold is a value set, in this embodiment, to 0.75.

Thus in this way, where the entirety of a triangle is visible within an image, a visibility value of one is associated with the triangle and position data. Where a triangle is only slightly occluded by other triangles when viewed from the position defined by the position

data being processed, i.e. the fraction of pixels rendered in the third colour relative to the total number of pixels rendered in either the second or third colour is greater than the threshold but less than one, a visibility value of less than one is associated with the triangle and image position. Where the fraction of pixels rendered in the third colour relative to the total is less than the threshold value, a visibility value of zero is assigned to the triangle as viewed from the position defined by the position data.
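The determination of the visibility value from the two pixel counts can be sketched as follows. This is an illustrative sketch only: the function name is arbitrary, and it assumes (since the formula itself is reproduced in the source only as an image) that the value assigned above the threshold is the visible fraction itself.

```python
def visibility_value(second_count, third_count, threshold=0.75):
    """Visibility of a triangle from a viewpoint, given the number of its
    pixels rendered in the second colour (occluded parts) and in the third
    colour (visible parts).  The threshold of 0.75 follows the embodiment."""
    total = second_count + third_count
    if total == 0:
        return 0.0  # triangle projects to no pixels at all
    fraction = third_count / total
    # Fully visible -> 1.0; mostly visible -> the fraction itself;
    # substantially occluded (fraction below threshold) -> 0.0.
    return fraction if fraction > threshold else 0.0
```

A fully visible triangle (no second-colour pixels) thus receives a value of one, and a triangle whose visible fraction falls below the threshold receives zero.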

The processing of the weight determination module 58 to colour a triangle by rendering the triangle utilizing the stored values in the Z buffer as previously described enables triangles to be processed even if two small adjacent triangles are mapped to the same pixel in the selected viewpoint. Specifically, if triangle depths are similar for triangles corresponding to the same portion of an image, the glPolygonOffset call has the effect of rendering the second colour in front of other triangles the same distance from the selected viewpoint.

Thus when the visibility of the selected triangle is determined, the triangle is only given a lower visibility value if it is obscured by other triangles closer to the defined viewpoint and not merely obscured by the

rendering of triangles at the same distance to the same portion of an image.

After the visibility value for a triangle in the selected view has been determined, the weight determination module 58 then (S5-4) calculates and stores a visibility score for the triangle in the selected viewpoint utilizing this visibility value.

By calculating the visibility value in the manner described above, a value indicative of the extent to which a triangle in a model is or is not occluded from the selected viewpoint is determined. Other factors, however, also affect the extent to which use of a particular portion of image data for generating texture data may be preferred. For example, close up images are liable to contain more information about the texture of an object and therefore may be preferable sources for generating texture data. A further factor which may determine whether a particular image is preferable to use for generating texture data is the extent to which a portion of the image is viewed at an oblique angle. Thus in this embodiment a visibility score for a triangle in a view is determined by the weight determination module 58 utilizing an equation dependent upon the visibility value, upon θ, the angle of incidence of a ray from the optical centre of the camera defining the image plane, as identified by the position data being processed, to the normal at the centroid of the selected triangle, and upon the distance between the centroid of the selected triangle and the optical centre of the camera.

Thus in this embodiment the amount of occlusion, the obliqueness of view and the distance between the image plane and a triangle being modelled all affect the visibility score. In alternative embodiments either only some of these factors could be utilized, or alternatively greater weight could be placed on any particular factor.
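A possible combination of these three factors is sketched below. The precise equation is reproduced in the source only as an image, so the particular form used here (cosine of the angle of incidence divided by the square of the distance) is an assumption for illustration only; it has the qualitative behaviour described above, penalizing oblique and distant views.

```python
import math

def visibility_score(visibility, theta, distance):
    """Hypothetical combination of the three factors described above:
    occlusion (the visibility value), obliqueness (theta, the angle of
    incidence in radians) and camera distance.  The exact form of the
    patent's equation is not known; this is an assumed stand-in."""
    if visibility == 0.0:
        return 0.0
    # Oblique views (theta near 90 degrees) and distant views score lower.
    return visibility * max(math.cos(theta), 0.0) / (distance ** 2)
```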

After a visibility score for a particular triangle as perceived from a particular viewpoint has been calculated and stored, the weight determination module 58 then (S5-5) determines whether the triangle for which data has just been stored is the last of the triangles identified by the model data generated by the surface modeller 34.

If this is not the case the weight determination module

58 then proceeds to select (S5-6) the next triangle of the model data generated by the surface modeller 34 and then determines a visibility value (S5-3) and calculates and stores a visibility score (S5-4) for that next triangle.

When the weight determination module 58 determines that the triangle for which a visibility score has been stored is the last triangle of the 3D model data generated by the surface modeller 34, the weight determination module 58 then (S5-7) determines whether the currently selected position data, defining the position of an input image stored within the input data store 26, is the last of the sets of position data generated by the position determination module 32. If this is not the case the weight determination module 58 then selects (S5-8) the next set of position data to generate and store visibility scores for each of the triangles as perceived from that new position (S5-2-S5-7). Thus in this way the weight determination module 58 generates and stores visibility scores for all triangles as perceived from each of the viewpoints corresponding to viewpoints of input image data in the input data store 26.

Returning to Figure 4, after visibility scores have been stored for each of the triangles as perceived from each of the camera views corresponding to position data generated by the position determination module 32, the weight determination module 58 then (S4-2) proceeds to alter the visibility scores for the triangles perceived in each view to ensure smooth variation in the scores across neighbouring triangles.

In this embodiment of the present invention the visibility scores are utilized to generate weight functions indicative of a relative preference for using portions of different input images stored within the input data store 26 for generating texture rendering data. In order that the selection of image data from different input images does not result in the creation of noticeable boundaries where different input images are used to generate the texture data, it is necessary to generate weight function data which generates a continuous and reasonably smooth weight function so that image data from different sources is blended into one another. As in this embodiment the triangles forming the 3D model data can be of significantly different sizes, a first stage in generating such a smooth weighting function is to average weighting values across areas of

the model to account for this variation in triangle size.

Figure 6 is a flow diagram of the detailed processing of the weight determination module 58 to alter the visibility score associated with triangles as perceived from a particular viewpoint. The weight determination module 58 repeats the processing illustrated by Figure 6 for each of the sets of visibility score data associated with triangles corresponding to each of the views for which image data is stored within the input store 26, so that visibility score data associated with adjacent triangles as perceived from each viewpoint identified by position data determined by the position determination module 32 varies relatively smoothly across the surface of the model.

Initially (S6-1), the weight determination module 58 selects the first triangle identified by 3D model data output from the surface modeller 34.

The weight determination module 58 then (S6-2) determines whether the stored visibility score of the visibility of that triangle as perceived from the view being processed is set to zero. If this is the case no modification of the visibility score is made (S6-3). This ensures that

the visibility scores associated with triangles which have been determined to be completely occluded or substantially occluded from the viewpoint being processed remain associated with a visibility score of zero, and hence are not subsequently used to generate texture map data.

If the weight determination module 58 determines (S6-2) that the visibility score of the triangle being processed is not equal to zero, the weight determination module 58 then (S6-4) determines the surface area of the triangle from the 3D vertices using standard methods. The weight determination module 58 then (S6-5) determines whether the total surface area exceeds the threshold value. In this embodiment the threshold value is set to be equal to the square of 5% of the length of the bounding box defined by the six canonical views 50-55.

If the weight determination module 58 determines that the total surface area does not exceed this threshold, the weight determination module 58 then (S6-6) selects the triangles adjacent to the triangle under consideration, being those triangles having 3D model data sharing two vertices with the triangle under consideration. The weight determination module 58 then calculates (S6-4, S6-5) the total surface area for the selected triangle and those adjacent to the selected triangle to determine once again whether the number of pixels in the projection of this area onto the image plane, corresponding to the viewpoint for which triangles are being processed, exceeds the threshold value. This process is repeated until the total surface area corresponding to the selected triangles exceeds the threshold.

When the threshold is determined to be exceeded, the weight determination module 58 then (S6-7) sets an initial revised visibility score for the central triangle, equal to the average visibility score associated with all the currently selected triangles weighted by surface area. A threshold value, typically set to 10% of the maximum unmodified weight, is then subtracted from this revised initial visibility score. Any negative values associated with triangles are then set to zero. This threshold causes triangles associated with low weights to be associated with a zero visibility score. Errors in calculating camera position, and hence visibility scores, might result in low weights being assigned to triangles where surfaces are not in fact visible. The reduction of

visibility scores in this way ensures that any such errors do not introduce subsequent errors in generated texture map data. The final revised visibility score for the triangle is then stored.

Figures 7 and 8 are an exemplary illustration of the processing of an area of a model in order to revise the visibility scores to ensure a smooth variation in scores regardless of the size of the triangles.

Figure 7 is an exemplary illustration of an area of a model comprising nine triangles, each triangle associated with a visibility score corresponding to the number appearing within respective triangles.

When the triangle corresponding to the central cross-hatched triangle with the number 1.0 in the centre of Figure 7 is processed, initially the area of the triangle is determined. If this area is determined to be below the threshold value, the adjacent triangles, the shaded triangles in Figure 7, are selected and the area corresponding to the cross-hatched triangle and the shaded triangles is then determined. If this value is greater than the threshold value, an area

weighted average of the scores of the triangles is determined. After subtracting the threshold from this score and setting any negative value to zero, the final value is stored as the revised visibility score for the central triangle.

Figure 8 is an exemplary illustration of the area of a model corresponding to the same area illustrated in Figure 7 after all of the visibility scores for the triangles have been processed and modified where necessary. As can be seen by comparing the values associated with triangles in Figure 7 and Figure 8, wherever a triangle is associated with a zero in Figure 7 it remains associated with a zero in Figure 8. However, the variation across the remaining triangles is more gradual than the variation illustrated in Figure 7, as these new scores correspond to area weighted averages of the visibility scores in Figure 7.
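The revision of a single score can be sketched as follows, assuming the selected triangles' scores and surface areas are supplied with the central triangle listed first; the iterative growing of the neighbourhood until the area threshold is exceeded (S6-5, S6-6) is omitted, and the function name is illustrative only.

```python
def revise_score(scores, areas, threshold):
    """Area-weighted smoothing of a visibility score over a selected
    triangle and its neighbours; scores[0] / areas[0] belong to the
    central triangle.  Zero scores are left at zero, mirroring S6-3."""
    if scores[0] == 0.0:
        return 0.0  # completely/substantially occluded triangles stay zero
    total_area = sum(areas)
    average = sum(s * a for s, a in zip(scores, areas)) / total_area
    # Subtract the threshold (e.g. 10% of the maximum unmodified weight)
    # and clamp any negative result to zero.
    return max(average - threshold, 0.0)
```

With equal areas this reduces to a plain average minus the threshold, so small low-scoring triangles are pulled to zero while larger well-seen regions retain a smoothly varying score.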

Returning to Figure 6, after a revised visibility score has been determined for a triangle, the weight determination module 58 then determines (S6-8) whether all of the visibility scores for the triangles in a model, as seen from a viewpoint, have been processed. If this is not the case the weight determination module 58 then (S6-9) selects the next triangle for processing.

Thus in this way all of the visibility scores associated with triangles are processed so that visibility scores gradually vary across the surface of the model and drop to zero for all portions of a model which are not substantially visible from the viewpoint for which the visibility scores have been generated.

Returning to Figure 4, after the visibility scores for all of the triangles in all of the views have been amended where necessary to ensure a smooth variation of visibility scores across the entirety of the model, the weight determination module 58 then (S4-3) calculates weight function data for all of the triangles in all of the views so that a smooth weight function indicative of the visibility of the surface of the model as seen from each defined viewpoint can be generated.

The processing of the weight determination module 58 for generating weight function data for a model for a single view will now be described in detail with reference to Figures 9, 10A and 10B. This processing is then repeated for the other views so that weight function data for all of the views is generated.

Figure 9 is a flow diagram of the processing of the

weight determination module 58 to generate weight function data for the surface of a model defined by 3D model data received from the surface modeller 34, for a view as defined by one set of position data output by the position determination module 32 for which visibility scores have been determined and modified.

Initially the weight determination module 58 selects (S9-1) the first triangle of the model. The weight determination module 58 then (S9-2) determines for each of the vertices of the selected triangle the minimum visibility score associated with triangles in the model sharing that vertex. These minimum visibility scores are then stored as weight function data for the points of the model corresponding to those vertices.

The weight determination module 58 then (S9-3) determines for each of the edges of the selected triangle the lesser visibility score of the currently selected triangle and the adjacent triangle which shares that common edge. These values are then stored as weight function data defining the weight to be associated with the midpoint of each of the identified edges. The weight determination module 58 then (S9-4) stores as

weight function data the visibility score associated with the currently selected triangle as the weight function value to be associated with the centroid of the selected triangle.

By allotting weight function data for the vertices, edges and centroids of each of the triangles of the models generated by the surface modeller 34, values for the entirety of the surface of the model of an object can then be determined by interpolating the remaining points of the surface of the model, as will be described later.

Figure 10A is an illustrative example of a selection of triangles, each associated with a visibility score.

Figure 10B is an example of the weight functions associated with the vertices, edges and centroid of the triangle in Figure 10A associated with a visibility score of 0.5.

As can be seen from Figure 10B, the selection of weight function data for the vertices and edges of a triangle to be the minimum of the scores associated with triangles having a common edge or vertex ensures that the weight function values associated with the edges or vertices of triangles adjacent to a triangle with a

visibility score of zero are all set to zero.

By having the centre, edges and vertices of each triangle associated with weight function data, a means is provided to ensure that weight values subsequently associated with the edges of triangles can vary even when the vertices of a triangle are associated with the same value. Thus, for example, in Figure 10B the central portion of the base of the triangle is associated with a value of 0.5 whilst the two vertices at the base of the triangle are both associated with zero. If a simpler weight function were to be utilized and weights associated with positions along the edges of a triangle were to be solely determined by weights allotted to the vertices, this variation could not occur.
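The assignment of weight function data described above can be sketched as follows; the representation of triangles as tuples of vertex identifiers, and the use of dictionaries keyed on vertices and edges, are assumptions made for illustration.

```python
def weight_function_data(tri_scores, tri_vertices):
    """For each triangle (a tuple of three vertex ids) with a visibility
    score, assign: each vertex the minimum score over all triangles
    sharing it (S9-2), each edge the lesser score of the two triangles
    sharing that edge (S9-3), and the centroid the triangle's own
    score (S9-4)."""
    vertex_w = {}
    for tri, score in zip(tri_vertices, tri_scores):
        for v in tri:
            vertex_w[v] = min(vertex_w.get(v, score), score)
    edge_w = {}
    for tri, score in zip(tri_vertices, tri_scores):
        for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
            edge = frozenset((a, b))
            edge_w[edge] = min(edge_w.get(edge, score), score)
    centroid_w = dict(enumerate(tri_scores))
    return vertex_w, edge_w, centroid_w
```

As in Figure 10B, a triangle adjacent to a zero-scored triangle receives zero at the shared vertices and edge, while its own centroid keeps its full score.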

Returning to Figure 9, after the weight function data has been determined and stored for the vertices, edges and centroid of the triangle currently under consideration, the weight determination module 58 then determines (S9-5) whether the triangle currently under consideration is the last of the triangles in the model generated by the surface modeller 34. If this is not the case the next triangle is then

selected (S9-6) and weight function data is determined and stored for the newly selected triangle (S9-2-S9-4).

Thus in this way weight function data is generated for producing a weight function that varies smoothly across the surface of a modelled object and reduces to zero for all portions of a model which are not visible from the viewpoint associated with the weight function.

Returning to Figure 4, after weight function data has been determined for each of the camera views corresponding to position data generated by the position determination module 32, this weight function data is then (S4-4) output by the weight determination module 58 to the texture map determination module 59 so that the texture map determination module 59 can utilize the calculated weight function data to select portions of image data to be utilized to generate texture maps for a generated model.

Generation of Texture Maps

Utilizing conventional texture rendering techniques, it is possible to generate image data corresponding to projected images as perceived from each of the canonical views 50-55, where the surface of a model of a subject object is texture rendered utilizing input image data

identified as having been recorded from a camera viewpoint corresponding to position data output by the position determination module 32 for that image data. It is also possible, utilizing conventional techniques, to generate from the output weight function data projected images corresponding to the projection of the surface of a model texture rendered in accordance with the calculated weight functions for the surface of the model as viewed from each of the canonical views 50-55.

As described above, the weight function data generated and output by the weight determination module 58 are calculated so as to be representative of the relative visibility of portions of the surface of a model from defined viewpoints. The projected images of the weight functions are therefore indicative of relative preferences for using the corresponding portions of projections of image data from the corresponding viewpoints for generating the texture data for the texture map for each of the canonical views. These projected weight function images, hereinafter referred to as canonical confidence images, can therefore be utilized to select portions of projected image data to blend to generate output texture maps for the canonical views 50-55.


Figure 11 is a schematic diagram of notional functional modules of the texture map determination module 59 for generating canonical texture maps for each of the canonical views 50-55 from image data stored within the input data store 26, position data output by the position determination module 32, 3D model data generated by the surface modeller 34 and the weight function data output by the weight determination module 58.

The applicants have appreciated that generation of realistic texture for a model of a subject object can be achieved by blending images in different ways to average global lighting effects whilst maintaining within the images high frequency details such as highlights and shadows.

Thus in accordance with this embodiment of the present invention the texture generation module 59 comprises a low frequency image generation module 60 for extracting low frequency image information from image data stored within the input data store 26; an image projection module 62 for generating high and low frequency canonical image projections; a confidence image generation module 64 for generating canonical confidence images; a weighted

average filter 66 and a non-linear averaging filter 68 for processing high and low frequency canonical projections and the canonical confidence images to generate blended high and low frequency canonical images; and a recombination and output module 70 for combining the high and low frequency images and outputting the combined images as texture maps for each of the canonical views 50-55.

In order to generate high and low frequency canonical projections of each of the input images stored within the input data store 26 projected into each of the six canonical views, initially each item of image data in the input data store 26 is passed to the low frequency image generation module 60.

This module 60 then generates a set of low frequency images by processing the image data for each of the views in a conventional way by blurring and sub-sampling each image. In this embodiment the blurring operation is achieved by performing a Gaussian blur operation in which the blur radius is selected to be the size of the projection of a cube placed at the centre of the object bounding box defined by the six canonical views 50-55 and whose sides are 5% of the length of the diagonal of the

bounding box. The selection of the blur radius in this way ensures that the radius is independent of image resolution and varies depending upon whether the image is a close up of a subject object (large radius) or the subject appears small in the image (small radius).
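The dependence of the blur radius on how large the subject appears can be sketched under a simple pinhole projection model. The text specifies only the construction of the cube, not the projection formula, so the pinhole approximation and the parameter names below are assumptions for illustration.

```python
def blur_radius_pixels(diag, depth, focal_px):
    """Approximate pixel size of the projection of a cube whose sides are
    5% of the bounding-box diagonal, placed at the bounding-box centre.
    Assumes a pinhole camera: focal_px is the focal length in pixels and
    depth is the distance from the camera to the box centre."""
    cube_side = 0.05 * diag
    # Pinhole projection: image size = focal length * object size / depth.
    return focal_px * cube_side / depth
```

A close-up (small depth) thus yields a large radius and a distant subject a small one, matching the behaviour described above.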

These low frequency images are then passed to the image projection module 62 together with copies of the original image data stored within the input data store 26, position data for each of the images as determined by the position determination module 32 and 3D model data for the model generated by the surface modeller 34.

For each of the input images from the input data store 26 the image projection module then utilizes the position data associated with the image and the 3D model data output by the surface modeller 34 to determine calculated projections of the input images as perceived from the six canonical views 50-55 by utilizing standard texture rendering techniques.

In the same way the image projection module 62 generates, for each of the low frequency images corresponding to image data processed by the low frequency image generation module 60, six low frequency canonical projections for

the six canonical views 50-55.

Six high frequency canonical image projections for each image are then determined by the image projection module 62 by performing a difference operation, subtracting the low frequency canonical image projections for an image as viewed from a specified canonical view from the corresponding image projection of the raw image data for that view. The low frequency canonical projections and high frequency canonical projections of each image for each of the six canonical views are then passed to the weighted average filter 66 and the non-linear averaging filter 68 for processing, as will be detailed later.
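The difference operation can be sketched as follows for a single row of pixel values. The one-dimensional box blur standing in for the two-dimensional Gaussian blur is a simplification made to keep the sketch self-contained; the key property shown is that the low and high frequency parts sum back to the original image.

```python
def box_blur(row, radius):
    """Crude 1-D box blur used here in place of the Gaussian blur."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def split_frequencies(row, radius):
    """Split pixel values into low and high frequency parts; the
    difference operation guarantees low + high reconstructs the input."""
    low = box_blur(row, radius)
    high = [p - l for p, l in zip(row, low)]
    return low, high
```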

By processing the image data from the input data store 26 in this way it is possible to ensure that the processing of images generates high and low frequency canonical projections of each image that are consistent with one another. In contrast, if a blurring operation is performed upon canonical image projections created from image data stored within the input data store 26, as the pixel data in these generated projections is normally dependent upon different regions of the original image

data, the blurring operation will not be consistent across all six canonical images and hence will introduce errors into the generated texture maps.

The confidence image generation module 64 is arranged to receive 3D model data from the surface modeller 34 and weight function data from the weight determination module 58. The confidence image generation module 64 then processes the 3D model data and weight function data to generate, for each of the weight functions associated with viewpoints corresponding to the viewpoints of each of the input images in the input data store 26, a set of six canonical confidence images for the six canonical views 50-55.

Specifically, each triangle for which weight function data has been stored is first processed by linking the midpoint of each edge of the triangle to the midpoints of the two other edges, and linking each of the midpoints of each edge to the centroid of the triangle. Each of these defined small triangles is then projected into the six canonical views, with values for points corresponding to each part of the projection of each small triangle being interpolated using a standard OpenGL "smooth shading" interpolation from the weight function data

associated with the vertices of these small triangles.

Thus in this way, for each of the six canonical views, a canonical confidence image for each weight function is generated. The pixel values of canonical confidence images generated in this way are each representative of the relative preference for utilizing corresponding portions of canonical projections of image data, representative of the viewpoint for which the weight function was generated, to generate texture map data.
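The subdivision of a triangle into the six smaller triangles formed by linking edge midpoints to each other and to the centroid can be sketched as follows; the representation of vertices as (x, y) tuples is an assumption for illustration.

```python
def subdivide(v0, v1, v2):
    """Split a triangle into six smaller triangles by linking the
    midpoints of its edges to each other and to the centroid, as
    described above.  Vertices are (x, y) tuples."""
    def mid(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    m01, m12, m20 = mid(v0, v1), mid(v1, v2), mid(v2, v0)
    c = ((v0[0] + v1[0] + v2[0]) / 3, (v0[1] + v1[1] + v2[1]) / 3)
    # Fan of six triangles around the centroid, covering the original.
    return [(v0, m01, c), (m01, v1, c), (v1, m12, c),
            (m12, v2, c), (v2, m20, c), (m20, v0, c)]
```

Each small triangle then carries weight function values at its three corners (a vertex, an edge midpoint, or the centroid), ready for smooth-shaded interpolation.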

These canonical confidence images are then passed to the weighted average filter 66 and the non-linear averaging filter 68, so that texture data for the six canonical views can be generated utilizing the data identifying preferred sources for generating portions of texture map data.

Thus after the processing of image data and weight function data by the low frequency image generation module 60, the image projection module 62 and the confidence image generation module 64, the weighted average filter 66 and non-linear averaging filter 68 receive, for each item of image data stored within the input data store 26, a set of six canonical confidence images identifying the extent to which portions of

projected images derived from a specified image are to be preferred to generate texture data, and a set of six projections of either the high or low frequency images corresponding to the item of image data. The weighted average filter 66 and the non-linear averaging filter 68 then proceed to process the confidence images and associated low and high frequency canonical projections in turn for each of the canonical views, as will now be described in detail.

Processing of Low Frequency Canonical Projections

Figure 12 is a flow diagram of the processing of low frequency canonical projections and associated canonical confidence images for a specified canonical view for each of a set of input images stored within the input data store 26.

Initially (S12-1) the weighted average filter 66 selects a first low frequency canonical projection and its associated confidence image. This is then made the basis of an initial low frequency canonical image to be generated by the weighted average filter 66. The weighted average filter 66 then selects (S12-2) the next low frequency projection for the same canonical view together with the projection's associated confidence image.

The weighted average filter 66 then (S12-3) determines as a new low frequency canonical image a canonical image comprising, for each pixel in the image, a weighted average of the current low frequency canonical image and the selected low frequency canonical projection, weighted by the confidence scores utilising the following formula:

   I_new = (C I + Ci Ii) / (C + Ci)

where C is the current confidence score associated with the pixel being processed for the canonical image, I is the pixel value in the current canonical image, and Ci and Ii are the confidence score and pixel value for the corresponding pixel in the selected projection and its associated confidence image. The confidence score for the pixel in the canonical image is then updated by adding the confidence score for the latest projection to be processed to the current confidence score. That is to say, the new confidence score Cnew is calculated by the equation

   C_new = C_old + C_i

where Cold is the previous confidence score associated with the current pixel and Ci is the confidence score of the current pixel in the projection being processed. The weighted average filter 66 then (S12-4) determines whether the latest selected low frequency canonical projection is the last of the low frequency canonical projections for the canonical view currently being calculated. If this is not the case, the weighted average filter 66 proceeds to utilize the determined combined image to generate a new combined image utilizing the next confidence image and associated low frequency image projection (S12-2-S12-4).

When the weighted average filter 66 determines (S12-4) that the last of the low frequency canonical projections for a specified canonical view 50-55 has been processed, the weighted average filter 66 outputs (S12-5) as a blended canonical low frequency image the image generated utilizing the weighted average of the last projected image processed by the weighted average filter 66. Thus in this way the weighted average filter 66 enables, for each of the canonical views 50-55, a low frequency image to be created combining each of the low frequency canonical projections weighted by confidence scores indicative of the portions of the images identified as being most representative of the surface texture of a model in the canonical confidence images.

As the low frequency images are representative of the average local colour of surfaces as affected by global lighting effects, this processing by the weighted average filter 66 enables canonical low frequency images to be generated in which these global lighting effects are averaged across the best available images, with greater weight placed on images in which portions of a model are most easily viewed, giving the resultant canonical low frequency images a realistic appearance and a neutral tone.
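The low frequency blending of steps S12-1 to S12-5 can be sketched as a running confidence-weighted average. This is an illustrative Python sketch only; the function name, the NumPy array representation and the guard against zero-confidence pixels are our own assumptions, not part of the described apparatus.

```python
import numpy as np

def blend_low_frequency(projections, confidences):
    """Blend low frequency canonical projections by a running weighted average.

    projections: list of (H, W, 3) float arrays (the canonical projections).
    confidences: list of (H, W) float arrays (the canonical confidence images).
    Returns the blended image and the accumulated confidence scores.
    """
    image = projections[0].astype(float)
    conf = confidences[0].astype(float)
    for proj, ci in zip(projections[1:], confidences[1:]):
        total = conf + ci
        # Weight of the newly selected projection: Ci / (C + Ci),
        # guarding against pixels where both confidences are zero.
        w = np.divide(ci, total, out=np.zeros_like(total), where=total > 0)
        image = (1.0 - w[..., None]) * image + w[..., None] * proj
        conf = total  # C_new = C_old + C_i
    return image, conf
```

For a pixel blended from values 0 and 30 with confidences 1 and 2, the result is (1 × 0 + 2 × 30)/3 = 20 with accumulated confidence 3.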

Processing of High Frequency Canonical Projections

Figure 13 is a flow diagram of the processing of canonical confidence images and associated high frequency canonical projections for generating a canonical high frequency image for one of the canonical views 50-55.

Initially (S13-1) the non-linear averaging filter 68 selects a first one of the high frequency canonical projections for the canonical view for which a canonical high frequency image is to be generated, and sets as an initial canonical high frequency image an image corresponding to the selected first high frequency projection.

The non-linear averaging filter 68 then (S13-2) selects the next high frequency canonical projection for that canonical view and its associated canonical confidence image for processing.

The first pixel within the canonical high frequency image is then selected (S13-3) and the non-linear averaging filter 68 then (S13-4) determines for the selected pixel in the high frequency canonical image a difference value between the pixel in the current high frequency canonical image and the corresponding pixel in the high frequency canonical projection being processed.

In this embodiment, where the high frequency canonical projections and canonical high frequency image comprise colour data, this difference value may be calculated by determining the sum of the differences between corresponding values for each of the red, green and blue channels for the corresponding pixels in the selected high frequency canonical projection and the canonical high frequency image being generated.
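For 8-bit colour data this channel-wise difference can be written directly. Taking absolute differences is our assumption (the text says only "the sum of the differences"), and it yields the 0-765 range for D quoted later:

```python
def channel_difference(pixel_a, pixel_b):
    """Sum of per-channel absolute differences between two RGB pixels.

    With 8-bit channels (0-255) the result lies in the range 0 to 765.
    """
    return sum(abs(int(a) - int(b)) for a, b in zip(pixel_a, pixel_b))
```

For example, channel_difference((255, 255, 255), (0, 0, 0)) gives the maximum value 765.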

Alternatively, where the image being processed comprises grey scale data, this difference value could be determined by calculating the difference between the pixel in the canonical high frequency image and the corresponding pixel in the high frequency canonical projection being processed. It will be appreciated that, more generally, in further embodiments of the invention where colour data is being processed, any suitable function of the red, green and blue channel values for corresponding pixels in the high frequency canonical projection being processed and the canonical high frequency image being generated could be utilized to generate a difference value for a pixel.

After the difference value for the pixel being processed has been determined, the non-linear averaging filter 68 then (S13-5) determines a blend function for the selected pixel from the confidence scores associated with the pixel and the determined difference value for the pixel. In this embodiment this blend function is initially determined by selecting a gradient G in dependence upon the determined difference value for the pixel. Specifically, in this embodiment the value G is calculated by setting:

   G = D/D0   for D ≥ D0
   G = 1      for D < D0

where D is the determined difference value for the pixel and D0 is a blend constant fixing the extent to which weighting factors are varied due to detected differences between images. In this embodiment, where the difference values D are calculated from colour image data for three colour channels each varying from 0 to 255, the value of D0 is set to be 60. An initial weighting fraction Ω for calculating a weighted average between the pixel being processed in the current canonical high frequency image and the corresponding pixel in the high frequency canonical projection being processed is then set by calculating:

   Ω = G (Ci/(C + Ci) - 1/2) + 1/2


where Ci/(C + Ci) is the relative confidence score associated with the pixel being processed in the confidence image associated with the selected high frequency canonical projection, being the ratio of the confidence score Ci associated with the pixel in the projection being processed to the sum of Ci and the confidence score C associated with the pixel in the canonical image, and G is the gradient determined utilizing the determined difference value D for the pixel. A final weighting fraction is then determined by setting the weighting value W with:

   W = 0   if Ω < 0
   W = Ω   if 0 ≤ Ω ≤ 1
   W = 1   if Ω > 1
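Under the reconstruction of the equations above (G = D/D0 for D ≥ D0 and G = 1 otherwise; Ω = G(Ci/(C + Ci) − 1/2) + 1/2; W the clamp of Ω to [0, 1]), the final weighting value can be computed as below. The equations are reconstructed from the surrounding text and the description of Figure 14, so treat this as an illustrative sketch:

```python
def blend_weight(c, ci, d, d0=60.0):
    """Final weighting W for the projection pixel.

    c:  confidence score of the pixel in the current canonical image.
    ci: confidence score of the pixel in the projection being processed.
    d:  difference value for the pixel; d0: the blend constant (60 here).
    """
    g = max(d / d0, 1.0)          # gradient grows once D exceeds D0
    r = ci / (c + ci)             # relative confidence score
    omega = g * (r - 0.5) + 0.5   # initial weighting fraction
    return min(max(omega, 0.0), 1.0)
```

For D ≤ D0 this returns the relative confidence score itself (the dotted line 74 of Figure 14); for large D it saturates to zero or one except near a relative confidence of one half (the solid line 72).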

Figure 14 is a graph of a generated blend function illustrating the manner in which the final weighting value W calculated in this way varies in dependence upon the relative confidence score Ci/(C + Ci) for a pixel being processed and the determined difference value D for the pixel.

In the graph as illustrated, solid line 72 indicates a graph for calculating W where D is equal to a high value: for example, in this embodiment where D ranges between 0 and 765, a value of over 700. As such the graph indicates that where the relative confidence score for a particular source is low a weighting value of zero is selected, and for high relative confidence scores a weighting factor of one is selected.

For intermediate pixel confidence scores an intermediate weighting factor is determined, the weighting factor increasing as the confidence score increases.

The dotted line 74 in Figure 14 is an illustration of the blend function where the calculated value D for the difference between the image data of the first and second images for a pixel is less than or equal to the blend constant D0. In this case the blend function sets a weighting value equal to the relative confidence score for the pixel.

For intermediate values of D, the blend function varies between the solid line 72 and the dotted line 74, with the thresholds below which or above which a weighting value of zero or one is output reducing and increasing as D decreases. Thus as D decreases, the proportion of relative confidence scores for which a weighting value other than zero or one is output increases.

Returning to Figure 13, after a final weighting value W has been determined, the non-linear averaging filter 68 then proceeds to calculate and store blended pixel data (S13-6) for the pixel under consideration. In this embodiment, where the image data comprises colour image data, the calculated blended pixel data is determined for each of the three colour channels by:

   I_new = (1 - W) I + W Ii

where I is the channel value of the pixel in the current canonical high frequency image and Ii is the corresponding channel value in the high frequency canonical projection being processed.

The confidence value associated with the blended pixel is then also updated by selecting and storing as a new confidence score for the pixel the greater of the current confidence score associated with the pixel in the canonical image and the confidence score of the pixel in the projection being processed.

The effect of calculating the pixel data for pixels in the high frequency canonical image in this way is to make the pixel data dependent upon both the difference data and the confidence data associated with the processed pixel in the selected high frequency canonical projection. Specifically, where the difference data for a pixel is low (i.e. less than or equal to the blend constant D0), calculated pixel data corresponding to a weighted average proportional to the relative confidence score for the pixel in the associated canonical image is utilized. Where difference data is higher (i.e. greater than the blend constant D0), a pair of threshold values are set based upon the actual value of the difference data. Pixel data is then generated in two different ways depending upon the relative confidence score for the pixel in the associated canonical image. If the relative confidence score is above or below these threshold values, either only the original pixel data for the canonical image or only the pixel data for the selected high frequency canonical projection is utilized as the composite image data. If the relative confidence score is between the threshold values, a weighted average of the original pixel data and pixel data from the selected projection is utilized.

After the generated pixel data has been stored, the non-linear averaging filter 68 then determines (S13-7) whether the pixel currently under consideration is the last of the pixels in the canonical image. If this is not the case, the non-linear averaging filter then (S13-8) selects the next pixel and repeats the determination of a difference value (S13-4), a blend function (S13-5) and the calculation and storage of pixel data (S13-6) for the next pixel.

If, after generated pixel data and a revised confidence score have been stored for a pixel, the non-linear averaging filter 68 determines (S13-7) that blended pixel data has been stored for all of the pixels of the high frequency canonical image, the non-linear averaging filter 68 then determines (S13-9) whether the selected high frequency canonical projection and associated confidence image is the last high frequency canonical projection and confidence image to be processed. If this is not the case, the next high frequency canonical projection is selected (S13-2) and the canonical high frequency image is updated utilizing this newly selected projection and its associated confidence image (S13-3-S13-9).
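One per-pixel update of the non-linear blend (steps S13-4 to S13-6, plus the max-confidence update described above) can be sketched as follows; again the names, and the use of absolute channel differences for D, are our own assumptions:

```python
def update_high_frequency(current, conf, projection, proj_conf, d0=60.0):
    """Blend one pixel of a high frequency projection into the canonical image.

    current, projection: (r, g, b) channel values; conf, proj_conf: scalars.
    Returns the blended pixel and the updated confidence score.
    """
    d = sum(abs(a - b) for a, b in zip(current, projection))  # difference value
    g = max(d / d0, 1.0)                                      # gradient G
    # Weighting fraction, clamped to [0, 1].
    w = min(max(g * (proj_conf / (conf + proj_conf) - 0.5) + 0.5, 0.0), 1.0)
    blended = tuple((1.0 - w) * a + w * b for a, b in zip(current, projection))
    # The new confidence is the greater of the two, as the text specifies.
    return blended, max(conf, proj_conf)
```

Where the two pixels agree (D = 0) the blend reduces to a plain weighted average; where they differ strongly and the projection's confidence dominates, the projection's value is taken outright.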

When the non-linear averaging filter 68 determines that the latest high frequency canonical projection utilized to update a canonical high frequency image is the last of the high frequency canonical projections, the non-linear averaging filter 68 then (S13-10) outputs as a high frequency image for the canonical view 50-55 the high frequency canonical image updated by the final high frequency projection.

The applicants have realized that the undesirable effects of generating composite image data by determining a weighted average of pixels from high frequency image data, including data representative of details, arise in areas of images where different items of source data differ significantly. Thus, for example, where highlights occur in one source image at one point but not at the same point in another source image, averaging across the images results in a loss of contrast for the highlight.

If the highlights in the two images occur at different points within the two source images, averaging can also result in the generation of a 'ghost' highlight in another portion of the composite image. Similarly, where different items of source data differ significantly, other undesirable effects occur at points of shadow, where an averaging operation tends to cause an image to appear uniformly lit and therefore appear flat.

In contrast to areas of difference between source images, where corresponding points within different source images have a similar appearance, an averaging operation does not degrade apparent image quality. Performing an averaging across such areas of a composite image therefore provides a means by which the relative weight attached to different sources of image data can be varied, so that the boundaries between areas of a composite image obtained from different source images can be made less distinct.

Thus, in accordance with this embodiment of the present invention, weighting factors are selected on the basis of both confidence data and difference data for a pixel. Relative to weighting factors proportional to confidence data only, as occurs in the weighted average filter 66, in the non-linear averaging filter 68, where determined differences between the source images are higher, the weighting given to preferred source data is increased and the corresponding weighting given to less preferred data is decreased.

Thus, where both confidence data and difference data are high, a greater weight is given to a particular source, or alternatively only the preferred source data is utilized to generate composite image data. As these areas correspond to areas of detail and highlights or shadow in the high frequency images, the detail and contrast in the resultant generated data is maintained. In areas where there is less difference between different sources of image data, weighting values proportional to confidence data are not modified at all or only slightly, so that the boundaries between portions of image data generated from different images are minimised across those areas. Thus in this way composite image data can be obtained which maintains the level of contrast and detail in the original high frequency source data, whilst minimising apparent boundaries between portions of composite image generated from different items of source data.

Thus in this way six low frequency canonical images for the six canonical views 50-55 and six high frequency canonical images are generated by the weighted average filter 66 and the non-linear averaging filter 68 respectively. In the canonical low frequency images the global lighting effects are averaged across the available low frequency canonical projections in proportion to the relative preference for utilizing portions of an image as identified by the canonical confidence images associated with the projections. In the high frequency canonical images, high frequency canonical projections are blended in the above described manner to ensure that high frequency detail and contrast in the canonical images is maintained when the different high frequency canonical projections are combined.

When the six canonical low frequency images and six canonical high frequency images are received by the recombination and output module 70, the recombination and output module 70 generates, for each canonical view 50-55, a single canonical texture map by adding the high and low frequency images for that view. The combined canonical texture maps are then output by the recombination and output module 70 for storage in the output data store 44 together with the 3D model data generated by the surface modeller 34.
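The recombination step is a per-pixel addition of the low and high frequency images. Treating the high frequency image as a signed offset around zero and clipping the sum to the 8-bit channel range are our assumptions:

```python
import numpy as np

def recombine(low, high):
    """Combine low and high frequency canonical images into one texture map.

    low:  (H, W, 3) array of low frequency channel values (0-255).
    high: (H, W, 3) array of signed high frequency offsets.
    """
    return np.clip(low.astype(float) + high.astype(float), 0, 255).astype(np.uint8)
```

A pixel with low frequency value 100 and high frequency offset -20 recombines to 80.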

The output canonical texture maps can then be utilized to texture render image representations of the model generated by the surface modeller 34 by associating each of the triangles identified by the three-dimensional model data with one of the texture maps. In this embodiment the selection of texture data to texture render each triangle is determined by selecting as the map to be utilized the map in which the triangle has the greatest visibility score, where the visibility score is calculated for each canonical view 50-55 in the same way as has previously been described in relation to Figure 5. Texture co-ordinates for texture rendering the model generated by the surface modeller 34 are then implicitly defined by the projection of the triangle onto the selected canonical view 50-55.

Images of the model of the subject object can then be obtained for any viewpoint utilizing the output canonical texture maps and the 3D model data stored in the output data store 44, for display on the display device 4, and hard copies of the generated images can be made utilizing the printer. The model data and texture maps stored in the output data store 44 can also be output as data onto a storage device 46, 48.

Second Embodiment

In the first embodiment of the present invention, an image processing apparatus was described in which input data defining a plurality of images of a subject object recorded from different viewpoints was processed to generate texture data comprising six canonical texture maps, being views for a cuboid bounding a model of the subject object. Although generating texture maps in this way ensures that the total number of required views is limited to six, it does not guarantee that every triangle within the model is visible in at least one of the canonical views. As a result, some portions of a model may be texture rendered utilizing surface information corresponding to a different portion of the surface of the model. In this embodiment of the present invention an image processing apparatus is described in which texture data is generated to ensure that all triangles within a 3D model are rendered utilizing texture data for the corresponding portion of the surface of a subject object, if image data representative of the corresponding portion of the subject object is available.

The image processing apparatus in accordance with this embodiment of the present invention is identical to that of the previous embodiment, except that the surface texturer 36 is replaced with a modified surface texturer 80.

Figure 15 is a schematic block diagram of the notional functional modules of the modified surface texturer module 80. The modified surface texturer 80 comprises a canonical view determination module 82; a weight determination module 58 identical to the weight determination module 58 of the surface texturer 36 of the previous embodiment; and a modified texture map determination module 86.

In this embodiment the canonical view determination module 82 is arranged to determine from the 3D model data output by the surface modeller 34 whether any portions of the surface of the 3D model are not visible from the six initial canonical views. If this is determined to be the case, the canonical view determination module 82 proceeds to define further sets of canonical views for storing texture data for these triangles. These view definitions are then passed to the texture map determination module 86. The modified texture map determination module 86 then generates canonical texture map data for the initial canonical views and these additional canonical views, which are then utilized to texture render the surface of the model generated by the surface modeller 34.

The processing of the canonical view determination module 82 in accordance with this embodiment of the present invention will now be described with reference to Figure 16, which is a flow diagram of the processing of the canonical view determination module 82.

Initially (S16-1) the canonical view determination module 82 selects a first triangle of the model generated by the surface modeller 34. The canonical view determination module 82 then (S16-2) generates a visibility value and visibility score for the triangle as perceived from each of the six canonical views, in a similar manner to which the visibility value is generated by the weight determination module 58 as has previously been described in relation to the previous embodiment.

The canonical view determination module 82 then (S16-3) determines whether the visibility value associated with the triangle being processed is greater than a threshold value, indicating that the triangle is at least substantially visible in at least one of the views. In this embodiment this threshold value is set to zero, so that all triangles which are at least 75% visible in one of the views are identified as being substantially visible.

If the canonical view determination module 82 determines that the triangle being processed is visible, i.e. has a visibility value of greater than zero in at least one view, the canonical view determination module selects from the available canonical views the view in which the greatest visibility score, as generated in the manner previously described, is associated with the triangle. Data identifying the canonical view in which the selected triangle is most visible is then stored. Thus in this way data is generated identifying the canonical view from which texture data for rendering the triangle is subsequently to be utilized.

The canonical view determination module 82 then (S16-5) determines whether the currently selected triangle corresponds to the last triangle of the 3D model generated by the surface modeller 34. If this is not the case, the canonical view determination module 82 proceeds to select the next triangle and determine visibility values and visibility scores for that next triangle (S16-6-S16-5).

When the canonical view determination module 82 determines that all of the triangles have been processed (S16-5), the canonical view determination module 82 then (S16-7) determines whether data identifying a best view has been stored for all of the triangles. If this is not the case, the canonical view determination module 82 generates and stores a list of triangles for which no best view data has been stored (S16-8). The canonical view determination module 82 then proceeds to establish the visibility of these remaining triangles in the six canonical views in the absence of the triangles for which best view data has already been stored (S16-1-S16-7). This process is then repeated until best view data is stored for all of the triangles.

When the canonical view determination module 82 determines (S16-7) that best view data has been stored for all the triangles, the canonical view determination module 82 then outputs (S16-9) view definition data comprising, for each triangle, the canonical view in which it is most visible, together with the generated lists of the triangles not visible in the canonical views. These lists and the identified views in which the triangles are best represented are then output and passed to the modified texture map determination module 86.
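The iteration of Figure 16 can be sketched as below. The visibility callback signature is our own invention: it is assumed to return one (visibility value, visibility score) pair per canonical view for a triangle, computed against a given set of potentially occluding triangles, standing in for the computation described in relation to Figure 5:

```python
def assign_best_views(triangles, visibility, threshold=0.0):
    """Assign each triangle the canonical view with its greatest visibility score.

    Triangles not substantially visible in any view are deferred; their
    visibility is then re-established with the already-assigned triangles
    removed, repeating until every triangle has a best view (or no progress
    can be made).
    """
    best = {}
    remaining = list(triangles)
    while remaining:
        occluders = remaining  # only still-unassigned triangles can occlude
        still_hidden = []
        for tri in remaining:
            views = visibility(tri, occluders)
            if max(value for value, _ in views) > threshold:
                # Pick the view index with the greatest visibility score.
                best[tri] = max(range(len(views)), key=lambda i: views[i][1])
            else:
                still_hidden.append(tri)
        if len(still_hidden) == len(remaining):  # no progress: stop
            break
        remaining = still_hidden
    return best, remaining
```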

When the modified texture map determination module 86 processes the 3D model data, weight function data, image data and position data, it generates the six original canonical views in the same manner as has been described in relation to the first embodiment.

However, additionally, the texture map determination module 86 generates, for the triangles not visible in the original six canonical views as identified by the lists output by the canonical view determination module 82, further texture maps corresponding to the projection of the identified triangles onto the six canonical views in the absence of the other triangles. These additional canonical views are generated from the high and low frequency canonical projections of image data generated by the image projection module 62 of the modified texture map determination module 86, and from additional canonical confidence images being canonical projections of only the triangles identified by the lists of triangles output by the canonical view determination module 82. These additional confidence images are calculated for the triangles identified in the lists utilizing the weight function data generated by the weight determination module 58, in the same way as has previously been described in relation to the first embodiment.

Thus in this way additional texture maps are generated for those portions of the surface of the model which are not apparent in the original six canonical views. As these partial additional texture maps are generated in a similar way to the main canonical views, they should also be representative of projections of portions of an image, and hence suitable for compression utilizing standard image compression techniques.

The original canonical views, together with these additional canonical views, can then be utilized to texture render the model, where the texture map utilized to render each triangle is selected utilizing the lists and the data identifying the best views output by the canonical view determination module 82.

Third Embodiment

In the previous two embodiments, apparatus has been described in which a number of images comprising projections of the same view are combined to form composite texture map data. A third embodiment will now be described in which processing similar to that of the non-linear averaging filter 68 is utilized to combine two overlapping images to generate a composite image.

Figure 17 is a schematic illustrative example of the manner in which two overlapping images may be combined to form a composite image. In the example illustrated, a first image 101 and a second image 102 are shown, in which the second image is displaced downwards and to the right relative to the first image. The combined image provides a field of view which is greater than both the first image 101 and the second image 102. Between the first image 101 and the second image 102 there is an area of overlap 103, indicated by cross-hatching in the figure. If a composite image is to be generated, as image data is available for the area of overlap 103 from both the first image 101 and the second image 102, both sets of image data for the area of overlap can be processed to determine suitable image data to represent the area of overlap 103.

Apparatus for generating composite image data in accordance with this embodiment of the present invention will now be described with reference to Figures 18 to 21. Figure 18 is a schematic block diagram of a third embodiment of the present invention. In this embodiment, the apparatus comprises an overlap determination module 110 that is connected to both a confidence image generation module 112 and a lighting adjustment module 114. The lighting adjustment module 114 is itself connected to a difference determination module 116 and an image combination module 120. The image combination module 120, in addition to being arranged to output a composite image and being connected to the lighting adjustment module 114, is also connected to both the confidence image generation module 112 and the difference determination module 116.

In accordance with this embodiment of the present invention, a pair of colour images 101, 102 is processed to determine, for an area of overlap 103, preference data for weighting the selection of image data from the source images and difference data identifying the extent to which the images vary in the area of overlap after accounting for variations in lighting. The preference data, difference data and lighting-adjusted image data are then processed by the image combination module 120 to generate a composite image. The processing of this embodiment of the present invention will now be explained in detail.

Initially, image data comprising colour image data identifying red, green and blue values, each ranging between 0 and 255, for pixels in a pair of overlapping

colour images 101, 102 is received by the overlap determination module 110. The overlap determination module 110 then determines the relative positions and orientations of the overlapping images 101, 102 utilizing conventional feature matching techniques. When the relative positions and orientations of the overlapping images 101, 102 have been determined, the overlap determination module 110 outputs alignment data, indicating the pixel coordinates of pixels in the images which correspond to one another in the area of overlap, to the confidence image generation module 112. The confidence image generation module 112 then calculates, for the area of overlap 103, which of the first 101 and second 102 images is to be preferred for each portion of the area of overlap 103.
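The feature matching itself is conventional, but once the relative displacement of the two images is known, the overlapping pixel ranges follow directly. The following is a minimal sketch in Python, under the simplifying assumption of same-sized images and a purely translational offset (dx, dy) of the second image; rotation, which the module would also need to handle, is omitted here.

```python
def overlap_rectangles(width, height, dx, dy):
    """Given two images of the same size, with the second image displaced
    by (dx, dy) relative to the first, return the overlapping rectangle as
    ((x0, y0, x1, y1) in image 1's frame, (x0, y0, x1, y1) in image 2's
    frame), using half-open ranges, or None if the images do not overlap."""
    # Intersection of image 1's extent [0,w) x [0,h) with image 2's
    # extent shifted by (dx, dy).
    x0, y0 = max(0, dx), max(0, dy)
    x1, y1 = min(width, dx + width), min(height, dy + height)
    if x0 >= x1 or y0 >= y1:
        return None  # no overlap at all
    rect1 = (x0, y0, x1, y1)
    # The same region expressed in image 2's own coordinate frame.
    rect2 = (x0 - dx, y0 - dy, x1 - dx, y1 - dy)
    return rect1, rect2
```

For a second image displaced downwards and to the right, as in the figure, the overlap is the bottom-right corner of the first image and the top-left corner of the second.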

In this embodiment the composite image generation apparatus is arranged to prefer as source data image data representing portions of an image closer to the centre of a source image 101, 102, in preference to data obtained from the edge of a source image 101, 102. This preferred selection of data from the centre of an image ensures that no noticeable boundary results from the processing of the area of overlap. This is because the selection ensures that at the boundary of the area of

overlap, data corresponding to the source image which extends beyond that boundary is utilized. In this embodiment of the present invention the preference is achieved by associating the pixels in each image with a first confidence value based upon the position of the pixel within the image, utilizing a set of confidence data stored within the confidence image generation module 112.

Figure 19 is an illustrative example of the manner in which an initial confidence score is associated with each pixel within an image. In this example, where the images 101, 102 each comprise a rectangular image, the confidence score comprises a concentric series of contours where the innermost point has associated with it an initial confidence score of 1, and the outer contours have decreasing confidence scores until, at the edge of the image, a confidence score of zero is assigned.
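One plausible way to generate such contour-based scores is to use the normalised distance to the nearest image edge. The text only requires that the score falls from 1 at the innermost contour to 0 at the edge, so the exact falloff used below is an illustrative assumption rather than the patent's formula.

```python
def initial_confidence(x, y, width, height):
    """Illustrative initial confidence score for pixel (x, y) in a
    width x height image: 0 at the image edge, rising to 1 at the
    innermost contour, with concentric rectangular contours of equal
    confidence (one plausible realisation of the scheme described)."""
    # Distance to the nearest edge, in pixels.
    edge_dist = min(x, y, width - 1 - x, height - 1 - y)
    # The largest possible edge distance, attained at the image centre.
    max_dist = min((width - 1) / 2.0, (height - 1) / 2.0)
    return edge_dist / max_dist if max_dist > 0 else 0.0
```

Pixels on any edge score 0, and pixels along the same rectangular contour share the same score, matching the concentric-contour description.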

More generally, where it is possible to define a preference for utilizing portions of an image in advance, any suitable confidence score could be associated with received images. The exact origin of the measure of preference used is not important, provided the values vary in a sensible way, being high where the pixel values of an

image are considered trustworthy and low where pixel values may be unreliable. A confidence image, being a set of data for each pixel within the area of overlap 103 identifying the relative preference for utilizing either image data from the first image 101 or the second image 102, is then generated. In this embodiment this confidence image is generated by determining for each pixel within the area of overlap the ratio of the initial confidence score for the pixel within the first image to the sum of the confidence scores for the first and second images. That is to say, a value P for the extent to which the first image 101 is the preferred source for image data within the area of overlap 103 is calculated by determining for each pixel within the area of overlap a value P:

P = P1 / (P1 + P2)

where P1 and P2 are confidence scores associated with the position of the pixel within the first 101 and the second 102 images respectively. Thus in the case of the area of overlap 103 illustrated in Figure 17, where initial confidence scores are arranged as a set of contours as illustrated in Figure 19, a

generated confidence image, comprising data identifying the preference for utilizing image data from the first image 101, would be generated which could be illustrated as shown in Figure 20. This figure illustrates the manner in which areas of similar preference are arranged, where 1.0 indicates a high preference for utilizing data from the first image 101 and zero indicates a low preference for using data from the first image 101. When the confidence image data for the entire area of overlap has been determined, this confidence image is then passed to the image combination module 120 in order that a combined composite image may be generated.
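The ratio described above can be computed per pixel as follows. The handling of the degenerate case where both confidence scores are zero (which can occur where the overlap boundary touches both image edges) is an assumption not addressed in the text.

```python
def preference(p1, p2):
    """Relative preference P = P1 / (P1 + P2) for using image 1 at a
    pixel, given the two positional confidence scores p1 and p2.
    Returns 0.5 (no preference) when both scores are zero - an
    assumed convention for the degenerate case."""
    total = p1 + p2
    return 0.5 if total == 0 else p1 / total
```

A pixel at the centre of image 1 but at the edge of image 2 thus yields P = 1.0, and a pixel equally confident in both images yields P = 0.5, as in the contour arrangement of Figure 20.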

Returning to Figure 18, in addition to passing alignment data to the confidence image generation module 112 so that confidence image data for the area of overlap 103 can be determined, the overlap determination module 110 also causes the raw image data received to be passed to the lighting adjustment module 114, together with data identifying which areas of the raw images correspond to the area of overlap 103. The lighting adjustment module 114 then alters the raw image data to make an allowance for any variation in overall colour across the area of overlap 103.

Specifically, in order to determine how the raw image data requires to be amended, the lighting adjustment module 114 initially determines average colour values for the image data of the first image 101 corresponding to the portion of overlap 103 between the first image 101 and the second image 102. In this embodiment, the average colour values for the area of overlap are determined by calculating the average values of each of the red, green and blue channels over the pixels corresponding to the area of overlap 103.

The lighting adjustment module 114 then determines corresponding average red, green and blue values for the portion of the second image 102 corresponding to the area of overlap 103. Since the image data for the area of overlap 103 in both images is meant to represent the same view, any difference in average colour between these areas is largely indicative of different lighting conditions in the two images 101, 102. To remove the effect of possible changes in lighting conditions between the two images 101, 102, the lighting adjustment module 114 then calculates colour off-sets, required for the first 101 and second 102 images for each of the red, green and blue channels, which cause the average colour values in the area of overlap 103 in

both the images to be equal.

In this embodiment the colour off-set for the first image is set for each of the red, green and blue channels as:

Offset1 = (Colour2 - Colour1) / 2

and the colour off-set for the second image is set for each channel as:

Offset2 = (Colour1 - Colour2) / 2

where Colour1 and Colour2 are the determined average values for that colour channel for the area of overlap in the first and second images respectively.

These six colour off-set values are then added to the colour values for pixels in the entirety of the images to which they apply, to generate a pair of colour adjusted images to be used to generate a composite image. Image data for the adjusted images is then passed, together with data identifying the area of overlap, to the difference determination module 116 and to the image combination module 120.
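The lighting adjustment step can be sketched as follows. Meeting the two averages at their midpoint is one way of satisfying the stated requirement that the adjusted averages be equal, so the division by two should be treated as an assumption consistent with the text rather than a quotation of the patent's formula.

```python
def lighting_offsets(pixels1, pixels2):
    """Per-channel colour off-sets that equalise the average colour of
    the two overlap regions at their midpoint. pixels1 and pixels2 are
    lists of (r, g, b) tuples taken from the area of overlap in the
    first and second images respectively. Returns (off1, off2): three
    offsets to add to every pixel of image 1 and of image 2."""
    n1, n2 = len(pixels1), len(pixels2)
    # Average value of each channel over the overlap region.
    avg1 = [sum(p[c] for p in pixels1) / n1 for c in range(3)]
    avg2 = [sum(p[c] for p in pixels2) / n2 for c in range(3)]
    # Symmetric offsets: each adjusted average becomes (avg1 + avg2) / 2.
    off1 = [(avg2[c] - avg1[c]) / 2.0 for c in range(3)]
    off2 = [(avg1[c] - avg2[c]) / 2.0 for c in range(3)]
    return off1, off2
```

Adding off1 to image 1 and off2 to image 2 brings both overlap averages to the common midpoint, removing the overall lighting difference while leaving the images' internal contrast untouched.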

When the difference determination module 116 receives image data for the colour adjusted images adjusted by the lighting adjustment module 114, the difference determination module 116 calculates, for the identified area of overlap 103, the extent to which each pixel in the first adjusted image 101 differs from the corresponding pixel of the second adjusted image 102. These difference values are calculated in the same way in which difference values are calculated as has been described in relation to the first embodiment, and are output as a difference image to the image combination module 120. Where these difference values are low, this is indicative of the first and second images appearing to be similar for that part of the area of overlap 103. Where the difference data for a part of the area of overlap 103 is high, this indicates that for those pixels the corresponding pixels in the first and second images differ from one another.
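For colour pixel data, claim 7 states the difference measure explicitly: the sum of the modulus of the per-channel differences. For a single pair of corresponding pixels this is:

```python
def pixel_difference(rgb1, rgb2):
    """Difference between two corresponding colour pixels: the sum of
    the absolute per-channel differences (the measure recited in
    claim 7). Zero for identical pixels; large where the lighting
    adjusted images disagree."""
    return sum(abs(a - b) for a, b in zip(rgb1, rgb2))
```

Applying this to every corresponding pixel pair in the area of overlap yields the difference image passed to the image combination module 120.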

At this stage, after the processing of the confidence image generation module 112, the lighting adjustment module 114 and the difference determination module 116, the image combination module 120 has stored within it, firstly, a confidence image, being data indicating for each pixel within the area of overlap 103 preference data

identifying the extent to which either the first 101 or second 102 image is a preferred source of data for generating image data for that particular portion of the area of overlap 103; secondly, a difference image, being data indicative of the way in which the data available from the first image 101 and the second image 102 for the area of overlap 103 differ from one another; and thirdly, adjusted image data, being raw image data adjusted to account for changes in lighting between the first and second images 101, 102. The image combination module 120 then proceeds to generate image data for the area of overlap 103 pixel by pixel, utilizing this data.

Figure 21 is a flow diagram of the processing of the image combination module 120 in accordance with this embodiment of the present invention. Initially (S21-1) the image combination module 120 selects the first pixel in the area of overlap 103. The image combination module 120 then (S21-2) determines a blend function to select a weighting value for calculating a weighted average of the sum of the colour pixel values for the pixel from the image data of the first and second images. In this embodiment this blend function is determined by utilizing the pixel values of the difference image and the confidence

image received by the image combination module 120, in a similar way to the selection of a blend function utilizing confidence images and calculated difference values as has been described in relation to the first embodiment. After a final weighting value W has been determined utilizing the determined blend function, the image combination module 120 then proceeds to calculate and store blended pixel data (S21-3) for the pixel under consideration in the same way as has been described in the first embodiment. After the generated pixel data has been stored, the image combination module 120 then determines (S21-4) whether the pixel currently under consideration is the last of the pixels in the area of overlap 103. If this is not the case, the image combination module 120 then (S21-5) selects the next pixel and repeats the determination of a blend function (S21-2) and the calculation and storage of pixel data (S21-3) for the next pixel.
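The blend function itself is carried over from the first embodiment, which is not reproduced in this excerpt. The sketch below therefore only illustrates the behaviour recited in claims 2 to 5: blend in proportion to preference where the difference is below a threshold; above the threshold, snap to the preferred image except within a transition band around equal preference whose width grows with the difference. The threshold and band-growth constant are illustrative values, not taken from the patent.

```python
def blend_weight(pref, diff, threshold=30.0, k=0.005):
    """Weighting value W for image 1 at one pixel. pref is the
    preference P for image 1 (0..1); diff is the difference value.
    threshold and k are assumed, illustrative constants."""
    if diff <= threshold:
        return pref  # images agree: blend smoothly by preference
    # Transition band around pref = 0.5; its width grows with diff,
    # per claim 5 (first/second values spaced in proportion to diff).
    half_band = min(0.5, k * diff)
    lo, hi = 0.5 - half_band, 0.5 + half_band
    if pref >= hi:
        return 1.0   # image 1 clearly preferred: take it outright
    if pref <= lo:
        return 0.0   # image 2 clearly preferred
    return (pref - lo) / (hi - lo)  # steep ramp inside the band

def blend_pixel(rgb1, rgb2, w):
    """Weighted sum of corresponding pixels: w from image 1, (1 - w)
    from image 2, per colour channel."""
    return tuple(w * a + (1.0 - w) * b for a, b in zip(rgb1, rgb2))
```

Where the images agree, this gives a gentle cross-fade governed by the confidence image; where they disagree (for example, a moving object appears in only one image), it commits to the preferred source over a narrow band, avoiding ghosting.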

If, after generated pixel data has been stored for a pixel, the image combination module determines (S21-4) that blended pixel data has been stored for all of the

pixels within the area of overlap 103, the image combination module 120 outputs (S21-6) combined image data comprising, for the area of overlap, the stored blended pixel data and, for the other areas of the image, image data from the first and second colour adjusted images. The processing of the image combination module 120 then ends.

Although the embodiments of the invention described with reference to the drawings comprise computer apparatus and processes performed in computer apparatus, the invention also extends to computer programs, particularly computer programs on or in a carrier, adapted for putting the invention into practice. The program may be in the form of source or object code or in any other form suitable for use in the implementation of the processes according to the invention. The carrier may be any entity or device capable of carrying the program.

For example, the carrier may comprise a storage medium, such as a ROM, for example a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example a floppy disc or hard disk. Further, the carrier may be a transmissible carrier such as an electrical or optical signal which may be conveyed via electrical or optical cable or by radio or other means. When a program is embodied in a signal which may be conveyed directly by a cable or other device or means, the carrier may be constituted by such cable or other device or means.

Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted for performing, or for use in the performance of, the relevant processes.

Claims (31)

CLAIMS

  1. A method of generating composite image data from a pair of sets of pixel image data for a pair of overlapping views, comprising the steps of: receiving a first set of pixel data for a first view and a second set of pixel data for a second view, wherein at least a portion of said first and second views overlap; receiving preference data identifying a relative preference for utilizing pixel data from said first set of pixel data to generate composite image data for said area of overlap; receiving difference data indicative of the difference in pixel data between received pixel data from said first and second sets of pixel data corresponding to the same portion of said area of overlap; and generating composite image data by calculating for each pixel corresponding to said area of overlap the sum of received first and second pixel data for said pixel in said area of overlap weighted by weighting factors selected on the basis of said preference data and said difference data.
  2. A method in accordance with claim 1, wherein said selection of weighting factors comprises selecting weighting factors such that said weighted sum comprises a value corresponding to a weighted sum of said first and second items of image data for said pixel weighted in proportion to relative preferences identified by said preference data for said pixel if said difference data is below a threshold value.
  3. A method in accordance with claim 2, wherein said selection of weighting factors comprises selecting weighting factors such that said weighted sum comprises a value corresponding to the preferred item of pixel image data as identified by said preference data if said difference data is above said threshold and said preference data is either greater than a first value or less than a second value, wherein said first and second values are selected on the basis of said difference data.
  4. A method in accordance with claim 3, wherein said selection of weighting factors comprises selecting weighting factors proportional to a product of said difference data and values proportional to the preferences identified by said preference data if said difference data is above said threshold value and said preference data is less than said first value and greater than said second value.
  5. A method in accordance with claim 3 or claim 4, wherein said first and second values comprise values selected so that the difference between said values is proportional to said difference data.
  6. A method in accordance with any preceding claim, further comprising the step of calculating difference data indicative of the differences in pixel data between received pixel data from said first and second sets of pixel data each corresponding to said area of overlap.
  7. A method in accordance with claim 6, wherein said first and second items of image data comprise colour pixel data and said calculation of difference data comprises calculating the sum of the modulus of differences between data identifying values for different colour channels for said first and second sets of pixel data.
  8. A method in accordance with any preceding claim, further comprising the step of calculating data identifying a relative preference for utilizing pixel data from said first set of pixel data to generate composite image data.
  9. A method in accordance with claim 8, wherein said preference data is calculated such that the relative preferences for utilizing pixel data from said first set of pixel data corresponding to adjacent portions of an image are approximately equal.
  10. A method in accordance with claim 8 or claim 9, wherein said calculation of preference data comprises calculating preference data on the basis of structural information about objects appearing in image data from different image sources.
  11. A method in accordance with claim 8 or claim 9, wherein said calculation of preference data comprises calculating preference data for a pixel from said first set of pixel data on the basis of the relative position of the portion of an image represented by said pixel relative to the centre of said image.
  12. A method in accordance with any preceding claim, further comprising the steps of: determining average pixel data within said sets of pixel data corresponding to said areas of overlap; and modifying said sets of pixel data so that the determined average pixel data for different sets of pixel data representative of an area of overlap are equal to said determined average.
  13. An apparatus for generating composite image data from a pair of sets of pixel image data for a pair of overlapping views, comprising: means for receiving a first set of pixel data for a first view and a second set of pixel data for a second view, wherein at least a portion of said first and second views overlap; means for receiving preference data identifying a relative preference for utilizing pixel data from said first set of pixel data to generate composite image data for said area of overlap; means for receiving difference data indicative of the difference in pixel data between received pixel data from said first and second sets of pixel data corresponding to the same portion of said area of overlap; and means for generating composite image data by calculating for each pixel corresponding to said area of overlap the sum of received first and second pixel data for said pixel in said area of overlap weighted by weighting factors selected on the basis of said preference data and said difference data.
  14. Apparatus in accordance with claim 13, wherein said means for generating image data is arranged to select weighting factors such that said weighted sum comprises a value corresponding to a weighted sum of said first and second items of image data for said pixel weighted in proportion to relative preferences identified by said preference data for said pixel if said difference data is below a threshold value.
  15. Apparatus in accordance with claim 14, wherein said means for generating image data is arranged to select weighting factors such that said weighted sum comprises a value corresponding to the preferred item of pixel image data as identified by said preference data if said difference data is above said threshold and said preference data is either greater than a first value or less than a second value, wherein said first and second values are selected on the basis of said difference data.
  16. Apparatus in accordance with claim 15, wherein said means for generating image data is arranged to select weighting factors comprising weighting factors proportional to a product of said difference data and values proportional to the preferences identified by said preference data if said difference data is above said threshold value and said preference data is less than said first value and greater than said second value.
  17. Apparatus in accordance with claim 15 or claim 16, wherein said first and second values comprise values selected by said means for generating image data so that the difference between said values is proportional to said difference data.
  18. Apparatus in accordance with any of claims 13 to 17, further comprising means for calculating difference data indicative of the differences in pixel data between received pixel data from said first and second sets of pixel data each corresponding to said area of overlap.
  19. Apparatus in accordance with claim 18, wherein said means for receiving first and second sets of image data are arranged to receive colour pixel data and said means for calculating difference data is arranged to calculate the sum of the modulus of differences between data identifying values for different colour channels for said first and second sets of pixel data.
  20. Apparatus in accordance with any of claims 13 to 19, further comprising means for calculating data identifying a relative preference for utilizing pixel data from said first set of pixel data to generate composite image data.
  21. Apparatus in accordance with claim 20, wherein said means for calculating preference data is arranged to calculate preference data such that the relative preferences for utilizing pixel data from said first set of pixel data corresponding to adjacent portions of an image are approximately equal.
  22. Apparatus in accordance with claim 20 or claim 21, wherein said means for calculating preference data is adapted to calculate preference data on the basis of structural information about objects appearing in image data from different image sources.
  23. Apparatus in accordance with claim 20 or claim 21, wherein said means for calculating preference data is arranged to calculate preference data for a pixel from said first set of pixel data on the basis of the relative position of the portion of an image represented by said pixel relative to the centre of said image.
  24. Apparatus in accordance with any of claims 13 to 23, further comprising means for determining average pixel data within said sets of pixel data corresponding to said areas of overlap and modifying means for modifying received pixel data corresponding to areas of overlap so that average pixel data for different sets of pixel data is equal to the determined average of all the sets of pixel data.
  25. A recording medium for storing computer implementable process steps for generating within a programmable computer an apparatus in accordance with any of claims 13 to 24.
  26. A recording medium for storing computer implementable process steps for causing a programmable computer to perform a method in accordance with any of claims 1 to 12.
  27. A recording medium in accordance with claim 25 or claim 26, comprising a computer disc.
  28. A computer disc in accordance with claim 27, comprising an optical, magneto-optical or magnetic disc.
  29. A recording medium in accordance with claim 25 or claim 26, comprising an electric signal transferred via the Internet.
  30. An apparatus for generating composite image data substantially as herein described with reference to the accompanying drawings.
  31. A method of generating composite image data substantially as herein described with reference to the accompanying drawings.
GB0026347A 2000-10-27 2000-10-27 Method and apparatus for the generation of composite images Expired - Fee Related GB2369260B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0026347A GB2369260B (en) 2000-10-27 2000-10-27 Method and apparatus for the generation of composite images

Publications (3)

Publication Number Publication Date
GB0026347D0 GB0026347D0 (en) 2000-12-13
GB2369260A true GB2369260A (en) 2002-05-22
GB2369260B GB2369260B (en) 2005-01-19

Family

ID=9902106

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6954212B2 (en) 2001-11-05 2005-10-11 Canon Europa N.V. Three-dimensional computer modelling
US6975326B2 (en) 2001-11-05 2005-12-13 Canon Europa N.V. Image processing apparatus
US7034821B2 (en) 2002-04-18 2006-04-25 Canon Kabushiki Kaisha Three-dimensional computer modelling
GB2434287A (en) * 2006-01-17 2007-07-18 Delcam Plc 3D Voxel-combining machine tool
US7280106B2 (en) 2002-10-21 2007-10-09 Canon Europa N.V. Apparatus and method for generating texture maps for use in 3D computer graphics
US7561164B2 (en) 2002-02-28 2009-07-14 Canon Europa N.V. Texture map editing
EP2113881A1 (en) * 2008-04-29 2009-11-04 Holiton Limited Image producing method and device
EP2271078A3 (en) * 2009-07-01 2012-10-24 Samsung Electronics Co., Ltd. Image displaying apparatus and image displaying method
US9008427B2 (en) 2013-09-13 2015-04-14 At&T Intellectual Property I, Lp Method and apparatus for generating quality estimators

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4879597A (en) * 1986-09-19 1989-11-07 Questech Limited Processing of video image signals
GB2287604A (en) * 1994-03-18 1995-09-20 Sony Corp Non additive mixing of video signals
EP0810776A2 (en) * 1996-05-28 1997-12-03 Canon Kabushiki Kaisha Image combining apparatus and method

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20171027