GB2358540A - Selecting a feature in a camera image to be added to a model image - Google Patents

Selecting a feature in a camera image to be added to a model image

Info

Publication number
GB2358540A
GB2358540A · GB0001479A · GB2358540B
Authority
GB
United Kingdom
Prior art keywords
model
image
camera
point
ordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0001479A
Other versions
GB2358540B (en)
GB0001479D0 (en)
Inventor
Jane Haslam
Richard Ian Taylor
Charles Stephen Wiles
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB0001479A priority Critical patent/GB2358540B/en
Publication of GB0001479D0 publication Critical patent/GB0001479D0/en
Priority to US09/718,342 priority patent/US6980690B1/en
Publication of GB2358540A publication Critical patent/GB2358540A/en
Priority to US10/793,850 priority patent/US7508977B2/en
Application granted granted Critical
Publication of GB2358540B publication Critical patent/GB2358540B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/08Indexing scheme for image data processing or generation, in general involving all processing steps from image acquisition to 3D model generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A three-dimensional model of an object is generated from image data representing a set of camera images of the object. Refinement of the model is carried out on the basis of a comparison of a displayed model image and a displayed camera image, enabling a user to select co-ordinates in the camera image defining an additional feature which is to be added to the model. The apparatus calculates a locus in the three-dimensional space defining positions of possible model points corresponding to the selected image point and consistent with the geometric relationship between the object and a camera position from which the displayed camera image was taken. A position indicator is displayed in the model image at co-ordinates on the locus and the position indicator is moveable by the user in a manner which is constrained to follow a trajectory in the model image corresponding to the locus. The user may then select a required position of the new model point on the locus and the model data is updated to include the new model point corresponding to such selection.

Description

METHOD AND APPARATUS FOR GENERATING MODEL DATA FROM CAMERA IMAGES
The present invention relates to an apparatus and method of operation thereof for generating model data of a model in a three-dimensional space from image data representative of a set of camera images of an object.
It is known to create three-dimensional computer models of real objects based on the input of image data in the form of a series of image frames which may be derived from a series of photographs taken from different camera positions or from a video recording taken from a moving camera.
Having generated a set of model data, a model image is displayed and may be compared with camera images of the object from which the existing model data has been derived.
A first aspect of the present invention relates to refinement of the existing model data by allowing a user to identify an additional feature in one of the camera images, this feature being absent from the displayed model image, and which the user wishes to include in the model by the input of additional model data.
One method of refining the model in this respect requires the user to continue the process of entering matching points identified in successive image frames and the apparatus to then process the matching point data by re-running the model program to incorporate an expanded set of data. This process, however, requires a substantial amount of computer processing effort with consequent delay. In some instances, the additional feature may only be visible in a single frame, making it impossible to identify a matching point in a second frame.
The present invention seeks to provide an improved method and apparatus allowing an additional feature to be added as a result of user input based on a single frame.
According to the present invention there is disclosed a method of operating an apparatus for generating model data representative of a model in a three-dimensional space from image data representative of a set of camera images of an object; the apparatus performing the steps of: displaying a model image based on an existing set of model data; displaying one of the camera images of the object for selection by a user of an additional feature to be represented by additional model data; receiving an image point selection signal responsive to user actuation of an input means and identifying co-ordinates of an image point in the camera image defining the selected additional feature; calculating a locus in the three-dimensional space defining positions of possible model points corresponding to the image point and consistent with the geometric relationship between the object and a camera position from which the displayed camera image was taken; displaying a position indicator in the model image at co-ordinates in the model image corresponding to one of the possible model points on the locus; receiving positioning signals responsive to user actuation of the input means and updating the co-ordinates of the position indicator such that movement of the position indicator is constrained to follow a trajectory in the model image corresponding to the locus; receiving a model point selecting signal responsive to user actuation of the input means and determining selected co-ordinates of the position indicator to be the position indicator co-ordinates at the time of receiving the model point selecting signal; and determining co-ordinates of the additional model point in the three-dimensional space corresponding to the selected co-ordinates of the position indicator.
In a preferred embodiment, the locus is a straight line in the three-dimensional model space, the straight line being displayed in the model image as a visual aid to the user in editing the position of the new model point. After finalising the position of the new model point, a model generating process is initiated to incorporate the additional model point into the model data and to generate surface elements of the model, allowing the new model image to be displayed including the surface elements for comparison with the camera image.
A second aspect of the present invention relates to the manner in which the model data is edited when a new model point is added to the existing set of model data, either using the above disclosed method or by other methods. Incorporation of an additional model point generally requires the surface elements of the existing model to be modified, at least one of the surface elements being replaced by a plurality of new elements which include the new model point. This aspect of the invention addresses the problem of selecting the surface element to be modified or replaced in a manner which is simple for the user to implement.
According to the second aspect of the present invention, there is disclosed a method of operating an apparatus for generating model data defining a model in a three-dimensional space, the model data comprising co-ordinates defining model points and surface elements generated with reference to the model points; the method comprising editing an existing set of model data by the steps of: adding a new model point to the existing set of model data; projecting the new model point onto the model and identifying a selected one of the surface elements onto which the new model point is projected; identifying a subset of the model points which define the generation of the selected surface element; adding the new model point to the subset to form an edited subset of model points; and generating one or more edited surface elements from the edited subset of model points to replace the selected surface element.
The identification of the surface element to be replaced is thereby automatically implemented by the apparatus, by operating a computer program selected by the user.
In a preferred embodiment, the projection of the new model point onto the model is processed by defining a centre of projection corresponding to one of the camera positions from which frames of the camera image data were obtained. An interface allowing the user to select an appropriate camera position may comprise a display of a pictorial representation showing the relative positions of the object and the cameras, the camera positions being represented by icons which may be selected by clicking a computer mouse or other input device.
A further embodiment provides an alternative interface in which thumbnail images of the camera image frames are presented to the user, each thumbnail image constituting an icon allowing selection using a pointing device such as a computer mouse in conjunction with a moveable cursor on the display screen.
A third aspect of the present invention relates to the need to enable the user to evaluate the quality of a model in order to judge whether further refinement of the model data is required and to judge whether any editing procedure has been correctly effected or requires further editing.
This aspect of the invention seeks to provide the user with an interface allowing the user to view a model image for comparison with a camera image, it being advantageous to present the user with compatible views for ease of comparison. The selection of the appropriate model image for comparison with a specific camera image may be time consuming and complex for the user.
According to the present invention there is disclosed a method of operating an apparatus for generating model data representative of a three-dimensional model of an object from input signals representative of a set of camera images of the object taken from a plurality of camera positions, the method comprising: displaying a set of icons, each being associated with a respective one of the camera images of the object; receiving a selection signal responsive to user actuation of an input means whereby the selection signal identifies a selected one of the icons; determining a selected camera image from the set of camera images corresponding to the selected icon; displaying the selected image; determining position data representative of a selected camera position from which the selected image was taken; generating in accordance with said model a model image representative of a view of the model from a viewpoint corresponding to the position data; and displaying the model image for visual comparison with the selected image by the user.
This method therefore allows the user to simply select a camera image using a set of icons and provides automatic processing using a computer program to generate a model image representative of a view of the model from a viewpoint corresponding to position data determined when the user selects a particular icon.
The icons may be representations of camera positions relative to a representation of the object being modelled or alternatively the icons may be thumbnail images of the frames of camera image data.
The user is thereby presented with a computer interface allowing correctly comparable model and camera images to be rapidly selected for evaluation. The selection process may thereby be repeated to view the images from different viewpoints in order to rapidly gain an overview of the quality of the model data as a basis for deciding whether further editing is required.
Embodiments of the above aspects of the present invention will now be described by example only and with reference to the accompanying drawings, of which:
Figure 1 is a schematic representation of components of a modular system in which the present invention may be embodied;
Figure 2 is a schematic representation of a model window;
Figure 3 is a schematic representation of a camera image window in which a displayed camera image includes an additional feature which is not represented in the model image of Figure 2;
Figure 4 is a schematic representation of a calculated locus in the 3-D model space for a new model point;
Figure 5 is a schematic representation of a model window including a new point moved by the user to positions constrained by the calculated locus;
Figure 6 is a schematic representation of a model window during user selection of points for connection to the new model point;
Figure 7 is a schematic representation of the model window in which the displayed model image shows the new model point and facets;
Figure 8 is a schematic representation of the model window showing the model image including the added model data, viewed from the same direction as the camera image of Figure 3;
Figure 9 is a schematic flowchart showing the method steps for adding the new model data;
Figure 10 is a further general illustration of the apparatus including a display screen;
Figure 11 is a representation of a model window including a display of a line representing the calculated trajectory;
Figure 12 is a schematic representation of the addition of a new model point to an existing model according to a second aspect of the present invention;
Figure 13A is a schematic representation of a camera selection window using camera icons;
Figure 13B illustrates an alternative camera selection window using thumbnail icons;
Figure 14 is a diagram illustrating the calculation of a ray intersecting a facet of the model;
Figure 15 is a diagram illustrating the subdivision of a facet to include the added model point;
Figure 16 is a diagram illustrating the display of a new model including the added point and new facets;
Figure 17 is a flowchart illustrating the method described with reference to Figures 12 to 16;
Figure 18 is a flowchart illustrating the step of replacing the existing facets with new facets using re-triangulation;
Figure 19 is a diagram illustrating the identification of co-ordinates in a camera image of a feature corresponding to the added model point;
Figure 20 is a diagram illustrating the calculation of the intersection with the facet of a ray through the camera image point and the added model point;
Figure 21 is a flowchart illustrating the method described with reference to Figures 18 to 20;
Figures 22 to 26 illustrate a third aspect of the present invention, Figure 22 illustrating schematically camera positions in relation to an object to be modelled;
Figure 23 illustrates a display screen of a computer interface allowing viewpoints to be selected by a user for selecting both camera image and model image;
Figure 24 is a flowchart illustrating the method of implementing the interface of Figure 23;
Figure 25 illustrates an alternative interface display allowing image selection using camera position icons; and
Figure 26 is a flowchart illustrating the operation of the interface of Figure 25.
Figure 1 schematically shows the components of a modular system in which the present invention may be embodied.
These components can be effected as processor-implemented instructions, hardware or a combination thereof.
Referring to Figure 1, the components are arranged to process data defining images (still or moving) of one or more objects in order to generate data defining a three-dimensional computer model of the object(s).
The input image data may be received in a variety of ways, such as directly from one or more digital cameras, via a storage device such as a disk or CD ROM, by digitisation of photographs using a scanner, or by downloading image data from a database, for example via a datalink such as the Internet, etc.
The generated 3D model data may be used to: display an image of the object(s) from a desired viewing position; control manufacturing equipment to manufacture a model of the object(s), for example by controlling cutting apparatus to cut material to the appropriate dimensions; perform processing to recognise the object(s), for example by comparing it to data stored in a database; carry out processing to measure the object(s), for example by taking absolute measurements to record the size of the object(s), or by comparing the model with models of the object(s) previously generated to determine changes therebetween; carry out processing so as to control a robot to navigate around the object(s); store information in a geographic information system (GIS) or other topographic database; or transmit the object data representing the model to a remote processing device for any such processing, either on a storage device or as a signal (for example, the data may be transmitted in virtual reality modelling language (VRML) format over the Internet, enabling it to be processed by a WWW browser); etc.
The feature detection and matching module 2 is arranged to receive image data recorded by a still camera from different positions relative to the object(s) (the different positions being achieved by moving the camera and/or the object(s)). The received data is then processed in order to match features within the different images (that is, to identify points in the images which correspond to the same physical point on the object(s)).
The feature detection and tracking module 4 is arranged to receive image data recorded by a video camera as the relative positions of the camera and object(s) are changed (by moving the video camera and/or the object(s)). As in the feature detection and matching module 2, the feature detection and tracking module 4 detects features, such as corners, in the images. However, the feature detection and tracking module 4 then tracks the detected features between frames of image data in order to determine the positions of the features in other images.
The camera position calculation module 6 is arranged to use the features matched across images by the feature detection and matching module 2 or the feature detection and tracking module 4 to calculate the transformation between the camera positions at which the images were recorded and hence determine the orientation and position of the camera focal plane when each image was recorded. The feature detection and matching module 2 and the camera position calculation module 6 may be arranged to perform processing in an iterative manner. That is, using camera positions and orientations calculated by the camera position calculation module 6, the feature detection and matching module 2 may detect and match further features in the images using epipolar geometry in a conventional manner, and the further matched features may then be used by the camera position calculation module 6 to recalculate the camera positions and orientations.
If the positions at which the images were recorded are already known, then, as indicated by arrow 8 in Figure 1, the image data need not be processed by the feature detection and matching module 2, the feature detection and tracking module 4, or the camera position calculation module 6. For example, the images may be recorded by mounting a number of cameras on a calibrated rig arranged to hold the cameras in known positions relative to the object(s).
Alternatively, it is possible to determine the positions of a plurality of cameras relative to the object(s) by adding calibration markers to the object(s) and calculating the positions of the cameras from the positions of the calibration markers in images recorded by the cameras. The calibration markers may comprise patterns of light projected onto the object(s). A camera calibration module 10 is therefore provided to receive image data from a plurality of cameras at fixed positions showing the object(s) together with calibration markers, and to process the data to determine the positions of the cameras. A preferred method of calculating the positions of the cameras (and also internal parameters of each camera, such as the focal length etc) is described in "Calibrating and 3D Modelling with a Multi-Camera System" by Wiles and Davison in 1999 IEEE Workshop on Multi-View Modelling and Analysis of Visual Scenes, ISBN 0769501109.
The 3D object surface generation module 12 is arranged to receive image data showing the object(s) and data defining the positions at which the images were recorded, and to process the data to generate a 3D computer model representing the actual surface(s) of the object(s), such as a polygon mesh model.
The texture data generation module 14 is arranged to generate texture data for rendering onto the surface model produced by the 3D object surface generation module 12. The texture data is generated from the input image data showing the object(s).
Techniques that can be used to perform the processing in the modules shown in Figure 1 are described in EP-A-0898245, EP-A-0901105, pending US applications 09/129077, 09/129079 and 09/129080, the full contents of which are incorporated herein by cross-reference, and also Annex A.
The present invention may be embodied in particular as part of the feature detection and matching module 2 (although it has applicability in other applications, as will be described later).
Figure 10 illustrates generally the apparatus 100 of the present embodiment, comprising a processor 101, display monitor 102, and input devices including a computer mouse 103 and keyboard 104. The mouse 103 enables signals such as an image point selection signal 112 (described below) to be input to the processor.
A disc drive 105 also receives a floppy disc 106 carrying program code and/or image data for use by the processor 101 in implementing the method steps of the present invention.
The display monitor 102 has a display screen 107 which, in the present mode of operation of the program, displays a model window 108 and a camera image window 109.
The processor 101 is connected to a modem 110 enabling program code or image data to be alternatively downloaded via the internet as an electronic signal 111.
The method steps according to a first aspect of the present embodiment are illustrated in Figure 9 in which steps performed by the user and by the apparatus are separated by a broken line 90 representing the interface provided by the display screen 107 and input devices 103, 104.
The method begins from a starting point at which the apparatus has already acquired a set of existing model data derived, for example, using the components in Figure 1 to process input image data in the form of a series of image frames obtained from a camera at respective different camera positions. The model data includes a set of model points and surface elements and estimates of the camera positions in the form of model co-ordinates for camera centres and look-directions derived, for example, by operation of the camera position calculation module 6 to calculate camera positions based on the image data.
At step 91, the apparatus displays in the display screen 107 a model image 20 in the model window 108 as illustrated in Figure 2. Also displayed for side by side comparison is a camera image 30 in the camera image window 109 as illustrated in Figure 3.
The model image 20 of Figure 2 is rendered using existing model data which the user wishes to update in order to add additional model data representing an additional feature 31 which is visible in the camera image of Figure 3 but which has no equivalent in the model image 20 of Figure 2. The model image 20 and camera image 30 as shown in Figures 2 and 3 are generated as views from substantially the same viewing direction.
At step 92, the user views the model image 20 and the camera image 30 and selects an image point 32 in the camera image 30 by using the computer mouse 103 to align a cursor 33 with the selected additional feature 31 and then clicking the mouse to generate an image point selection signal at step 93.
At step 94, the apparatus receives the image point selection signal and processes the signal to identify co-ordinates of the image point in the camera image 30.
Since the camera image 30 is a two-dimensional projection of the object from which the model is derived, the two-dimensional co-ordinates obtained by user selection of the image point 32 do not specify uniquely a position in three dimensions at which the new model point is to be added. At step 95, the apparatus calculates the locus in three dimensions of the positions of possible model points corresponding to the selected image point 32 which are consistent with the geometric relationship between the object and the camera position from which the displayed camera image 30 was taken. This is illustrated in Figure 4 in which the model is viewed from a different viewpoint from that of Figure 2 and in which the locus is a straight line extending in the three-dimensional space of the model from the model co-ordinates of the camera centre 40 and through the co-ordinates of the image point 32 in the camera image plane 41.
An exemplary model point 42 lying on the locus 43 is illustrated in Figure 4 at one of the possible positions at which the new model point could be added.
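Purely as an illustration of the geometry of step 95 (the patent does not prescribe any particular implementation, and the function and variable names below are chosen for illustration only), the locus can be sketched as the ray from the camera centre through the selected point on the camera image plane, expressed in model co-ordinates:

    # Minimal sketch, assuming the camera centre (element 40) and the selected
    # image point 32 on the image plane 41 are already known in model space.
    import numpy as np

    def locus_ray(camera_centre, image_point_world):
        """Return (origin, unit direction) of the locus through the image point."""
        origin = np.asarray(camera_centre, dtype=float)
        direction = np.asarray(image_point_world, dtype=float) - origin
        return origin, direction / np.linalg.norm(direction)

    def point_on_locus(origin, direction, t):
        """Candidate model point at parameter t >= 0 along the locus."""
        return origin + t * direction

    # Every non-negative t gives one admissible position for the new model point.
    o, d = locus_ray([0.0, 0.0, 0.0], [0.1, 0.2, 1.0])
    candidate = point_on_locus(o, d, 2.5)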
At step 96, the apparatus displays in the model window a new model image 21 as shown in Figure 5 in which a position indicator 50 lies on the locus 43 and is movable in response to movement of the computer mouse by the user so as to be constrained to follow a trajectory 51 corresponding to the locus when projected into the plane of the model image 21. The new model image 21 of Figure 5 is generated as a view of the model from a different viewpoint selected to clearly display the locus. Such different viewpoints are selected by the user by temporarily selecting a different mode of operation from a menu of available modes, the viewpoint selecting mode providing rotation of the model image in latitude and longitude in response to sideways and forward/reverse movement of the mouse respectively.
At step 97, the user views the model image 21 and the position indicator 50 and decides upon an appropriate position of the position indicator 50 to represent the additional feature 31. At step 98, the user actuates the mouse to move the position indicator 50 to the selected position, the apparatus updating the position of the position indicator appropriately at step 99, and at step 910 the user clicks the mouse, thereby selecting the desired position to set the position of the new model point. At step 911, the apparatus receives a selection input signal corresponding to the mouse click and freezes the position at which the position indicator 50 is displayed in the model image window. At step 912, the apparatus determines the three-dimensional co-ordinates corresponding to the selected position of the additional model point, the co-ordinates being uniquely identified in three dimensions from the known geometry of the locus and the selected position in the two-dimensional projection forming the model image 21 of Figure 5.
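One simple way to realise step 912 — offered only as a rough sketch under assumed names, not as the patented implementation — is to search along the 3D locus for the point whose projection into the current model view lies nearest the user's cursor; because the locus is known in three dimensions, that point is unique:

    # P is an assumed 3x4 projection matrix for the current model-image view.
    import numpy as np

    def project(P, X):
        """Project a 3D point X to 2D pixel co-ordinates in the model window."""
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    def select_point_on_locus(P, origin, direction, cursor_xy, t_max=100.0, steps=2000):
        """Brute-force search for the locus point projecting nearest the cursor."""
        best_t, best_d2 = 0.0, np.inf
        for t in np.linspace(0.0, t_max, steps):
            X = origin + t * direction
            d2 = np.sum((project(P, X) - np.asarray(cursor_xy, float)) ** 2)
            if d2 < best_d2:
                best_t, best_d2 = t, d2
        return origin + best_t * direction   # unique 3D co-ordinates of the new model point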
At step 913, the apparatus adds the new model point to the existing model data and at step 914 displays the new model point 64 in the model window 108 together with existing model points, superimposed on the model image 20 as shown in Figure 6.
At step 915, the user views the model image and the new model point and selects a set of existing model points 61, 62 and 63 for combining with the new model point 64 to form a new subset of points to be used in the generation of surface elements of the model. The apparatus then generates the additional surface elements shown as elements 70 and 71 in Figure 7. Texture data may then be rendered onto the resulting surface model using a texture data generation module 14 as described above with reference to Figure 1.
Figure 8 illustrates the model image incorporating the added model data when viewed from the same direction as the original camera image of Figure 3. In the model image of Figure 8, the additional feature 31 of the camera image 30 is represented by added model feature 80.
The user may decide that the added model feature 80 does not adequately represent the additional feature 31 and, if so, may select an editing mode in which the position of the position indicator 50 may be adjusted and the resulting facetted model reviewed until the added model feature is judged to be correct, this further step requiring the input of further positioning signals and model point selecting signals responsive to user actuation of the mouse.
In an alternative embodiment illustrated in Figure 11, the step 96 of displaying in the model window 108 the new model image 21 together with the indicator 50 may also include displaying a line 120 indicating the path of the trajectory 51.
Alternative embodiments are envisaged in which, for example, non-linear locus calculation is effected, for example to take account of image distortion known to be present in the camera optics. Alternative means may be utilised for the input of data in place of a computer mouse, alternative forms of pointing device such as touch screen and touch pad devices being usable, or alternatively conventional keyboard devices may be used to input co-ordinates.
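As a hedged illustration of how such a non-linear locus might be obtained (the patent does not specify this; the focal length f, principal point c, first-order radial coefficient k1 and camera-to-world rotation R are assumed inputs), the selected pixel can be undistorted before the ray is constructed:

    import numpy as np

    def undistort_pixel(pixel, f, c, k1, iterations=5):
        """Approximately invert x_d = x_u * (1 + k1 * r_u^2) by fixed-point iteration."""
        xd = (np.asarray(pixel, float) - np.asarray(c, float)) / f   # normalised, distorted
        xu = xd.copy()
        for _ in range(iterations):
            r2 = np.sum(xu ** 2)
            xu = xd / (1.0 + k1 * r2)
        return xu                                                     # normalised, undistorted

    def back_project(pixel, f, c, k1, R, camera_centre):
        """Ray in model space through the undistorted image point."""
        xu = undistort_pixel(pixel, f, c, k1)
        dir_world = np.asarray(R, float) @ np.array([xu[0], xu[1], 1.0])
        return np.asarray(camera_centre, float), dir_world / np.linalg.norm(dir_world)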
In a further alternative embodiment, the step 915 in which the user selects existing model points for surface generation may be replaced by a step carried out by the apparatus to automatically select existing model points to be used in combination with the new model point as a basis for re-facetting the model.
A preferred method of performing such automatic facetting will be described below.
The method of the present invention can be implemented by a computer program operating on the computer apparatus 100, the program comprising processor implementable instructions for controlling the processor 101. The program may be stored in a storage medium such as floppy disk 106. An aspect of the present invention thus provides a storage medium storing processor implementable instructions for carrying out the above described method. Further, the computer program may be obtained in electronic form, for example by downloading the program code in the form of a signal 111 over a network such as the internet via the modem 110.
Alternative embodiments of the present invention are envisaged in which, for example, the above described method and apparatus are used to process camera images obtained by selecting frames from a video camera recording, the frames representing different views of the object. The displayed images may additionally be modified to include dimensional information as a guide to the user in determining the optimum position of the new model point.
A further aspect of the present embodiment will now be described, relating to the automatic re-facetting of the model when a new model point is added to a set of existing model points. Corresponding reference numerals to those of preceding figures will be used where appropriate for corresponding elements.
Figure 12 illustrates a new model point 64 which has been added to the data used to derive a model image 20 displayed in a model window 108 in a display screen 107 of a processor controlled apparatus 100 of the type illustrated in Figure 10 and functioning as a system in the manner described above with reference to Figure 1.
The addition of the new model point 64 may be the result of a process using selection of a camera image point and generating a locus in the model space as described above with reference to Figures 2 to 11 or may be the result of a different process, such as for example the input via a keyboard of numerals representing co-ordinates in the three-dimensional model space.
In Figure 12, the model image 20 is representative of an irregularly shaped object represented schematically by a multi-facetted image in which the surface is comprised of a large number of triangular facets. In practice, the number of facets is likely to be greatly increased beyond the relatively small number illustrated in Figure 12, so that Figure 12 should therefore be regarded as schematic for the purpose of simplicity of representation in this respect.
The method steps required to implement the method are illustrated in the flowchart of Figure 17 in which steps performed by the user are illustrated in the left-hand portion of the flowchart, steps implemented by the apparatus are shown in the right-hand portion of the flowchart and an interface between the user and the apparatus is represented as a broken line 90. In practice, the interface is comprised of the display screen 107 and the computer mouse 103 allowing the input of pointing signals in conjunction with the display of cursor 33 on the display screen 107.
The following method steps illustrated in Figure 17 will be described with reference to Figures 12 to 16. At step 170, the user selects via mode icons 230 a mode of operation of the apparatus for choosing a view of the model and the apparatus responds by displaying the model image 20 in the model image window 108. The user actuates the mouse 103 to orient the model view to a position which is judged to be appropriate.
At step 171, the user selects a mode of operation for the addition of model points and the apparatus responds by displaying a prompt for the input of the model point information. The user inputs co-ordinates of the added model point and, at step 172, the apparatus displays the new model point in the model image window 108 as illustrated in Figure 12. The apparatus also displays on the display screen 107 a camera selection window 130 as illustrated in Figure 13A in which the camera positions relative to the object represented by the model image are graphically represented in a manner which enables the user to choose one of the cameras as being appropriately located for the purpose of defining a centre of projection to allow the new model point 64 to be projected onto the existing model. The user may for example already have knowledge of the object being modelled and a general indication of the required camera view.
In the camera selection window 130, the cameras are represented at their positions relative to a representation of the object 131 by respective camera icons 132 such that the user is able to select one of the cameras by use of the mouse, the user aligning the cursor 33 onto a selected one of the camera icons and clicking the mouse 103 to effect selection.
At step 174, the apparatus receives the camera selecting signal and determines the position of the camera centre 147 in the three-dimensional co-ordinate system of the model.
At step 175, the apparatus calculates the manner in which the new model point 64 is projected onto the surface of the model by calculating a ray in the model space through the position of the camera centre and the co-ordinates of the new model point. As shown in Figure 14, a ray 140 defined in the above manner intersects the surface of the model at a point of intersection 141 which lies within a facet 142 defined by apices 143, 144 and 145 and also intersects a second facet 146 on exiting the model surface.
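The geometric test behind step 175 can be sketched with the standard Moller-Trumbore ray/triangle intersection; the patent does not prescribe a particular algorithm, so this is offered only as one common choice, with illustrative names:

    import numpy as np

    def intersect_ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
        """Return the ray parameter t at the intersection point, or None if the ray misses."""
        origin, direction = np.asarray(origin, float), np.asarray(direction, float)
        v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direction, e2)
        det = np.dot(e1, p)
        if abs(det) < eps:                 # ray parallel to the facet plane
            return None
        inv = 1.0 / det
        s = origin - v0
        u = np.dot(s, p) * inv
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(s, e1)
        v = np.dot(direction, q) * inv
        if v < 0.0 or u + v > 1.0:
            return None
        t = np.dot(e2, q) * inv
        return t if t > eps else None      # intersection point is origin + t * direction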
At step 176, the apparatus replaces the existing facet 142 with new facets 150, 151 and 152 as illustrated in Figure 15, each of which includes the new model point 64 as a respective apex. At step 177, the apparatus displays the new model image including the added point 64 and the new facets 150, 151 and 152 as illustrated in Figure 16 in which the new facets are highlighted by being cross-hatched (facet 152 is hidden from view).
Step 176 of replacing the existing facet with new facets is illustrated in greater detail in the flowchart of Figure 18. At step 180, the apparatus determines whether the ray 140 intersects one of the model facets. If no intersection occurs, the apparatus displays a prompt to the user to select a model facet at step 181 and at step 182 the user responds by selecting a facet to be replaced, selection being carried out using the mouse and cursor. At step 183, the apparatus determines the set of co-ordinates upon which the selected facet is based and, at step 184, adds the new model point to this set of co-ordinates. In this example, since the facet being replaced is triangular, the set of co-ordinates on which the facet is based consists of three model points. When the new model point is added, there are four model points as a basis for re-triangulation. At step 185, the apparatus performs re-triangulation to define three triangular facets which connect the set of four points to form part of the surface of the model as illustrated in Figure 15.
If at step 180, the apparatus determines that the ray does in fact intersect a model facet 142 as shown in Figure 14, the point of intersection 141 is determined, thereby defining the facet 142 which is intersected by the ray, and the set of co-ordinates of the intersected facet are then used in combination with the new model point at step 184 to define the set of new co-ordinates.
If, as in the case of Figure 14, more than one facet is intersected by the ray 140, the apparatus determines at step 185 which of the facets is closest to the new model point 64 as a subject for re-triangulation. In the example of Figure 14, the facet 142 is therefore selected in preference to facet 146 since it is closer to the new model point 64.
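The facet selection and subdivision just described can be illustrated by the following sketch (names and data layout are assumed for illustration; the patent leaves the implementation open): the intersected facet whose intersection point lies nearest the new point is chosen, and that triangle is replaced by three triangles sharing the new point.

    import numpy as np

    def choose_facet(intersections, new_point):
        """intersections: list of (facet_index, intersection_point); pick the nearest one."""
        new_point = np.asarray(new_point, float)
        return min(intersections,
                   key=lambda item: np.linalg.norm(np.asarray(item[1], float) - new_point))[0]

    def split_facet(facets, facet_index, new_point_index):
        """Replace triangle (a, b, c) by (a, b, p), (b, c, p), (c, a, p), where p is the new point."""
        a, b, c = facets[facet_index]
        p = new_point_index
        return facets[:facet_index] + [(a, b, p), (b, c, p), (c, a, p)] + facets[facet_index + 1:]

    # Example: a facet with apices 143, 144 and 145 is replaced by three facets
    # each having the new model point 64 as a respective apex.
    facets = [(143, 144, 145), (10, 11, 12)]
    facets = split_facet(facets, 0, 64)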
Figure 13B illustrates an alternative method of selecting the camera position by using a camera selection window 130 which includes a series of thumbnail icons 133, each thumbnail icon comprising a thumbnail image derived from the image data obtained from a respective camera position. The user may thereby select from the displayed thumbnail images the appropriate camera position for viewing the required aspect of the object represented by the model image and, by clicking the mouse 103 when cursor 33 is on the thumbnail icon 133, generates a pointing signal 112 received at step 174 of Figure 17 by the apparatus, thereby enabling the required camera position to be determined as a centre of projection.
In the above described example, the centre of projection for projecting the new model point onto the surface of the model is defined as being the centre of the camera. The centre of projection may alternatively be defined in terms of the point in the image plane of the camera corresponding to the location of the image point corresponding to the new model point. For example, in Figure 19, a camera image 30 is displayed in a camera image window 190 to allow the user to select a camera image point 191 determined by the user to correspond to the new model point 64. As illustrated in the flowchart of Figure 21, the co-ordinates of the camera image point are input at step 210 to enable the apparatus to calculate at step 211 the ray in the model space through the co-ordinates of the added model point and camera image point, as illustrated in Figure 20 where the position 200 of the camera image point in the camera plane 201 is used to determine the trajectory of the ray 140.
Alternative devices may be used in place of the computer mouse 103 for the input of selection signals, including for example any conventional pointing device such as a touch screen or touch pad device. Alternatively, a keyboard 104 may be used for the input of commands or coordinates.
In the method of Figure 17, the user may choose to change from one mode to another at any time by selecting one of the mode icons 230.
The method of the above aspect of the present invention described with reference to Figures 1, 10, and 12 to 20 can be implemented by a computer program operating on the computer apparatus 100, the program comprising processor implementable instructions for controlling the processor 101. The program may be stored in a storage medium such as floppy disk 106. An aspect of the present invention thus provides a storage medium storing processor implementable instructions for carrying out the above described method.
Further, the computer program may be obtained in electronic form, for example by downloading the program code as a signal 111 over a network such as the internet via the modem 110.
A further aspect of the present embodiment will now be described using corresponding reference numerals to those of preceding figures where appropriate for corresponding elements. This aspect of the embodiment relates to the provision of a method and apparatus enabling an interface to allow a user to evaluate the quality of a model of the type discussed above, and in particular of the type discussed with reference to Figure 1, using the apparatus described above with reference to Figure 10.
As previously discussed, a user may adopt one of a number of techniques for refining and editing model data in order to achieve an improved model image. In order to evaluate the quality of the model image, this aspect of the embodiment allows views of the model image and camera image to be presented in respective model image windows and camera image windows on the display screen, and for the respective images to be presented such that both the camera image and model image represent views of the object from substantially the same viewpoint and in respect of which substantially the same image settings, such as magnification, field of view, etc, are provided (these latter parameters are referred to below as "camera intrinsics").
Figure 22 illustrates the relationship between a physical object 220 which is the subject of the modelling exercise and a set of camera positions L(i), relative to the object 220, from which a set of frames of image data are obtained, a corresponding camera image I(i) being obtained. The camera images may be obtained by moving a single camera successively into the camera positions L(i), by having a set of different cameras or by moving the object relative to a stationary camera, for example.
Having obtained model data allowing model images to be displayed, the user wishes to evaluate the model by displaying side by side a camera image and a model image. In Figure 22, camera position L(3) is of particular interest to the user.
Using the apparatus of Figure 10, the user operates the apparatus to achieve this result using the method steps illustrated in the flowchart of Figure 24 which will be illustrated below with reference to Figure 23.
At step 240, the user selects the required mode of operation for displaying camera and model images for the purpose of evaluation, mode selection being achieved using the interface provided by the display screen 107, the cursor 33 and the mouse 103 to select one of the mode icons 230 located in a peripheral region of the display screen as shown in Figure 23.
At step 241, the apparatus generates camera image data for each of the frames of image data, using the thumbnail image format, and displays the thumbnail images as icons 231 within an icon window 232 of the display screen 107.
The icons 231 are displayed in a sequence as calculated by camera position calculation module 6 which corresponds to the spatial relationship of the positions L(i) as shown in Figure 22, so that the sequence L(i), i = 1 to n, progressing from left to right is maintained in the layout of the icons on the display screen 107 such that images I(i), i = 1 to n, are positioned from left to right according to the value of i.
For simplicity of representation, the images shown in Figure 23 are those of a regular polyhedron in which an x is drawn on one of the faces so that the apparent position of the x in each of the displayed thumbnail images corresponds to the view which would be obtained from the camera positions L(i).
At step 242 the user views the icons and at step 243 the user selects one of the icons as being of particular relevance for the purpose of evaluation of the images. The user selects the icon as indicated in Figure 23 by the cursor 33 overlaying the third image, i = 3, corresponding to selection of the camera position L(3) of Figure 22.
At step 244, the apparatus receives the icon selection input and at step 245, the apparatus identifies the selected camera image for display in a camera image window 109. At step 246, the apparatus determines the position data for the selected camera by accessing data stored with the camera image data and at step 247 calculates the model image data using the selected position data to define the viewpoint for the model. In calculating the model image data, the apparatus also uses camera intrinsic parameters stored with the camera image data. The intrinsic parameters of the camera comprise the focal length, the pixel aspect ratio, the first order radial distortion coefficient, the skew angle (between the axes of the pixel grid) and the principal point (at which the camera optical axis intersects the viewing plane).
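A common way to use the listed intrinsic parameters when rendering the model from the selected camera's viewpoint is to assemble them into a 3x3 camera matrix and combine it with the camera pose; the sketch below is only illustrative (the patent does not mandate this representation, the convention chosen for the skew term varies between texts, and radial distortion is applied to pixel co-ordinates separately rather than entering the matrix):

    import numpy as np

    def intrinsic_matrix(focal_length, pixel_aspect, principal_point, skew_angle=np.pi / 2):
        """Build K from focal length, pixel aspect ratio, principal point and skew angle."""
        fx = focal_length
        fy = focal_length * pixel_aspect
        s = fx / np.tan(skew_angle)          # effectively zero when pixel-grid axes are perpendicular
        cx, cy = principal_point
        return np.array([[fx, s, cx],
                         [0.0, fy, cy],
                         [0.0, 0.0, 1.0]])

    def project_model_point(K, R, camera_centre, X):
        """Project model point X into the view of a camera with rotation R and centre C."""
        x_cam = np.asarray(R, float) @ (np.asarray(X, float) - np.asarray(camera_centre, float))
        x_img = K @ x_cam
        return x_img[:2] / x_img[2]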
At step 248, the apparatus displays a model image 20 in the model image window 108 and the camera image 30 in a camera image window 109, thereby allowing the user to view and compare the selected camera image and the model image as calculated from a corresponding viewpoint.
In Figure 23, the icons 231 are linked in series by links 233. If necessary, a large number of such icons may be displayed in an array comprising a number of rows, maintaining the links between successive icons in order to visually indicate the continuity of the sequence (i.e. the direction of increasing i). The use of such links therefore assists in providing the user with an indication of where the most appropriate image is to be selected.
After viewing the images for a selected viewpoint, the user may then choose to view camera and model images for different viewpoints by selecting different icons, repeating step 243 of Figure 24, and resulting in the apparatus repeating steps 244 to 248 to enable the further views to be seen.
If the user then decides that the model data requires editing, the user may then select a different mode of operation by selecting the appropriate mode icon 230 for further operation of the apparatus.
An alternative embodiment will now be described with reference to Figure 25 and the flowchart of Figure 26. Referring to Figure 26, at step 260 the user selects a required mode of operation by selecting the appropriate mode icon 230 of Figure 25. The apparatus responds by generating and displaying icons 250 in a camera position window 251.
Within the camera position window 251, a display generated by the apparatus at step 261 comprises a representation 252 of the object based upon the model data together with representations of cameras at positions L(i), i = 1 to n, such that the relative positions of the cameras and the representation 252 correspond to the calculated camera positions developed by the camera position calculation module 6 of Figure 1.
The representation 252 is thereby placed at the origin of the co-ordinate system of the model and the icons 250 located in effect at the calculated camera positions.
This representation of the relative positions of the cameras and object allows the user to easily select a viewing point for the camera and model images to be displayed. In order to select a particular viewpoint, the user at step 262 views the icons 250 within the window 251 and at step 263 selects one of the icons at the desired camera position. The apparatus responds at step 265 by identifying the camera image data corresponding to the selected camera position. At step 266, the apparatus then proceeds to calculate the model image data using the selected position data as a viewpoint and using camera intrinsic parameters stored in conjunction with the camera image data identified in step 265.
At step 267, the apparatus then displays the model image in model image window 108 and the camera image 30 in camera image window 109 to be viewed by the user at step 268. The user is then able to evaluate the quality of the image by comparison between the images.
In each of the display interfaces of Figures 23 and 25, the camera image window 109 and the model image window 108 may be moved relative to one another using a drag and drop method by means of actuating the mouse. Similarly, the icon windows 232 and 251 may be moved relative to the image windows 108 and 109, thereby allowing the user to arrange the windows for maximum ease of selection and comparison.
The method of the present invention can be implemented by a computer program operating on the computer apparatus 100, the program comprising processor implementable instructions for controlling the processor 101. The program may be stored in a storage medium such as floppy disk 106. An aspect of the present invention thus provides a storage medium storing processor implementable instructions for carrying out the above described method. Further, the computer program may be obtained in electronic form, for example by downloading the program code as a signal 111 over a network such as the internet via the modem 110.

ANNEX A

1. CORNER DETECTION

1.1 Summary

This process described below calculates corner points, to sub-pixel accuracy, from a single grey scale or colour image. It does this by first detecting edge boundaries in the image and then choosing corner points to be points where a strong edge changes direction rapidly. The method is based on the facet model of corner detection, described in Haralick and Shapiro¹.
1.2 Algorithm

The algorithm has four stages:
(1) Create grey scale image (if necessary);
(2) Calculate edge strengths and directions;
(3) Calculate edge boundaries;
(4) Calculate corner points.
1.2.1 Create grey scale image

The corner detection method works on grey scale images. For colour images, the colour values are first converted to floating point grey scale values using the formula:
grey scale = (0.3 x red) + (0.59 x green) + (0.11 x blue)    .... A-1
This is the standard definition of brightness as defined by NTSC and described in Foley and van Dam².
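Equation A-1 transcribes directly into code; the small sketch below assumes the image is held as an H x W x 3 RGB array:

    import numpy as np

    def to_grey(rgb):
        """Convert an RGB image to floating point grey scale using the NTSC weights of A-1."""
        rgb = np.asarray(rgb, dtype=float)
        return 0.3 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]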
1.2.2 Calculate edge strengths and directions

The edge strengths and directions are calculated using the 7x7 integrated directional derivative gradient operator discussed in section 8.9 of Haralick and Shapiro¹.
The row and column forms of the derivative operator are both applied to each pixel in the grey scale image. The results are combined in the standard way to calculate the edge strength and edge direction at each pixel.
The output of this part of the algorithm is a complete derivative image.
1.2.3 Calculate edge boundaries

The edge boundaries are calculated by using a zero crossing edge detection method based on a set of 5x5 kernels describing a bivariate cubic fit to the neighbourhood of each pixel.
The edge boundary detection method places an edge at all pixels which are close to a negatively sloped zero crossing of the second directional derivative taken in the direction of the gradient, where the derivatives are defined using the bivariate cubic fit to the grey level surface. The subpixel location of the zero crossing is also stored along with the pixel location.
The method of edge boundary detection is described in more detail in section 8.8.4 of Haralick and Shapiro¹.
1.2.4 Calculate corner points

The corner points are calculated using a method which uses the edge boundaries calculated in the previous step.
Corners are associated with two conditions:
(1) the occurrence of an edge boundary; and
(2) significant changes in edge direction.
Each of the pixels on the edge boundary is tested for "cornerness" by considering two points equidistant to it along the tangent direction. If the change in the edge direction is greater than a given threshold then the point is labelled as a corner. This step is described in section 8.10.1 of Haralick and Shapiroi.
Finally the corners are sorted on the product of the edge strength magnitude and the change of edge direction. The top 200 corners which are separated by at least 5 pixels are output.
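The corner test and the final selection can be illustrated as follows; this is only a sketch with an assumed data layout (each edge-boundary pixel carrying position, strength and direction), not the reference implementation described in Haralick and Shapiro:

    import numpy as np

    def corner_candidates(edge_points, direction_at, spacing=2, angle_threshold=0.3):
        """Label edge pixels whose edge direction changes sharply along the tangent.

        edge_points  -- list of (x, y, strength, theta) tuples, theta being the tangent angle
        direction_at -- assumed callback returning the edge direction at an (x, y) position
        """
        corners = []
        for x, y, strength, theta in edge_points:
            tx, ty = np.cos(theta), np.sin(theta)
            d1 = direction_at(x + spacing * tx, y + spacing * ty)
            d2 = direction_at(x - spacing * tx, y - spacing * ty)
            change = abs(np.arctan2(np.sin(d1 - d2), np.cos(d1 - d2)))   # wrapped angle difference
            if change > angle_threshold:
                corners.append((x, y, strength * change))                # ranking score
        return corners

    def select_top_corners(corners, count=200, min_separation=5):
        """Keep the best-scoring corners that are at least min_separation pixels apart."""
        kept = []
        for x, y, score in sorted(corners, key=lambda c: -c[2]):
            if all((x - kx) ** 2 + (y - ky) ** 2 >= min_separation ** 2 for kx, ky, _ in kept):
                kept.append((x, y, score))
            if len(kept) == count:
                break
        return kept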
2. FEATURE TRACKING

2.1 Summary

This process described below tracks feature points (typically corners) across a sequence of grey scale or colour images.
The tracking method uses a constant image velocity Kalman filter to predict the motion of the corners, and a correlation based matcher to make the measurements of corner correspondences.
The method assumes that the motion of corners is smooth enough across the sequence of input images that a constant velocity Kalman filter is useful, and that corner measurements and motion can be modelled by gaussians.
2.2 Algorithm
1) Input corners from an image.
2) Predict forward using Kalman filter.
3) If the position uncertainty of the predicted corner is greater than a threshold, Λ, as measured by the state positional variance, drop the corner from the list of currently tracked corners.
4) Input a new image from the sequence.
5) For each of the currently tracked corners:
a) search a window in the new image for pixels which match the corner; b) update the corresponding Kalman filter, using any new observations (i.e. matches).
6) Input the corners from the new image as new points to be tracked (first, filtering them to remove any which are too close to existing tracked points).
7) Go back to (2).

2.2.1 Prediction

This uses the following standard Kalman filter equations for prediction, assuming a constant velocity and random uniform gaussian acceleration model for the dynamics:
x_{n+1} = Θ_{n+1,n} x_n .... A-2

K_{n+1} = Θ_{n+1,n} K_n Θ_{n+1,n}ᵀ + Q_n .... A-3

where x is the 4D state of the system (defined by the position and velocity vector of the corner), K is the state covariance matrix, Θ is the transition matrix, and Q is the process covariance matrix.
In this model, the transition matrix and process covariance matrix are constant and have the following values:
Θ_{n+1,n} = [ I  I ]
            [ 0  I ]    .... A-4

Q_n = [ 0        0     ]
      [ 0     σ_v² I   ]    .... A-5

where I is the 2x2 identity matrix, 0 is the 2x2 zero matrix and σ_v² is the process velocity variance.

2.2.2 Searching and matching

This uses the positional uncertainty (given by the top two diagonal elements of the state covariance matrix, K) to define a region in which to search for new measurements (i.e. a range gate).

The range gate is a rectangular region of dimensions:
Δx = √K₁₁, Δy = √K₂₂ .... A-6

The correlation score between a window around the previously measured corner and each of the pixels in the range gate is calculated.
The two top correlation scores are kept.
If the top correlation score is larger than a threshold, C₀, and the difference between the two top correlation scores is larger than a threshold, ΔC, then the pixel with the top correlation score is kept as the latest measurement.
2.2.3 Update

The measurement is used to update the Kalman filter in the standard way:
G = K Hᵀ (H K Hᵀ + R)⁻¹ .... A-7

x → x + G(x̂ − Hx) .... A-8

K → (I − GH) K .... A-9

where G is the Kalman gain, H is the measurement matrix, R is the measurement covariance matrix and x̂ is the new measurement of the corner position.

In this implementation, the measurement matrix and measurement covariance matrix are both constant, being given by:
H = ( I  0 ) .... A-10

R = σ² I .... A-11

2.2.4 Parameters

The parameters of the algorithm are:
Initial conditions: x₀ and K₀.
Process velocity variance: σ_v².
Measurement variance: σ².
Position uncertainty threshold for loss of track: Δ.
Correlation threshold: C₀.
Matching ambiguity threshold: ΔC.
For the initial conditions, the position of the first corner measurement and zero velocity are used, with an initial covariance matrix of the form:
K₀ = [ 0        0     ]
     [ 0     σ₀² I    ]    .... A-12

σ₀² is set to σ₀² = 200 (pixels/frame)².
The algorithm's behaviour over a long sequence is in any case not too dependent on the initial conditions.
The process velocity variance is set to the fixed value of 50 (pixels/frame)². The process velocity variance would have to be increased above this for a hand-held sequence. In fact it is straightforward to obtain a reasonable value for the process velocity variance adaptively.
The measurement variance is obtained from the following model:
σ² = (rK + a) .... A-13

where K = √(K₁₁K₂₂) is a measure of the positional uncertainty, r is a parameter related to the likelihood of obtaining an outlier, and a is a parameter related to the measurement uncertainty of inliers. "r" and "a" are set to r = 0.1 and a = 1.0.
This model takes into account, in a heuristic way, the fact that it is more likely that an outlier will be obtained if the range gate is large.
The measurement variance (in fact the full measurement covariance matrix R) could also be obtained from the behaviour of the auto-correlation in the neighbourhood of the measurement. However this would not take into account the likelihood of obtaining an outlier.
The remaining parameters are set to the values: Δ = 400 pixels², C₀ = 0.9 and ΔC = 0.001.
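A compact sketch of the per-corner tracker defined by equations A-2 to A-13 is given below. The correlation matcher is omitted (the measurement z is assumed to come from whatever search is run inside the range gate), and the block forms of the matrices follow the reconstruction given above.

```python
import numpy as np

THETA = np.block([[np.eye(2), np.eye(2)],
                  [np.zeros((2, 2)), np.eye(2)]])         # A-4
Q = np.block([[np.zeros((2, 2)), np.zeros((2, 2))],
              [np.zeros((2, 2)), 50.0 * np.eye(2)]])      # A-5, sigma_v^2 = 50
H = np.hstack([np.eye(2), np.zeros((2, 2))])              # A-10
r, a = 0.1, 1.0                                            # A-13 parameters

def predict(x, K):
    """Constant velocity prediction (equations A-2 and A-3)."""
    return THETA @ x, THETA @ K @ THETA.T + Q

def range_gate(K):
    """Half-widths of the rectangular search region (equation A-6)."""
    return np.sqrt(K[0, 0]), np.sqrt(K[1, 1])

def update(x, K, z):
    """Standard Kalman update (equations A-7 to A-9) with the
    heuristic measurement variance of equation A-13."""
    kappa = np.sqrt(K[0, 0] * K[1, 1])
    R = (r * kappa + a) * np.eye(2)                        # A-11, A-13
    G = K @ H.T @ np.linalg.inv(H @ K @ H.T + R)           # A-7
    x = x + G @ (z - H @ x)                                # A-8
    K = (np.eye(4) - G @ H) @ K                            # A-9
    return x, K
```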
3. 3D SURFACE GENERATION

3.1 Architecture

In the method described below, it is assumed that the object can be segmented from the background in a set of images completely surrounding the object. Although this restricts the generality of the method, this constraint can often be arranged in practice, particularly for small objects.

The method consists of five processes, which are run consecutively:

First, for all the images in which the camera positions and orientations have been calculated, the object is segmented from the background, using colour information. This produces a set of binary images, where the pixels are marked as being either object or background.
The segmentations are used, together with the camera positions and orientations, to generate a voxel carving, consisting of a 3D grid of voxels enclosing the object. Each of the voxels is marked as being either object or empty space.
The voxel carving is turned into a 3D surface triangulation, using a standard triangulation algorithm (marching cubes).

The number of triangles is reduced substantially by passing the triangulation through a decimation process.
Finally the triangulation is textured, using appropriate parts of the original images to provide the texturing on the triangles.
3.2 Segmentation

The aim of this process is to segment an object (in front of a reasonably homogeneous coloured background) in an image using colour information. The resulting binary image is used in voxel carving.
Two alternative methods are used:
Method 1: input a single RGB colour value representing the background colour - each RGB pixel in the image is examined and if the Euclidean distance to the background colour (in RGB space) is less than a specified threshold the pixel is labelled as background (BLACK).
Method 2: input a "blue" image containing a representative region of the background.
The algorithm has two stages:
(1) Build a hash table of quantised background colours.
(2) Use the table to segment each image.
Step 1) Build hash table

Go through each RGB pixel, p, in the "blue" background image.

Set q to be a quantised version of p. Explicitly:
q = (p + t/2) / t .... A-14

where t is a threshold determining how near RGB values need to be to background colours to be labelled as background.
The quantisation step has two effects:
1) reducing the number of RGB pixel values, thus increasing the efficiency of hashing;

2) defining the threshold for how close a RGB pixel has to be to a background colour pixel to be labelled as background.

q is now added to a hash table (if not already in the table) using the (integer) hashing function:

h(q) = (q_red & 7)·2⁶ + (q_green & 7)·2³ + (q_blue & 7) .... A-15

That is, the 3 least significant bits of each colour field are used. This function is chosen to try and spread out the data into the available bins. Ideally each bin in the hash table has a small number of colour entries. Each quantised colour RGB triple is only added once to the table (the frequency of a value is irrelevant).
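A minimal sketch of Step 1 is shown below. The per-channel integer quantisation of equation A-14 and the representation of the input as (r, g, b) tuples are assumptions made for the example; a dict of sets stands in for the hash table and its bins.

```python
def build_background_table(blue_pixels, t):
    """Step 1: hash the quantised colours of a representative background
    ("blue") image.  blue_pixels is an iterable of integer (r, g, b)
    tuples and t is the quantisation threshold of equation A-14."""
    table = {}
    for (red, green, blue) in blue_pixels:
        q = ((red + t // 2) // t, (green + t // 2) // t, (blue + t // 2) // t)
        h = (q[0] & 7) * 2**6 + (q[1] & 7) * 2**3 + (q[2] & 7)   # A-15
        table.setdefault(h, set()).add(q)
    return table
```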
Step 2) Segment each image

Go through each RGB pixel, v, in each image.
Set w to be the quantised version of v as before.
To decide whether w is in the hash table, explicitly look at all the entries in the bin with index h(w) and see if any of them are the same as w. If yes, then v is a background pixel - set the corresponding pixel in the output image to BLACK. If no then v is a foreground
pixel - set the corresponding pixel in the output image to WHITE.

Post Processing: For both methods a post process is performed to fill small holes and remove small isolated regions.
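Step 2 can be sketched as follows, using the table built in the Step 1 example above (the tuple-based pixel representation is the same assumption; post-processing is applied afterwards):

```python
def segment(image_pixels, table, t):
    """Step 2: label each pixel BLACK (background) if its quantised colour
    is found in bin h(w) of the background table, otherwise WHITE."""
    labels = []
    for (red, green, blue) in image_pixels:
        w = ((red + t // 2) // t, (green + t // 2) // t, (blue + t // 2) // t)
        h = (w[0] & 7) * 2**6 + (w[1] & 7) * 2**3 + (w[2] & 7)
        labels.append('BLACK' if w in table.get(h, set()) else 'WHITE')
    return labels
```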
A median filter is used with a circular window. (A circular window is chosen to avoid biasing the result in the x or y directions.)
Build a circular mask of radius r. Explicitly store the start and end values for each scan line on the circle.
Go through each pixel in the binary image.
Place the centre of the mask on the current pixel. Count the number of BLACK pixels and the number of WHITE pixels in the circular region.

If (#WHITE pixels ≥ #BLACK pixels) then set the corresponding output pixel to WHITE. Otherwise the output pixel is BLACK.
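A direct (unoptimised) sketch of this post-processing filter is shown below. The original stores per-scanline start and end values for the circle for efficiency; here the mask offsets are simply enumerated, and clipping at the image border is an assumption made for the example.

```python
import numpy as np

def circular_majority_filter(binary, radius):
    """Post-process a binary image (0 = BLACK, 1 = WHITE): each output
    pixel becomes WHITE if WHITE pixels are in the majority (or tied)
    inside a circular window, BLACK otherwise."""
    h, w = binary.shape
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    offsets = np.argwhere(yy**2 + xx**2 <= radius**2) - radius
    out = np.zeros_like(binary)
    for y in range(h):
        for x in range(w):
            ys = np.clip(y + offsets[:, 0], 0, h - 1)
            xs = np.clip(x + offsets[:, 1], 0, w - 1)
            votes = binary[ys, xs]
            out[y, x] = 1 if votes.sum() * 2 >= votes.size else 0
    return out
```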
3.3 Voxel carving

The aim of this process is to produce a 3D voxel grid, enclosing the object, with each of the voxels marked as either object or empty space.
The input to the algorithm is:
- a set of binary segmentation images, each of which is associated with a camera position and orientation;
- 2 sets of 3D co-ordinates, (xmin, ymin, zmin) and (xmax, ymax, zmax), describing the opposite vertices of a cube surrounding the object;
- a parameter, n, giving the number of voxels required in the voxel grid.
A pre-processing step calculates a suitable size for the voxels (they are cubes) and the 3D locations of the voxels, using n, (xmin, ymin, zmin) and (xmax, ymax, zmax).

Then, for each of the voxels in the grid, the mid-point of the voxel cube is projected into each of the segmentation images. If the projected point falls onto a pixel which is marked as background, on any of the images, then the corresponding voxel is marked as empty space, otherwise it is marked as belonging to the object.
Voxel carving is described further in "Rapid Octree Construction from Image Sequences" by R. Szeliski in CVGIP: Image Understanding, Volume 58, Number 1, July 1993, pages 23-32.
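A sketch of the carve itself is given below. The camera model is kept external: the function `project(i, point)` is an assumed helper that maps a 3D point into (row, column) pixel coordinates of segmentation image i, or returns None if the point falls outside that image.

```python
import numpy as np

def carve(segmentations, project, n, lo, hi):
    """Mark each voxel of an n x n x n grid spanning the cube (lo, hi)
    as object (True) or empty space (False).  segmentations is a list of
    binary images in which True means object."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    size = (hi - lo) / n                      # voxel edge lengths
    grid = np.ones((n, n, n), dtype=bool)
    for idx in np.ndindex(n, n, n):
        centre = lo + (np.array(idx) + 0.5) * size   # voxel mid-point
        for i, seg in enumerate(segmentations):
            pix = project(i, centre)
            if pix is not None:
                row, col = pix
                if not seg[row, col]:          # background in this view
                    grid[idx] = False          # so the voxel is empty space
                    break
    return grid
```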
3.4 Marching cubes

The aim of the process is to produce a surface triangulation from a set of samples of an implicit function representing the surface (for instance a signed distance function). In the case where the implicit function has been obtained from a voxel carve, the implicit function takes the value -1 for samples which are inside the object and +1 for samples which are outside the object.
Marching cubes is an algorithm that takes a set of samples of an implicit surface (e.g. a signed distance function) sampled at regular intervals on a voxel grid, and extracts a triangulated surface mesh. Lorensen and Cline (reference iii) and Bloomenthal (reference iv) give details of the algorithm and its implementation.
The marching-cubes algorithm constructs a surface mesh by "marching" around the cubes while following the zero crossings of the implicit surface f(x)=0, adding to the triangulation as it goes. The signed distance allows the marching-cubes algorithm to interpolate the location of the surface with higher accuracy than the resolution of the volume grid. The marching cubes algorithm can be used as a continuation method (i.e. it finds an initial surface point and extends the surface from this point).
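As an illustration of extracting a mesh from sampled implicit-function values, the sketch below uses scikit-image's marching cubes on a synthetic signed distance field for a sphere; the library choice and the test volume are assumptions made for the example, not part of the described method.

```python
import numpy as np
from skimage import measure

# Sample a signed distance function (negative inside the object) on a
# regular grid and extract the zero-crossing surface.
n = 64
axis = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
volume = np.sqrt(x**2 + y**2 + z**2) - 0.5      # sphere of radius 0.5
verts, faces, normals, values = measure.marching_cubes(volume, level=0.0)
print(len(verts), "vertices,", len(faces), "triangles")
```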
3.5 Decimation

The aim of the process is to reduce the number of triangles in the model, making the model more compact and therefore easier to load and render in real time.
The process reads in a triangular mesh and then randomly removes each vertex to see if the vertex contributes to the shape of the surface or not (i.e. if the hole left by removing the vertex is filled, is the vertex a "long" way from the filled hole?). Vertices which do not contribute to the shape are kept out of the triangulation. This results in fewer vertices (and hence triangles) in the final model.
The algorithm is described below in pseudo-code.
INPUT
    Read in vertices
    Read in triples of vertex IDs making up triangles

PROCESSING
    Repeat NVERTEX times
        Choose a random vertex, V, which hasn't been chosen before
        Locate set of all triangles having V as a vertex, S
        Order S so adjacent triangles are next to each other
        Re-triangulate triangle set, ignoring V (i.e. remove selected triangles & V and then fill in hole)
        Find the maximum distance between V and the plane of each triangle
        If (distance < threshold)
            Discard V and keep new triangulation
        Else
            Keep V and return to old triangulation

OUTPUT
    Output list of kept vertices
    Output updated list of triangles

The process therefore combines adjacent triangles in the model produced by the marching cubes algorithm, if this can be done without introducing large errors into the model.
The selection of the vertices is carried out in a random order in order to avoid the effect of gradually eroding a large part of the surface by consecutively removing neighbouring vertices.
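The distance test in the pseudo-code above (the "maximum distance between V and the plane of each triangle" that re-fills the hole) can be sketched as follows; the re-triangulation itself is not shown, and the triangle representation as 3x3 arrays is an assumption for the example.

```python
import numpy as np

def max_distance_to_planes(vertex, triangles):
    """Largest distance from the removed vertex to the planes of the
    triangles that re-fill the hole."""
    v = np.asarray(vertex, dtype=np.float64)
    worst = 0.0
    for tri in triangles:
        a, b, c = np.asarray(tri, dtype=np.float64)
        normal = np.cross(b - a, c - a)
        length = np.linalg.norm(normal)
        if length == 0.0:          # skip degenerate triangles
            continue
        worst = max(worst, abs(np.dot(v - a, normal / length)))
    return worst
```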
3.6 Further Surface Generation Techniques

Further techniques which may be employed to generate a 3D computer model of an object surface include voxel colouring, for example as described in "Photorealistic Scene Reconstruction by Voxel Coloring" by Seitz and Dyer in Proc. Conf. Computer Vision and Pattern Recognition, 1997, p1067-1073, "Plenoptic Image Editing" by Seitz and Kutulakos in Proc. 6th International Conference on Computer Vision, pp 17-24, "What Do N Photographs Tell Us About 3D Shape?" by Kutulakos and Seitz in University of Rochester Computer Sciences Technical Report 680, January 1998, and "A Theory of Shape by Space Carving" by Kutulakos and Seitz in University of Rochester Computer Sciences Technical Report 692, May 1998.
4. TEXTURING

The aim of the process is to texture each surface polygon (typically a triangle) with the most appropriate image texture. The output of the process is a VRML model of the surface, complete with texture co-ordinates.
The image in which a given surface triangle has the largest projected area is a good image to use for texturing that triangle, as it is the image in which the texture will appear at the highest resolution.
A good approximation to the image giving the largest projected area, under the assumption that there is no substantial difference in scale between the different images, can be obtained in the following way.
For each surface triangle, the image i is found such that the triangle is the most front facing (i.e. having the greatest value for n̂ · v̂_i, where n̂ is the triangle normal and v̂_i is the viewing direction for the i-th camera). The vertices of the projected triangle are then used as texture co-ordinates in the resulting VRML model.
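A minimal sketch of this per-triangle view selection is shown below. The orientation of the viewing-direction vectors (chosen so that a larger dot product means a more front-facing triangle) is a sign convention assumed for the example.

```python
import numpy as np

def best_image_for_triangle(triangle_normal, viewing_directions):
    """Return the index of the camera for which the triangle is most
    front facing, i.e. the i maximising n_hat . v_hat_i."""
    n_hat = np.asarray(triangle_normal, dtype=np.float64)
    n_hat /= np.linalg.norm(n_hat)
    v_hats = np.asarray(viewing_directions, dtype=np.float64)
    return int(np.argmax(v_hats @ n_hat))
```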
This technique can fail where there is a substantial amount of self-occlusion, or several objects occluding each other. This is because the technique does not take into account the fact that the object may occlude the selected triangle. However, in practice this does not appear to be much of a problem.
It has been found that, if every image is used for texturing, then this can result in very large VRML models being produced. These can be cumbersome to load and render in real time. Therefore, in practice, a subset of images is used to texture the model. This subset may be specified in a configuration file.
References

i   R M Haralick and L G Shapiro: "Computer and Robot Vision Volume 1", Addison-Wesley, 1992, ISBN 0-201-10877-1 (v.1), section 8.
ii  J Foley, A van Dam, S Feiner and J Hughes: "Computer Graphics: Principles and Practice", Addison-Wesley, ISBN 0-201-12110-7.
iii W.E. Lorensen and H.E. Cline: "Marching Cubes: A High Resolution 3D Surface Construction Algorithm", in Computer Graphics, SIGGRAPH '87 Proceedings, 21: 163-169, July 1987.
iv  J. Bloomenthal: "An Implicit Surface Polygonizer", Graphics Gems IV, AP Professional, 1994, ISBN 0123361559, pp 324-350.

Claims (1)

1. A method of operating an apparatus for generating model data representative of a model in a three dimensional space from image data representative of a set of camera images of an object; the apparatus performing the steps of;
displaying a model image based on an existing set of model data;
displaying one of the camera images of the object for selection by a user of an additional feature to be represented by additional model data;
receiving an image point selection signal responsive to user actuation of an input means and identifying co-ordinates of an image point in the camera image defining the selected additional feature;
calculating a locus in the three dimensional space defining positions of possible model points corresponding to the image point and consistent with the geometric relationship between the object and a camera position from which the displayed camera image was taken;
displaying a position indicator in the model image at co-ordinates in the model image corresponding to one of the possible model points on the locus;
receiving positioning signals responsive to user actuation of the input means and updating the co-ordinates of the position indicator such that movement of the position indicator is constrained to follow a trajectory in the model image corresponding to the locus;
receiving a model point selecting signal responsive to user actuation of the input means and determining selected co-ordinates of the position indicator to be the position indicator co-ordinates at the time of receiving the model point selecting signal; and
determining co-ordinates of the additional model point in the three dimensional space corresponding to the selected co-ordinates of the position indicator.
2. A method as claimed in claim 1 including displaying in the model image a line representing the locus.
3. A method as claimed in any preceding claim wherein the locus is a straight line in the three dimensional space.
4. A method as claimed in any preceding claim wherein the input means comprises a computer mouse and wherein said positioning signals are responsive to user actuation of the mouse by clicking the mouse during movement and releasing the mouse at a selected position to generate the model point selecting signal.
5. A method as claimed in any preceding claim including the step of displaying a symbol representative of the additional model point at a model image point corresponding to the selected co-ordinates.
6. A method as claimed in claim 5 comprising the further step of editing the position of the additional model point in response to receiving further positioning signals and model point selecting signals responsive to user actuation of the input means.
7. A method as claimed in any preceding claim including the further step of receiving a processing instruction signal and, responsive to said signal, implementing a model generating process to incorporate the additional model point into the model data.
8. A method as claimed in claim 7 including the step of generating surface elements of the model from the model data including the additional model point and displaying said surface elements in the model image.
9. Apparatus for generating model data representative of a model in a three dimensional space from image data representative of a set of camera images of an object; the apparatus comprising;
an interface comprising display means operable to display images to a user and input means responsive to user actuation;
control means operable to control the display means to display a model image based on an existing set of model data and to display one of the camera images of the object for selection by a user of an additional feature to be represented by additional model data;
receiving means for receiving an image point selection signal responsive to user actuation of the input means and identifying co-ordinates of an image point in the camera image defining the selected additional feature;
calculating means for calculating a locus in the three dimensional space defining positions of possible model points corresponding to the image point and consistent with the geometric relationship between the object and a camera position from which the displayed camera image was taken;
the control means being further operable to control the display means to display a position indicator in the model image at co-ordinates in the model image corresponding to one of the possible model points on the locus;
the apparatus further comprising means for receiving positioning signals responsive to user actuation of the input means and updating the co-ordinates of the position indicator such that movement of the position indicator is constrained to follow a trajectory in the model image corresponding to the locus;
means for receiving a model point selecting signal responsive to user actuation of the input means and determining selected co-ordinates of the position indicator to be the position indicator co-ordinates at the time of receiving the model point selecting signal; and
means for determining co-ordinates of the additional model point in the three dimensional space corresponding to the selected co-ordinates of the position indicator.
10. Apparatus as claimed in claim 9 wherein the control means is operable to control the display means to display in the model image a line representing the locus.
11. Apparatus as claimed in any of claims 9 and 10 wherein the calculating means is operable to calculate the locus as a straight line in the three dimensional space.
12. Apparatus as claimed in any of claims 9 to 11 wherein the input means comprises a computer mouse and wherein said positioning signals are responsive to user actuation of the mouse by clicking the mouse during movement and releasing the mouse at a selected position to generate the model point selecting signal.
13. Apparatus as claimed in any of claims 9 to 11 wherein the control means is operable to control the display means to display a symbol representative of the additional model point at a model image point corresponding to the selected co-ordinates.
14. Apparatus as claimed in claim 13 comprising editing means for editing the position of the additional model point in response to receiving further positioning signals and model point selecting signals responsive to user actuation of the input means.
15. Apparatus as claimed in any of claims 9 to 14 including model generating means operable to receive a processing instruction signal and, responsive to said signal, to implement a model generating process to incorporate the additional model point into the model data.
16. Apparatus as claimed in claim 15 wherein the model generating means is operable to generate surface elements of the model from the model data including the additional model point and wherein the control means is operable to control the display to display said surface elements in the model image.
17. A computer program comprising processor implementable instructions for carrying out a method as claimed in any of claims 1 to 8.
18. A storage medium storing processor implementable instructions for controlling a processor to carry out a method as claimed in any of claims 1 to 8.
19. An electrical signal carrying processor implementable instructions for controlling a processor to carry out a method as claimed in any of claims 1 to 8.
20. A method of operating an apparatus for generating model data defining a model in a three dimensional space, the model data comprising co-ordinates defining model points and surface elements generated with reference to the model points; the method comprising editing an existing set of model data by the steps of;
adding a new model point to the existing set of model data;
projecting the new model point onto the model and identifying a selected one of the surface elements onto which the new model point is projected;
identifying a subset of the model points which define the generation of the selected surface element;
adding the new model point to the subset to form an edited subset of model points; and
generating one or more edited surface elements from the edited subset of model points to replace the selected surface element.
21. A method as claimed in claim 20 wherein the projecting step comprises receiving input data defining a centre of projection and projecting the new model point onto the model in a direction of projection along a ray generated through the centre of projection and the new model point.
22. A method as claimed in claim 21 wherein the existing set of model data is generated by processing image data representative of camera images of an object to be modelled.
23. A method as claimed in claim 22 wherein the step of receiving input data comprises receiving an image selection signal for selecting one of said camera images, and defining the centre of projection for projecting the model point to be co-ordinates representative of a camera position from which the selected camera image was taken.
24. A method as claimed in claim 23 including the step of displaying a set of camera images and receiving a selection signal responsive to user actuation of an input means to select the selected camera image.
25. A method as claimed in claim 22 wherein the step of receiving input data comprises receiving an image selection signal for selecting one of said camera images and receiving an image point selection signal defining co-ordinates of an image point in said selected camera image corresponding to the new model point.
26. A method as claimed in claim 25 including the step of calculating in the three dimensional space co-ordinates of the centre of projection to correspond to the position of the image point in an image plane of the camera.
27. A method as claimed in any of claims 21 to 26 including the step of determining whether a plurality of surface elements are intersected by the ray and, if so, determining the selected surface to be whichever of the intersected surface elements is closest to the new model point.
28. A method as claimed in any of claims 20 to 27 wherein the surface elements comprise triangular facets and wherein each subset of the model points defining the generation of the selected surface element comprises three model points constituting apices of the triangular facets.
29. Apparatus for generating model data defining a model in a three dimensional space, the model data comprising co-ordinates defining model points and surface elements generated with reference to the model points, the apparatus being operable to edit an existing set of model data and comprising;
means for adding a new model point to the existing set of model data;
means for projecting the new model point onto the model and identifying a selected one of the surface elements onto which the new model point is projected;
means for identifying a subset of the model points which define the generation of the selected surface element;
means for adding the new model point to the subset to form an edited subset of model points; and
means for generating one or more edited surface elements from the edited subset of model points to replace the selected surface element.
30. Apparatus as claimed in claim 29 wherein the projecting means comprises receiving means for receiving input data defining a centre of projection, the projecting means being operable to project the new model point onto the model in a direction of projection along a ray generated through the centre of projection and the new model point.
31. Apparatus as claimed in claim 30 wherein the existing set of model data is generated by processing image data representative of camera images of an object to be modelled.
32. Apparatus as claimed in claim 31 wherein the receiving means is operable to receive an image selection signal for selecting one of said camera images, and to define the centre of projection for projecting the model point to be co-ordinates representative of a camera position from which the selected camera image was taken.
33. Apparatus as claimed in claim 32 comprising interface means for displaying a set of camera images and receiving a selection signal responsive to user actuation of an input means to select the selected camera image.
34. Apparatus as claimed in claim 31 wherein the receiving means is operable to receive an image selection signal for selecting one of said camera images, and to receive an image point selection signal defining co-ordinates of an image point in said selected camera image corresponding to the new model point.
35. Apparatus as claimed in claim 34 including calculating means for calculating in the three dimensional space co- ordinates of the centre of ii projection to correspond to the position of the image point in an image plane of thg camera.
36. Apparatus as claimed in any of claims 30 to 35 including means for determining whether a plurality of surface elements are intersected by the ray and, if so, determining the selected surface to be whichever of the intersected surface elements is closest to the new model point.
37. Apparatus as claimed in any of claims 30 to 36 wherein the surface elements comprise triangular facets and wherein each subset of the model points defining the generation of the selected surface element comprises three model points constituting apices of the triangular 15 facets.
38. A computer program comprising processor implementable instructions for carrying out a method as claimed in any of claims 20 to 28.
39. A storage medium storing processor implementable instructions for controlling a processor to carry out a method as claimed in any of claims 20 to 28.
40. An electrical signal carrying processor implementable instructions for controlling a processor to carry out a method as claimed in any of claims 20 to 28.
41. A method of operating an apparatus for generating model data representative of a three dimensional model of an object from input signals representative of a set of camera images of the object taken from a plurality of camera positions, the method comprising;
displaying a set of icons, each being associated with a respective one of the camera images of the object;
receiving a selection signal responsive to user actuation of an input means whereby the selection signal identifies a selected one of the icons;
determining a selected camera image from the set of camera images corresponding to the selected icon;
displaying the selected image;
determining position data representative of a selected camera position from which the selected image was taken;
generating in accordance with said model a model image representative of a view of the model from a viewpoint corresponding to the position data; and
displaying the model image for visual comparison with the selected image by the user.

42. A method as claimed in claim 41 including the step of generating the icons in response to receiving a mode selection input.
43. A method as claimed in any of claims 41 and 42 wherein the icons are generated as thumbnail images of the respective camera images.
45. A method as claimed in claim 44 wherein the icons are displayed in a linear array.
46. A method as claimed in any of claims 41 to 45 wherein the selected camera image and the model image are displayed in respective windows and including the step of providing relative movement of the windows in response to receiving window movement input signals.
47. A method as claimed in claim 46 wherein the icons are displayed in a further window and including the step of facilitating movement of the further window relative to the image windows in response to window movement input signals.
48. A method as claimed in any of claims 41 to 47 comprising generating the selection signal by operation of a pointing means for user dctuation in selecting one of the displayed icons.
48. A method as claimed in any of claims 41 to 47 comprising generating the selection signal by operation of a pointing means for user actuation in selecting one of the displayed icons.
50. Apparatus for generating model data representative!j of a three dimensional model of an object from input, signals representative of a set of camera images of the] object taken from a plurality of camera positions, the,! apparatus comprising; display means for displaying a set of Icons, eachl being associated with a respective one of the camera, images of the object; 20 means for receiving a selection signal responsive to,l user actuation of an input means whereby the.selection signal identifies a selected one of the icons means for determining a selected camera image from! the set of camera images corresponding to the selected:l icon whereby the display means is operable to display the,,', selected image; means for determining position data representative:I 81 of a selected camera positiorf from which the selected image was taken; means for generating in accordance with said model a model image representative of a view of the model from a viewpoint corresponding to the position data; and control means for controlling the display means to display the model image for visual comparison with the selected image by the user.
50. Apparatus for generating model data representative of a three dimensional model of an object from input signals representative of a set of camera images of the object taken from a plurality of camera positions, the apparatus comprising;
display means for displaying a set of icons, each being associated with a respective one of the camera images of the object;
means for receiving a selection signal responsive to user actuation of an input means whereby the selection signal identifies a selected one of the icons;
means for determining a selected camera image from the set of camera images corresponding to the selected icon whereby the display means is operable to display the selected image;
means for determining position data representative of a selected camera position from which the selected image was taken;
means for generating in accordance with said model a model image representative of a view of the model from a viewpoint corresponding to the position data; and
control means for controlling the display means to display the model image for visual comparison with the selected image by the user.
53. Apparatus as claimed in claim 52 wherein the control means is operable to control the display means to display the set of icons in an array and to display links between the icons such that each pair of icons corresponding to adjacent camera positions in a positional sequence of the camera positions is joined by a respective link.
54. Apparatus as claimed in claim 53 wherein the control means is operable to control the display means to 82 display the icons in a linear array.
55. Apparatus as claimed in any of claims 50 to 541 wherein control means is operable control the displayll 5 means to display the selected camera image and the modell image in respective windows and to provide relative,! movement of the windows in response to receiving windoWl movement input signals.
54. Apparatus as claimed in claim 53 wherein the control means is operable to control the display means to display the icons in a linear array.
55. Apparatus as claimed in any of claims 50 to 54 wherein the control means is operable to control the display means to display the selected camera image and the model image in respective windows and to provide relative movement of the windows in response to receiving window movement input signals.
56. Apparatus as claimed in claim 55 wherein the control means is operable to control the display means to display the icons in a further window to facilitate movement of the further window relative to the camera image window and model image window in response to window movement input signals.
59. A computer program comprising processor implementable instructions for carrying out a method as claimed in any of claims 41 to 49.
60. A storage medium storing processor implementable instructions for controlling a processor to carry out a method as claimed in any of claims 41 to 49.
61. An electrical signal carrying processor implementable instructions for controlling a processor to carry out a method as claimed in any of claims 41 to 49.
62. In a method of operating an apparatus for generating model data representative of a model in a three dimensional space from image data representative of a set of camera images of an object, an improvement comprising; the apparatus performing the steps of; displaying a model image based on an existing set of model data; displaying one of the camera images of the object for selection by a user of an additional feature to be represented by additional model data; receiving an image point selection signal responsive to user actuation of an input means and identifying co- 84 ordinates of an image point in'the camera image definincf,:i the selected additional feature; calculating a locus in the three dimensional spacel defining positions of possible model points corresponding.i' to the image point and consistent with the geometric"i, i i relationship between the object and a camera position! from which the displayed camera image was taken; displaying a position indicator in the model imagell at co-ordinates in the model image corresponding to one, of the possible model points on the locus; receiving positioning signals responsive to user,:
actuation of the input means and updating the co-11, ordinates of the position indicator such that movement of! the position indicator is constrained to follow a trajectory in the model image corresponding to the locus; receiving a model point selecting signal responsive11 to user actuation of the input means and determiningl selected co-ordinates of the position indicator to be the position indicator co-ordinates at the time of receiving the model point selecting signal; and determining co-ordinates of the additional model point in the three dimensional space corresponding to the selected co-ordinates of the position indicator.
63. In an apparatus for generating model data representative of a model in a three dimensional space f rom. image data representative of a set of camera images 1, i of an object; an improvement wherein the apparatus comprises; an interface comprising display means operable to display images to a user and input means responsive to user actuation; control means operable to control the display means to display a model image based on an existing set of model data and to display one of the camera images of the object for selection by a user of an additional feature to be represented by additional model data; receiving means for receiving an image point selection signal responsive to user actuation of the input means and identifying co-ordinates of an image point in the camera image defining the selected additional feature; calculating means for calculating a locus in the three dimensional space defining positions of possible model points corresponding to the image point and consistent with the geometric relationship between the object and a camera position from which the displayed camera image was taken; the control means being further operable to control the display means to display a position indicator in the model image at co-ordinates in the model image corresponding to one of the possible model points on the locus; the apparatus further comprising means for receiving 86 positioning signals responsiv( to user actuation of the input means and updating the co-ordinates of the position, indicator such that movement of the position indicator is, constrained to follow a trajectory in the model image corresponding to the locus; means for receiving a model point selecting signalil responsive to user actuation of the input means and, determining selected co-ordinates of the position 1 i i 1 indicator to be the position indicator co-ordinates at;' the time of receiving the model point selecting signal;'!, and means for determining co-ordinates of the additional j model point in the three dimensional space corresponding i, to the selected co-ordinates of the position indicator. 15 64. In an apparatus for generating model data]! 
representative of a model in a three dimensional space1 from image data representative of a set of camera images of an object, a method wherein; 20 the apparatus performs the steps of; displaying a model image based on an existing set of model data; displaying one of the camera images of the object for selection by a user of an additional feature to be represented by additional model data; receiving an image point selection signal responsive to user actuation of an input means and identifying co- 87 ordinates of an image point in'the camera image defining the selected additional feature; calculating a locus in the three dimensional space defining positions of possible model points corresponding to the image point and consistent with the geometric relationship between the object and a camera position from which the displayed camera image was taken; displaying a position indicator in the model image at co-ordinates in the model image corresponding to one of the possible model points on the locus; receiving positioning signals responsive to user actuation of the input means and updating the co ordinates of the position indicator such that movement of the position indicator is constrained to follow a trajectory in the model image corresponding to the locus; receiving a model point selecting signal responsive to user actuation of the input means and determining selected co-ordinates of the position indicator to be the position indicator co-ordinates at the time of receiving the model point selecting signal; and determining co-ordinates of the additional model point in the three dimensional space corresponding to the selected co-ordinates of the position indicator.
65. In a method of operating an apparatus for generating model data defining a model in a three dimensional space, the model data comprising co-ordinates defining model 88 points and surface elements generated with reference tol the model points; an improvement wherein the method;! comprises editing an existing set of model data by the:
steps of; adding a new model point to the existing set of'!, model data; projecting the new model point onto the model and:l identifying a selected one of the surface elements onta which the new model point is projected; identifying a subset of the model points which'! define the generation of the selected surface element; adding the new model point to the subset to form an, edited subset of model points; and generating one or more edited surface elements from:
the edited subset of model points to replace the selected' surface element.
66. In an apparatus for generating model data defining1 a model in a three dimensional space, the model data comprising co-ordinates defining model points and surfaceil elements generated with reference to the model points, an 1 improvement wherein the apparatus is operable to edit an existing set of model data and comprises; means for adding a new model point to the existing1 set of model data; means for projecting the new model point onto the model and identifying a selected one of the surface 11 89 elements onto which the new m6del point is projected; means for identifying a subset of the model points which def ine the generation of the selected surface element; means for adding the new model point to the subset to form an edited subset of model points; and means f or generating one or more edited surf ace elements from the edited subset of model points to replace the selected surface element.
67. In an apparatus for generating model data defining a model in a three dimensional space, the model data comprising co-ordinates defining model points and surface elements generated with reference to the model points; a method comprising editing an existing set of model data by the steps of; adding a new model point to the existing set of model data; projecting thenew model point onto the model and identifying a selected one of the surface elements onto which the new model point is projected; identifying a subset of the model points which define the generation of the selected surface element; adding the new model point to the subset to form an edited subset of model points; and generating one or more edited surface elements from the edited subset of model points to replace the selected surface element.
69. In a method of operating an apparatus for generating model data representative of a three dimensional model of an object from input signals representative of a set camera images of the object taken from a plurality of:,1 camera positions, an improvement wherein the methodli comprises; displaying a set of icons, each being associated with a respective one of the camera images of the object; receiving a selection signal responsive to user actuation of an input means whereby the selection signal identifies a selected one of the icons; determining a selected camera image from the set of! is camera images corresponding to the selected icon; displaying the selected image; determining position data representative of a selected camera position from which the selected image was taken; generating in accordance with said model a model!11 image representative of a view of the model from a viewpoint corresponding to the position data; and displaying the model image for visual comparison with the selected image by the user.
70. In an apparatus for generating model data representative of a three dimensional model of an object 91 from input signals representtive of a set of camera images of the object taken from a plurality of camera positions, an improvement wherein the apparatus comprises; 5 display means for displaying a set of icons, each being associated with a respective one of the camera images of the object; means for receiving a selection signal responsive to user actuation of an input means whereby the selection signal identifies a selected one of the icons; means for determining a selected camera image from the set of camera images corresponding to the selected icon whereby the display means is operable to display the selected image; means for determining position data representative of a selected camera position from which the selected image was taken; means for generating in accordance with said model a model image representative of a view of the model from a viewpoint corresponding to the position data; and control means for controlling the display means to display the model image for visual comparison with the selected image by the user.
71. In an apparatus for generating model data representative of a three dimensional model of an object f rom. input signals representative of a set of camera 92 images of the object taken 6om a plurality of camer& positions, a method comprising; displaying a set of icons, each being associated with a respective one of the camera images of the object;i, receiving a selection signal responsive to user,l actuation of an input means whereby the selection signal-, identifies a selected one of the icons; determining a selected camera image from the set ofl:
camera images corresponding to the selected icon; displaying the selected image; determining position data representative of al, selected camera position from which the selected imagei was taken; generating in accordance with said model a model image representative of a view of the model from a;l, viewpoint corresponding to the position data; and displaying the model image for visual comparison'I with the selected image by the user.
GB0001479A 2000-01-20 2000-01-21 Method and apparatus for generating model data from camera images Expired - Fee Related GB2358540B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB0001479A GB2358540B (en) 2000-01-21 2000-01-21 Method and apparatus for generating model data from camera images
US09/718,342 US6980690B1 (en) 2000-01-20 2000-11-24 Image processing apparatus
US10/793,850 US7508977B2 (en) 2000-01-20 2004-03-08 Image processing apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0001479A GB2358540B (en) 2000-01-21 2000-01-21 Method and apparatus for generating model data from camera images

Publications (3)

Publication Number Publication Date
GB0001479D0 GB0001479D0 (en) 2000-03-15
GB2358540A true GB2358540A (en) 2001-07-25
GB2358540B GB2358540B (en) 2004-03-31

Family

ID=9884159

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0001479A Expired - Fee Related GB2358540B (en) 2000-01-20 2000-01-21 Method and apparatus for generating model data from camera images

Country Status (1)

Country Link
GB (1) GB2358540B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004016077A1 (en) * 2004-03-30 2005-10-27 Daimlerchrysler Ag Process to position components on an automated test stand by identification of salient features
EP1607716A2 (en) * 2004-06-18 2005-12-21 Topcon Corporation Model forming apparatus and method, and photographing apparatus and method
US7657055B2 (en) * 2003-07-31 2010-02-02 Canon Kabushiki Kaisha Image processing method and image generating apparatus

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111766575B (en) * 2020-06-08 2023-04-21 桂林电子科技大学 Self-focusing sparse imaging method of through-wall radar and computer equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5710875A (en) * 1994-09-09 1998-01-20 Fujitsu Limited Method and apparatus for processing 3-D multiple view images formed of a group of images obtained by viewing a 3-D object from a plurality of positions
GB2328355A (en) * 1997-08-05 1999-02-17 Canon Kk Edge detection in image processing

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5819016A (en) * 1993-10-05 1998-10-06 Kabushiki Kaisha Toshiba Apparatus for modeling three dimensional information
JPH0981778A (en) * 1995-09-13 1997-03-28 Toshiba Corp Device and method for modeling
JPH1040421A (en) * 1996-07-18 1998-02-13 Mitsubishi Electric Corp Method and device for forming three-dimensional shape
US5945996A (en) * 1996-10-16 1999-08-31 Real-Time Geometry Corporation System and method for rapidly generating an optimal mesh model of a 3D object or surface
IL120867A0 (en) * 1997-05-20 1997-09-30 Cadent Ltd Computer user interface for orthodontic use

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5710875A (en) * 1994-09-09 1998-01-20 Fujitsu Limited Method and apparatus for processing 3-D multiple view images formed of a group of images obtained by viewing a 3-D object from a plurality of positions
GB2328355A (en) * 1997-08-05 1999-02-17 Canon Kk Edge detection in image processing

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7657055B2 (en) * 2003-07-31 2010-02-02 Canon Kabushiki Kaisha Image processing method and image generating apparatus
DE102004016077A1 (en) * 2004-03-30 2005-10-27 Daimlerchrysler Ag Process to position components on an automated test stand by identification of salient features
DE102004016077B4 (en) * 2004-03-30 2008-05-08 Daimler Ag Positioning a component of an optical system
EP1607716A2 (en) * 2004-06-18 2005-12-21 Topcon Corporation Model forming apparatus and method, and photographing apparatus and method
EP1607716A3 (en) * 2004-06-18 2012-06-20 Topcon Corporation Model forming apparatus and method, and photographing apparatus and method
US8559702B2 (en) 2004-06-18 2013-10-15 Topcon Corporation Model forming apparatus, model forming method, photographing apparatus and photographing method

Also Published As

Publication number Publication date
GB2358540B (en) 2004-03-31
GB0001479D0 (en) 2000-03-15

Similar Documents

Publication Publication Date Title
US7508977B2 (en) Image processing apparatus
US6990228B1 (en) Image processing apparatus
US6970591B1 (en) Image processing apparatus
Vázquez et al. Viewpoint selection using viewpoint entropy.
US9117310B2 (en) Virtual camera system
US8042056B2 (en) Browsers for large geometric data visualization
US8254667B2 (en) Method, medium, and system implementing 3D model generation based on 2D photographic images
US7474803B2 (en) System and method of three-dimensional image capture and modeling
CN103971399B (en) street view image transition method and device
Chang et al. Image processing of tracer particle motions as applied to mixing and turbulent flow—I. The technique
US20090009513A1 (en) Method and system for generating a 3d model
US20020164067A1 (en) Nearest neighbor edge selection from feature tracking
US20040155877A1 (en) Image processing apparatus
US20020085001A1 (en) Image processing apparatus
EP1445736B1 (en) Method and system for providing a volumetric representation of a three-dimensional object
EP1109131A2 (en) Image processing apparatus
US7620234B2 (en) Image processing apparatus and method for generating a three-dimensional model of an object from a collection of images of the object recorded at different viewpoints and segmented using semi-automatic segmentation techniques
JP2001067463A (en) Device and method for generating facial picture from new viewpoint based on plural facial pictures different in viewpoint, its application device and recording medium
CN113256818A (en) Measurable fine-grained occlusion removal visualization method based on discontinuity detection
GB2358540A (en) Selecting a feature in a camera image to be added to a model image
AU2013204653A1 (en) Method and system for generating a 3d model
US20220375152A1 (en) Method for Efficiently Computing and Specifying Level Sets for Use in Computer Simulations, Computer Graphics and Other Purposes
GB2365243A (en) Creating a 3D model from a series of images
GB2362793A (en) Image processing apparatus
GB2359686A (en) Image magnifying apparatus

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20160121