GB2364494A - Predicting changes in characteristics of an object - Google Patents



Publication number
GB2364494A
GB2364494A (application GB0016151A)
Authority
GB
United Kingdom
Prior art keywords
model
condition
shape
data
operative
Prior art date
Legal status: Withdrawn (assumed; not a legal conclusion)
Application number
GB0016151A
Other versions
GB0016151D0 (en)
Inventor
Guy Fowler
Jane Haslam
Ivan Meir
Timothy Parr
Current Assignee
Tricorder Technology PLC
Original Assignee
Tricorder Technology PLC
Priority date
Filing date
Publication date
Application filed by Tricorder Technology PLC filed Critical Tricorder Technology PLC
Priority to GB0016151A priority Critical patent/GB2364494A/en
Publication of GB0016151D0 publication Critical patent/GB0016151D0/en
Priority to PCT/GB2001/002828 priority patent/WO2002003304A2/en
Priority to AU2001266169A priority patent/AU2001266169A1/en
Publication of GB2364494A publication Critical patent/GB2364494A/en


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders

Landscapes

  • Medical Informatics (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Image Processing (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

A medical analysis tool, which can be run on a personal computer, provides a statistical model configuration that allows a prediction to be made of the soft tissue shape of a patient when changes are made to the shape of the underlying hard tissue, e.g. by surgery. The model configuration includes a first parametric model (10) of pre-operative hard tissue shape characteristics of the patient, a second parametric model (11) of pre-operative soft tissue shape characteristics of the patient, a third parametric model (12) of post-operative hard tissue shape characteristics of the patient, a fourth parametric model (13) of post-operative soft tissue shape characteristics of the patient, and a predictive model (14) that characterises a statistical correlation between the models. A surgical plan can be input by a surgeon to change the hard tissue configuration for the patient, and the model configuration predicts corresponding changes in shape of the soft tissue, which can be displayed by the computer for review.

Description

Predicting changes in characteristics of an object

This invention relates to predicting changes in characteristics of an object and has particular but not exclusive application to procedures to be performed on living objects, especially the human body, such as maxillo-facial and craniofacial surgery, for example bimaxillary osteotomy, which involves breaking, moving and resetting both the maxilla and mandible to improve facial function and aesthetics.
In maxillo-facial and craniofacial surgery, the surgeon's goal is not only to improve facial functionality, but also to produce an aesthetically pleasing face. Therefore, the post-operative soft-tissue appearance is an important factor in patient outcome. An accurate simulation of soft-tissue changes during surgery would give a number of benefits, namely:
• a tool which enables the surgeon to improve the surgical outcome by simulating different possible surgeries and choosing the one with the best outcome,
• a patient consent management tool which allows the surgeon to sit down with the patient and set their expectations about possible surgical outcomes,
• a training tool that allows trainees to simulate surgeries with a wide variety of pathologies, and
• a clinical audit tool to allow the results produced by surgery to be directly compared to the original treatment plan.
Traditional methods for maxillo-facial simulation and planning are based upon simple empirical studies of the relationship between bone and tissue movements in 2D lateral cephalograms, as described by Athanasiou A.E. (ed.), "Orthodontic Cephalometry", Mosby-Wolfe Verlag, London, 1995. These methods form the basis for a number of 2D maxillo-facial surgery simulation products, e.g. QuickCeph produced by QuickCeph Systems of 12925 El Camino Real, Ste. J23, San Diego, CA 92130, USA, and also OTP produced by Orthovision Inc. of 3701 Shoreline Dr, Suite 202B, Wayzata, MN 55391, USA. However, there are two significant disadvantages with such methods. Firstly, the simulation is performed on a 2D lateral view of the patient rather than in 3D, and hence the surgeon or patient cannot visualise the post-operative appearance from a range of 3D viewpoints.
Secondly, and more importantly, the simplistic nature of the empirical models leads to inaccurate simulation results.
More recently, a number of techniques have been developed for performing fully 3D soft-tissue modelling for maxillo-facial surgery simulation. The most promising of these techniques are based upon physical modelling of facial tissues, taking into account individual patient anatomy. Two main approaches can be found. The first involves mass-spring models - fast, simple modelling techniques based upon simulating the linear elastic properties of tissue as a series of masses attached to each other by springs. Examples are described in Keeve E., Girod S., Kikinis R., Girod B., "Deformable Modelling of Facial Tissue for Craniofacial Surgery Simulation", Computer Aided Surgery, Vol. 3, No. 5, 1998, and Bro-Nielsen M., Cotin S., "Real-time Volumetric Deformable Models for Surgery Simulation using Finite Elements and Condensation", Proc. of Eurographics, Vol. 5, pp57-66, 1996.
The second main prior approach involves finite element models - slower modelling techniques which allow simulation of non-linear, anisotropic and visco-elastic tissue properties. Examples are given in Herrimy D., Harris G.F., Ganaparthy V., "Finite Element Analysis of Craniofacial Skeleton Using Three Dimensional Imaging as the Substrate", in Caronni E.F. (ed.), Craniofacial Surgery, Proc. of the 2nd International Congress of the Intern. Society of Cranio-Maxillo-Facial Surgery, Florence, Italy, 1991, and Koch R., Gross M.H., von Büren D.F., Fankhauser G., Parish Y., Carls F.R., "Simulating Facial Surgery Using Finite Element Models", Proc. of SIGGRAPH'96, New Orleans, Louisiana, ACM Computer Graphics, Vol. 30, 1996.
Although these techniques represent interesting advances in this area, a near real-time, clinically validated technique has not yet emerged. Also, a significant practical disadvantage of such techniques is that they require knowledge of the underlying 3D bony structure to be available from X-ray CT in order to construct a patient-specific structural model. In the UK and some other countries, X-ray CT scans are not acquired for the majority of maxillo-facial cases due to the high associated radiation dose. Treatment planning is typically performed using only a lateral cephalogram.
The present invention embodies a new approach based upon statistical rather than physical modelling techniques. The invention addresses the disadvantages of current modelling techniques, and when applied to maxillo-facial and craniofacial surgery, can produce post-operative predictions in near real-time, from conventional pre-operative lateral cephalograms and pre-operative 3D facial surface data acquired using, for example, the Tricorder DSP Series 3D imaging system manufactured by Tricorder plc, of 6 The Long Room, Coppermill Lock, Summerhouse Lane, Harefield, Middlesex, UB9 6JA, United Kingdom. The invention can also provide significant advantages when used in other situations as will become evident hereinafter.
In order to explain the background to the invention, a review of prior statistical modelling techniques will now be given.
2D Point Distribution Models and Active Shape Models

A generic 2D statistical shape modelling technique has been developed, known as a 2D Point Distribution Model (or PDM), based upon objects represented as a set of labelled 2D points. Reference is directed to Cootes T.F., Taylor C.J., Cooper D.H., Graham J., "Training Models of Shape from Sets of Examples", Proc. BMVC 1992. The model consists of the mean positions of these points and the main modes of variation, which describe the ways in which the points move about the mean.
A PDM is built by performing a statistical analysis of a number of shape training examples. Each example represents an observed instance of the class of shape, and is described by a set of 2D manually labelled so-called landmark points that capture the important features of the object. Each training example is thus described by a vector x of length 2n of its n landmark point positions: x = (x0, y0; x1, y1; ...; xn-1, yn-1).
The 2D PDM is built from the training data as follows:
1. Align the training examples using Procrustes Analysis as described by Cootes et al. supra, scaling, rotating and translating the examples so that they correspond as closely as possible to the first training example, and

2. perform Principal Component Analysis of the 2n×2n covariance matrix S of the aligned training data:
S = (1/p) Σ_{i=1}^{p} (x'_i − x̄)(x'_i − x̄)^T    (1)

(where p is the number of training examples, x'_i is the ith aligned training example, and x̄ is the mean of the aligned training examples).
The modes of variation of the 2D PDM are described by the unit eigenvectors of S, p_i (i = 1 to 2n) such that:

S p_i = λ_i p_i    (2)

(where λ_i is the ith eigenvalue of S).
Any shape in the aligned training set can then be described exactly by the equation:
x' = x̄ + P b    (3)

(where P is the 2n×2n matrix of eigenvectors (p_1, p_2, ..., p_2n) and b is a vector of shape parameters, one weight per eigenvector). Generally, P is truncated to use only the t most significant eigenvectors, such that some fraction (typically 95%) of the training set variance is expressed. New examples of the class of objects modelled can then be generated by varying the shape parameters b.
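The build procedure of equations (1)-(3) can be sketched as follows. This is an illustrative sketch only, not code from the patent: it assumes the training shapes are already Procrustes-aligned, uses NumPy (whose covariance routine normalises by p − 1 rather than p, which rescales the eigenvalues but leaves the eigenvectors unchanged), and the function names are invented for the example.

```python
import numpy as np

def build_pdm(shapes, variance_fraction=0.95):
    """Build a Point Distribution Model from aligned training shapes.

    shapes: (p, 2n) array, one aligned training example per row.
    Returns the mean shape, the truncated eigenvector matrix P and the
    eigenvalues of the retained modes (equations (1) and (2)).
    """
    x_bar = shapes.mean(axis=0)
    # Covariance matrix S of equation (1); np.cov normalises by p - 1
    # rather than p, which does not change the eigenvectors.
    S = np.cov(shapes, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(S)          # eigh: S is symmetric
    order = np.argsort(eigvals)[::-1]             # largest modes first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Truncate to the t most significant modes expressing e.g. 95% of variance
    cum = np.cumsum(eigvals) / eigvals.sum()
    t = int(np.searchsorted(cum, variance_fraction)) + 1
    return x_bar, eigvecs[:, :t], eigvals[:t]

def generate_shape(x_bar, P, b):
    """Equation (3): a new shape instance from shape parameters b."""
    return x_bar + P @ b
```

Setting b = 0 reproduces the mean shape; varying each element of b within a few standard deviations (√λ_i) generates plausible new members of the modelled class.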
In addition to the basic shape information, it is also possible to model the local grey-level appearance at each of the labelled model points (see Cootes T.F., Taylor C.J., Lanitis A., Cooper D.H., Graham J., "Building and Using Flexible Models Incorporating Grey-Level Information", Proc. ICCV, Berlin, May 1993, pp242-246, for further details). Briefly, grey-level training data is extracted along profiles perpendicular to the shape example boundary at each landmark point, and landmark grey-level models are built using techniques analogous to those used to model shape. Each grey-level model consists of a mean grey-level pattern, and a number of modes of variation about the mean. A grey-level model instance g_i for the ith landmark point can be expressed as:
g_i = ḡ_i + P_gi b_gi    (4)

(where g_i is a vector of grey-level profile data, ḡ_i is the mean grey-level profile vector averaged over the training data for the ith landmark point, P_gi is a matrix of the most significant eigenvectors of the grey-level training data covariance matrix for the ith landmark point, and b_gi is a set of weights, one for each eigenvector).
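The grey-level profile data underlying equation (4) is obtained by sampling the image along the boundary normal at each landmark. A minimal sketch of that sampling step, assuming nearest-neighbour interpolation and a NumPy image array (the function name and conventions are illustrative, not from the patent):

```python
import numpy as np

def sample_profile(image, point, normal, k):
    """Sample 2k+1 grey values along the unit normal through a landmark.

    image  : 2D array indexed as image[row, col], i.e. image[y, x]
    point  : (x, y) landmark position
    normal : (nx, ny) unit normal to the shape boundary at the landmark
    Nearest-neighbour sampling, clamped at the image border.
    """
    point = np.asarray(point, dtype=float)
    normal = np.asarray(normal, dtype=float)
    offsets = np.arange(-k, k + 1)[:, None] * normal[None, :]
    coords = np.rint(point[None, :] + offsets).astype(int)
    xs = np.clip(coords[:, 0], 0, image.shape[1] - 1)
    ys = np.clip(coords[:, 1], 0, image.shape[0] - 1)
    return image[ys, xs].astype(float)
```

Profiles sampled this way over all training images form the rows of the grey-level training matrix for one landmark; PCA of that matrix then yields ḡ_i and P_gi of equation (4), exactly as in the shape case.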
Training local grey-level models gives a set of specific models of the expected grey-level evidence at each point in the 2D PDM. 2D PDMs plus grey-level models can then be used in image search applications; that is, given a PDM of a particular class of 2D shape, one can locate an instance of that class of shape in a new image. During image search, the grey-level models can be used to compare expected and observed grey-level image evidence, producing a measure of grey-level fitness at each model point that is used to drive the image search algorithm. Image search is achieved using an algorithm known as an Active Shape Model (or ASM), described in detail in Cootes T.F., Taylor C.J., "Active Shape Models - 'Smart Snakes'", Proc. BMVC, Leeds 1992, Springer Verlag, pp266-275. The general approach used is as follows:
1. An instance of a 2D PDM is initialised at some position in the image, typically using the mean shape parameters.

2. A region of the image around each model point, along the perpendicular to the boundary at that point, is examined, and the best match between the observed and expected image data in that region is found; this gives a suggested local displacement at each model point.

3. From the suggested local displacements, adjustments to the model pose and shape parameters are calculated which best satisfy the suggested displacements. This is achieved using an iterative algorithm, and enforces the constraint that each shape parameter is within 3 standard deviations of the mean model shape.

4. Steps 2 and 3 are iterated until the algorithm converges.
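Step 3 above, the projection of the suggested landmark positions into model space with the 3 standard deviation limit, can be sketched as follows. This is an illustrative sketch that omits the pose (scale/rotation/translation) adjustment and assumes the shapes are already in the model frame; the function name is invented for the example.

```python
import numpy as np

def asm_update(x_bar, P, lam, suggested):
    """One ASM shape-update (step 3 above).

    x_bar     : mean shape vector (2n,)
    P         : orthonormal eigenvector matrix (2n, t)
    lam       : eigenvalues (variances) of the t retained modes
    suggested : shape vector built from the suggested local displacements
    """
    b = P.T @ (suggested - x_bar)      # least-squares shape parameters
    limit = 3.0 * np.sqrt(lam)
    b = np.clip(b, -limit, limit)      # keep each parameter within 3 s.d.
    return x_bar + P @ b               # updated, plausible model instance
```

Because the displacements are projected back through P, the updated shape always remains a legal instance of the model, which is what keeps the search from locking onto spurious image evidence.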
ASMs can also be implemented in a multi-resolution form that speeds up the algorithm and improves its robustness. Reference is directed to Cootes T.F., Taylor C.J., Lanitis A., "Active Shape Models: Evaluation of a Multi-Resolution Method for Improving Image Search", Proc. BMVC 1994, pp327-336. 2D PDMs and ASMs have been applied to a range of shape-modelling and image analysis applications, including face modelling and location as described in Lanitis A., Taylor C.J., et al., "Automatic Identification of Human Faces Using Flexible Appearance Models", Proc. 5th BMVC, 1994, pp65-74. Other applications include locating heart ventricles in echocardiograms, segmenting magnetic resonance (MR) images of the abdomen, and locating anatomical landmarks in lateral cephalograms.
3D Point Distribution Models and Active Shape Models

PDMs and ASMs have been extended from 2D to 3D. Reference is directed to Hill A., Thornham A. and Taylor C.J., "Model-Based Interpretation of 3D Medical Images", 4th British Machine Vision Conference, Guildford, England, pp339-348, Sept. 1993. As in 2D, each object is described as a labelled set of n points; the only difference is that z coordinates are now included. Thus, each of the p training examples is expressed as a vector x of length 3n, where x = (x_1x, x_1y, x_1z, ..., x_nx, x_ny, x_nz) and (x_ix, x_iy, x_iz) gives the co-ordinates of the ith landmark point in the example. Modelling and image search methods are analogous to those in 2D. The major difference between 2D and 3D modelling techniques is in the method used to mark up the training data. This has been approached in a number of ways:
Hill et al. supra build their models from volumetric MRI data of the head by splitting the 3D space into a number of slices, and marking the landmark points on contours in each slice. This method has a number of disadvantages:
1. A large number of landmark points (~500-1000) must be marked by hand for each example.
2. The examples must be aligned before contour extraction so that the contours approximately correspond between different examples.
3. The points marked on the contours are very unlikely to be 'true' 3D landmark points e.g. points of high curvature in 3D.
4. The method has problems dealing with objects of complex topology.
Another method is proposed in Heap T., Hogg D., "Towards 3D Hand Tracking Using a Deformable Model", Proc. 2nd International Conf. on Automatic Face and Gesture Recognition, 1996, pp140-145. This involves a semi-automatic method for building 3D hand models from MRI data in which a physically based Simplex Mesh model is constructed on the first example. Subsequent examples require only a few (~5-10) guiding points to pull the Simplex Mesh to the new example image data.
This method appears to be robust and, once the initial simplex mesh has been set up, simple to use. However, it is not clear that this method generalises from the class of objects modelled (3D hands) to general objects where key 3D landmark points are not so easily identifiable.
Another approach is described in Brett A.D., Taylor C.J., "A Method of Automated Landmark Generation for Automated 3D PDM Construction", Proc. BMVC 1998, which provides a fully automatic method for 3D PDM construction given a set of 3D triangulated surfaces. Correspondences are determined between highly decimated versions of the surfaces and used to construct a binary tree of merged shapes, with the mean shape at the root of the tree. Once the binary tree has been constructed, a set of landmark points identified on the mean shape can then be propagated out to the leaf examples. Although this method is fully automatic, it is not robust enough for routine use.
Predictive Models

A further extension of the statistical modelling techniques is to build a predictive model. This is done by building a combined statistical model which models the correlation between one class of measurements A and another class of measurements B. A particular measurement of A can then be used to predict the corresponding measurement of B. In one predictive approach, devised by Haslam J., "Model-based Methods for Medical Image Correction and Interpretation", PhD thesis, August 1996, Manchester University, a model is built which links a 3D PDM of an object to a matrix of Scatter Correction Factors associated with the object, and subsequently uses an instance of 3D shape to infer the corresponding Scatter Correction Factors. Another predictive approach is described by Bowden R.,
Mitchell T.A., Sarhadi M., "Reconstructing 3D Pose and Motion from a Single Camera View", Proc. BMVC 1998. Bowden et al. build a model which links the 2D outline of a human figure to a 3D 'stick-man' representation of the same figure, and subsequently use an instance of the 2D outline to infer the corresponding 3D representation. In both of these predictive approaches, the general methodology is as follows:

1. Assume that one class of measurements A is correlated with another class of measurements B, and that the correlation is strong enough for measurements A to be used to predict measurements B.

2. Build a statistical model by Principal Component Analysis from a set of combined training examples. Each combined example contains an example of measurements A (vector x_A of length a) and an example of measurements B (vector x_B of length b). The ith training example so obtained is a vector x_ci which concatenates a normalised version of x_Ai and a normalised version of x_Bi:

x_ci = ( (x_Ai − x̄_A)/σ_A ; (x_Bi − x̄_B)/σ_B )    (5)

The normalisation factors σ_A and σ_B are given by the total training set variance of the measurement vectors x_A and x_B respectively. Thus the combined vector x_c is normalised such that the sub-measurements x_A and x_B give an equal contribution (in terms of variance) to the combined vector.
An instance of the combined model x_c may then be described as:
x_c = x̄_c + P_c b_c    (6)

(where x̄_c is the mean combined model vector, P_c is the matrix of eigenvectors of the combined model training data covariance matrix, and b_c is a vector of combined model weights.) The model is truncated to use a or fewer eigenvectors in order that it may be used to make predictions.

3. Given a new set of measurements x_A of A, the combined model can be used to predict the corresponding measurements x_B of B by solving a weighted linear least squares problem of the form:
(P_c^T W)(x_c − x̄_c) = (P_c^T W P_c) b_c    (7)

(where W is a diagonal matrix of weights with diagonal elements set to 1 for the first a elements, and 0 for the final b elements.) Equation (7) is solved for the unknown vector of combined model weights b_c
using standard linear algebra techniques.

4. Once b_c has been estimated, x_c can then be calculated using equation (6), and the estimate of x_B is given by the last b elements of vector x_c multiplied by the normalisation factor σ_B.

The approach followed by Bowden et al. shows that an instance of the "stick man" can be used to predict a corresponding instance of the 3D configuration of a corresponding human torso, but the representation is not sufficiently accurate for use in practical situations such as medical procedures where high precision and accuracy are required.
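The prediction steps 3 and 4 of this methodology can be sketched as follows. This is an illustrative reading of equations (6) and (7), not code from the patent: the σ_A/σ_B normalisation is omitted for brevity, P_c is taken directly from an SVD of the centred combined training data, and all names are invented for the example.

```python
import numpy as np

def predict_b(x_a, x_bar_c, P_c, a):
    """Predict measurements B from measurements A using a combined model.

    x_a     : new measurement vector of class A (length a)
    x_bar_c : mean combined vector (length a + b)
    P_c     : matrix of retained combined-model eigenvectors, (a + b, t)
    Solves equation (7) for b_c, with W selecting the first a (known)
    elements, then reconstructs x_c via equation (6) and returns its
    last b elements.
    """
    W = np.zeros(P_c.shape[0])
    W[:a] = 1.0                          # diagonal of W: 1 for A, 0 for B
    PtW = P_c.T * W                      # P_c^T W (W is diagonal)
    x_c = x_bar_c.copy()
    x_c[:a] = x_a                        # unknown B elements are masked by W
    b_c = np.linalg.solve(PtW @ P_c, PtW @ (x_c - x_bar_c))   # equation (7)
    return (x_bar_c + P_c @ b_c)[a:]    # equation (6), B part only
```

With a combined model trained on examples where x_B is an exact linear function of x_A, this recovers x_B exactly; in practice the prediction quality depends on how strongly the retained eigenvectors capture the A-B correlation.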
The invention provides an improved predictive technique which involves planning changes for one set of variables for an object and predicting corresponding changes in another set of variables for the object.
In one aspect the invention provides a method of predicting changes for an object with first and second characteristics that are distinct from, but statistically correlated with, one another, comprising: providing a statistical model configuration of at least one mode of variation of a first set of variables relating to the first characteristic of the object, and at least one mode of variation of a second set of variables relating to the second characteristic of the object; planning a change to the first set of variables for the object; and using the model configuration to predict a corresponding change to the second set of variables for the object from data corresponding to the planned change to the first set.
The statistical model configuration may include a first parametric model of the first characteristic of the object, a second parametric model of the second characteristic of the object and a predictive model that characterises a statistical correlation between the models, and the method involves fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, planning a change to the condition for the object so as to provide parameterised data for the first characteristic of the object in a second, different condition, and utilising the parameterised data and the predictive model to provide parameterised data corresponding to a prediction of the change of the second characteristic of the object in the second condition.
In more detail, the statistical model configuration may include a first parametric model of the first characteristic of the object in the first object condition, a second parametric model of the second characteristic of the object in the first object condition, a third parametric model of the first characteristic of the object in the second object condition, and a fourth parametric model of the second characteristic of the object in the second object condition, and the predictive model characterises a statistical correlation between the models, with the method involving: fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition; planning the second condition for the object using the third model to provide parameterised data for the first characteristic of the object in the second condition; and utilising the parameterised data and the predictive model to provide parameterised data for the fourth model to predict the second characteristics of the object in the second condition.
The invention has particular application to predicting the outcome of medical procedures and may be carried out to predict the outcome of a medical operative procedure wherein the object comprises a patient, the first shape characteristic corresponds to the shape of the underlying hard tissue structure of the patient and the second shape characteristic corresponds to the shape of a soft tissue structure that covers the hard tissue structure.
Data may be acquired from a pre-operative lateral cephalogram concerning the shape of the underlying hard tissue structure of the patient, and data for the shape of the soft tissue structure may be acquired from a pre-operative 3D scan of the patient.
The invention also includes a computer program to be run on a computer to perform the aforesaid method and data processing apparatus configured to perform the method.
In another aspect the invention provides a medical analysis tool comprising a processor operable to provide a statistical model configuration of at least one mode of variation of a first set of variables relating to shape characteristics of a relatively hard tissue part of a living body, and at least one mode of variation of a second set of variables relating to shape characteristics of a relatively soft tissue part of the body that overlies the relatively hard tissue part, and an input operable to plan a change to the first set of variables for a patient, such that the processor utilises the model configuration to predict corresponding changes to the second set of variables for the patient, whereby to predict changes in shape of the soft tissue part that correspond to changes planned for the hard tissue part.
The tool may include a model fitting system operable to fit the first and second models to the corresponding pre-operative hard and soft tissue shape characteristics of a patient to provide parameterised shape data for the pre-operative hard and soft tissue shape characteristics, and a planning input system operable to define a post-operative hard tissue configuration for the patient using the third model to provide parameterised shape data for the post-operative hard tissue configuration.
The processor may be operable to utilise the parameterised shape data and the predictive model to provide parameterised data for the fourth model to predict post-operative soft tissue shape characteristics for the patient corresponding to the planned post-operative hard tissue configuration.
The statistical model configuration may include at least one point distribution model.
A display device may be configured to provide a visual display of the predicted post-operative soft tissue configuration and at least one of the pre-operative soft and hard tissue configurations and the planned post-operative hard tissue configuration, so that the outcome of the planned procedure can be reviewed and shown to the patient if desired.
In order that the invention may be more fully understood, an embodiment thereof will now be described with reference to the accompanying drawings, in which:
Figure 1 is a schematic illustration of a hardware configuration for carrying out a predictive method according to the invention for predicting the outcome of a bimaxillary osteotomy,

Figure 2 illustrates the relationship between process components of a model used in predicting the outcome of the surgery,

Figure 3 is a lateral cephalogram of a patient's head with landmark points shown marked on it,

Figure 4 illustrates a camera arrangement for capturing 3D data,

Figure 5 is an example of a 2D rendering of a 3D image captured by the camera arrangement of Fig. 4 with landmark points thereon,

Figure 6 is a flow chart of a process for training the models,

Figure 7 is a flow chart of a process for predicting the outcome of a bimaxillary osteotomy using the trained models,

Figure 8a illustrates a display of a 2D lateral cephalogram of the bony tissue of a patient before surgery is carried out,

Figure 8b illustrates a display of a proposed surgical treatment plan for the patient,

Figure 9a illustrates a display of a 3D model instance for the soft tissue shape of the head of the patient before surgery is carried out, and

Figure 9b illustrates a display of a 3D predicted model of the soft tissue shape of the head of the patient after surgery is carried out according to the proposed treatment plan shown in Figure 8b.
In the example of the invention described hereinafter, 2D and 3D shape-modelling techniques are used to build a statistical model of the relationship between hard and soft tissue during maxillo-facial surgery. This model can then be used to predict 3D soft-tissue changes that occur as a result of maxillo-facial surgery. For example, a surgeon may propose to break and move a patient's jawbone to improve facial function and aesthetics, and the model provides a prediction of the resulting 3D shape of the head produced by the proposed surgery. The method can be split into two general stages:
Model-Building - this involves building a statistical model which expresses the relationship between hard tissue and soft tissue, for both pre- and post-operative maxillo-facial patient data.
Soft-Tissue Prediction - given pre-operative data for an individual patient, the statistical model is used to predict the post-operative soft-tissue appearance for the patient, given the pre-operative data plus knowledge of the surgeon's treatment plan.
These two stages will now be discussed individually in detail.
Model building

A number of statistical models are constructed using the hardware configuration shown in Figure 1. A conventional personal computer 1 with a processor unit 2, display screen 3, keyboard 4 and mouse 5 is coupled to a scanner 6. The scanner 6 permits X-ray side-view images of the patient's head, known as lateral cephalograms, to be scanned, digitised and fed to the processor unit 2. The resulting cephalogram data thus provides data concerning the bony or hard tissue configuration in the patient's head. It will be understood that this data can alternatively be obtained directly from digital X-ray equipment, and the invention is not restricted to any particular method of hard tissue data capture. The processor unit 2 is also configured to receive data concerning the external or soft tissue appearance of the patient's head. This data may be captured using a 3D scanner 7, shown schematically. One example of the 3D scanner 7 is the Tricorder DSP Series 3D device supra.
The processor unit 2 includes a central digital processor, RAM, ROM and data storage media such as a hard disc and floppy disc connected on a common bus, in a conventional manner. The central processor can execute programs stored on the data storage media, so as to build the statistical models and display results obtained from them on the screen 3, and allow manipulation of the displayed data using the keyboard 4 and mouse 5. The programs build statistical models for the aforesaid model building and also execute the soft tissue prediction, as will become apparent hereinafter.
Using this configuration, a statistical model is built that allows a prediction of post-operative soft-tissue appearance to be made from the following data: pre-operative soft-tissue appearance, pre-operative hard-tissue appearance, and knowledge of the surgical treatment plan, i.e. knowledge of a proposed post-operative hard tissue appearance. The model building utilises the following components shown in Figure 2:
• A standard 2D PDM 10 with grey-level models describing the variability of the position and grey-level appearance of key bony landmarks identifiable in the pre-operative lateral cephalograms.
• A 3D PDM 11 describing the variability in shape of pre-operative 3D facial soft tissue appearance, modelled from 3D surfaces acquired using the 3D scanner 7.
• A standard 2D PDM 12 with grey-level models describing the variability of the position and grey-level appearance of bony landmarks in the post-operative lateral cephalograms.
• A 3D PDM 13 describing the variability in shape of post-operative 3D facial soft-tissue, modelled from 3D surfaces acquired using the scanner 7.
• A predictive model 14 which links the data from the models 10-13 together, describing the relationship between data from models 10-12 and data from model 13.
These models will now be considered in more detail.
2D PDM models of lateral cephalograms (models 10 & 12)
A training set of pre- and post-operative lateral cephalograms is obtained for human patients who have already undergone maxillo-facial surgery. The cephalograms thus constitute historical data for maxillo-facial procedures previously carried out and can be used to train the pre- and post-operative 2D PDMs 10, 12. The cephalograms are individually scanned using the scanner 6 and individually displayed on the screen 3 of the computer 1. The positions and appearance of key anatomical landmarks and structures present in both the pre- and post-operative lateral cephalograms are identified and modelled using a standard 2D PDM with multi-resolution grey-level models as described in Cootes TF et al "Building and Using Flexible Models Incorporating Grey-Level Information", and "Active Shape Models: Evaluation of a Multi-Resolution Method for Improving Image Search", supra.
Each of the pre- and post-operative models includes a number of standard anatomical landmarks useful to maxillo-facial surgeons (Nasion, Sella, Porion, Orbitale, Gonion, Pogonion, Menton, Gnathion, Upper Incisor Root, Upper Incisor Tip, Lower Incisor Root, Lower Incisor Tip, ANS, PNS, A Point, B Point). Figure 3 shows the structures modelled.
Pre-operative 2D Model 10
Considering the pre-operative cephalogram model 10, by analogy with Equation (3) a shape instance in the pre-operative cephalogram model can be described by the equation:
x_CephPre = x̄_CephPre + P_CephPre b_CephPre (8)

(where x_CephPre is a vector of pre-op cephalogram 2D landmark data, x̄_CephPre is the mean pre-op cephalogram 2D landmark data averaged over the training set, P_CephPre is a matrix of the most significant eigenvectors of the pre-op cephalogram training data covariance matrix, and b_CephPre is a set of weights, one for each eigenvector.)

Post-operative 2D Model 12
Similarly, for the post-operative cephalogram model 12, a shape instance can be described by the equation:
x_CephPost = x̄_CephPost + P_CephPost b_CephPost (9)

(where x_CephPost is a vector of post-op cephalogram 2D landmark data, x̄_CephPost is the mean post-op cephalogram 2D landmark data averaged over the training set, P_CephPost is a matrix of the most significant eigenvectors of the post-op cephalogram training data covariance matrix, and b_CephPost is a set of weights, one for each eigenvector.) In this example, identical anatomical landmarks are used in the post-operative cephalogram model to those in the pre-operative cephalogram model.
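By way of illustration, equations (8) and (9) share the standard linear PDM form x = x̄ + Pb. The following sketch (hypothetical names and toy numbers, not taken from the patent) shows how a shape instance is generated from a mean shape, an eigenvector matrix and a weight vector:

```python
import numpy as np

def pdm_instance(mean_shape, eigenvectors, b):
    """Return a shape instance x = x_bar + P b for a linear shape model."""
    return mean_shape + eigenvectors @ b

# Toy model: 3 landmarks (6 coordinates), 2 modes of variation.
mean_shape = np.zeros(6)
P = np.eye(6)[:, :2]          # two orthonormal modes of variation
b = np.array([1.5, -0.5])     # one weight per eigenvector
x = pdm_instance(mean_shape, P, b)
```

Varying the entries of b sweeps the instance through the modes of variation captured in the training set.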
3D Models of Facial Shape (models 11 & 13) The 3D shapes of the pre- and post-operative facial soft-tissue are each modelled using a 3D PDM. This involves capturing a training set of images of pre- and post-operative facial shape using the scanner 7 shown in Figure 1. The basic modelling technique used is standard, as described by Hill et al. supra, but an improved method for marking up 3D training data is used, which addresses two problems with the standard method of Hill et al, as will now be explained.
Hill's method is time-consuming, requiring of the order of 1000 landmarks to be marked on each training example, and for the present application a final 3D model is required that describes facial soft-tissue at a comparable resolution to the originally acquired 3D surfaces. However, it is difficult to manually identify a large number of reproducible 3D landmarks on facial surfaces. To overcome this problem, a method is used according to the invention which takes a smaller number of manually marked 3D facial landmarks, and uses them to interpolate a large number of landmarks over the whole facial surface. The improved method exploits an assumption that the captured facial surface can be represented as a visible surface representation whereby, in a particular co-ordinate frame, the facial surface height z can be described as a single-valued function of x and y. This turns the landmark mark-up problem into a 2.5D (or 2D with depth) problem.
The improved method used in accordance with this example of the invention extends the 2D face-modelling technique of Lanitis et al, supra, from 2D into 2.5D. The following steps are carried out:
1) A texture-mapped, triangulated 3D facial surface is acquired for each training example using the Tricorder DSP Series 3D capture system. The acquisition is done with each person face-on to the capture system as shown in Figure 4. The system includes an array of digital cameras C1-C4 directed face-on to the patient's face, which is illuminated with spatially textured light from a source (not shown), and the outputs of the cameras are processed to produce data corresponding to a texture-mapped, triangulated 3D facial surface.
2) Each texture-mapped, triangulated 3D facial surface is converted into a 2.5D depth-map and an image of the corresponding texture. This is done by calculating a virtual pin-hole camera model which is the average of the 4 (pre-calibrated) Tricorder DSP Series camera models shown in Figure 4, and re-projecting the 3D facial surface using this camera model to give a 2.5D depth-map and texture image. A depth-map is defined to be a 2D array D of 3D points D(i, j) = (x_ij, y_ij, z_ij) orthogonal to the depth direction z. Thus each point D(i, j) can be considered to lie at a depth z from a common plane (x, y). The values of x and y are stored as well as the corresponding depth z.
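The depth-map data structure described in step 2) can be illustrated with a simplified orthographic stand-in for the averaged pin-hole re-projection (the function name, grid parameters and toy points below are illustrative assumptions, not the patent's implementation):

```python
import numpy as np

def make_depth_map(points, grid_w, grid_h, x_range, y_range):
    """Bin 3D points into a 2D array D where D[i, j] = (x, y, z).
    NaN marks cells with no data."""
    D = np.full((grid_h, grid_w, 3), np.nan)
    xs = np.linspace(*x_range, grid_w)
    ys = np.linspace(*y_range, grid_h)
    for x, y, z in points:
        j = np.argmin(np.abs(xs - x))   # nearest grid column
        i = np.argmin(np.abs(ys - y))   # nearest grid row
        D[i, j] = (x, y, z)             # store x, y and the depth z
    return D

pts = [(0.0, 0.0, 5.0), (1.0, 1.0, 7.0)]
D = make_depth_map(pts, grid_w=2, grid_h=2, x_range=(0, 1), y_range=(0, 1))
```

This keeps the single-valued z = f(x, y) property the method assumes; cells never hit by a surface point stay NaN.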
3) Each depth-map texture image is then treated as a simple image and a relatively small (~80) set of reproducible 2D points is manually marked on each image. Figure 5 shows an example marked-up texture image. The marked points consist of two types: i) landmark points (shown as filled dots 15) - distinctive facial features or positions which can be reliably marked on each example image, and ii) pseudo-landmark points (shown as unfilled dots 16) - intermediate points which are equally spaced along the shape boundary between the distinctive landmark points.
4) Using the method of Lanitis supra, the marked 2D points are used to warp each image and depth-map into a common 'shape-free' frame using 2D thin-plate spline (TPS) interpolation. In the 'shape-free' frame, any pixel (x, y) in a given training example depth-map is nominally in correspondence with the same pixel in every other example depth-map. Thus, a small number of 2D landmark points has been used to produce texture-map and depth-map correspondences over the whole face.
5) Dense 2D re-sampling of the 'shape-free' depth-maps produces a set of 3D 'landmark' points for each example. Only points for which a data-point exists in all training examples are included in this example of the model.
6) A standard 3D PDM is built from the training data.
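Step 6) builds the PDM by principal component analysis of the corresponded landmark vectors. A minimal sketch, assuming the shapes have already been aligned into a common frame (names and toy data are illustrative):

```python
import numpy as np

def build_pdm(shapes, n_modes):
    """shapes: (n_examples, n_coords) array of landmark vectors.
    Returns the mean shape, the n_modes most significant eigenvectors
    of the training covariance matrix, and their eigenvalues."""
    mean = shapes.mean(axis=0)
    cov = np.cov(shapes - mean, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_modes]   # most significant first
    return mean, vecs[:, order], vals[order]

rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 6))              # 20 toy training examples
mean, P, eigvals = build_pdm(shapes, n_modes=2)
```

The retained eigenvectors P and weights b then describe any training-like shape via x = x̄ + Pb, as in equations (8)-(11).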
It will be understood that in a modification of the described method, the data mark-up process could be straightforwardly extended from 2.5D into full 3D by using 3D mark-up and 3D TPS.
Two 3D facial soft-tissue models are produced, as follows:
Pre-operative 3D soft-tissue Model 11
By analogy with Equation (3), a shape instance in the pre-operative 3D soft-tissue model 11 can be described by the equation:
x_3DPre = x̄_3DPre + P_3DPre b_3DPre (10)

(where x_3DPre is a vector of pre-op 3D soft-tissue data, x̄_3DPre is the mean pre-op 3D soft-tissue data averaged over the training set, P_3DPre is a matrix of the most significant eigenvectors of the pre-op 3D soft-tissue training data covariance matrix, and b_3DPre is a set of weights, one for each eigenvector.)

Post-operative 3D soft-tissue Model 13
By analogy with Equation (3), a shape instance in the post-operative 3D soft-tissue model can be described by the equation:
x_3DPost = x̄_3DPost + P_3DPost b_3DPost (11)

(where x_3DPost is a vector of post-op 3D soft-tissue data, x̄_3DPost is the mean post-op 3D soft-tissue data averaged over the training set, P_3DPost is a matrix of the most significant eigenvectors of the post-op 3D soft-tissue training data covariance matrix, and b_3DPost is a set of weights, one for each eigenvector.) In this example, identical 3D landmarks are used in the post-operative 3D soft-tissue model to those in the pre-operative 3D soft-tissue model.
Referring to Figure 6, the building of the models 10-13 is shown schematically as steps S1-S4.
Predictive Model 14
Once the pre- and post-operative 2D cephalogram and 3D soft-tissue models 10-13 have been built, the combined predictive model 14 that describes the relationship between the four individual models is prepared. This involves steps S5 and S6 shown in Figure 6, which will now be described in detail.
Each training example for the predictive model 14 consists of a measurement vector x_Predict that is the concatenation of 4 blocks of data:
1) a vector b_CephPre of length n_CephPre representing the pre-operative 2D bony structure of the face in parametric form. b_CephPre is calculated from the raw 2D landmark point data x_CephPre by inverting equation (8),
2) a vector b_3DPre of length n_3DPre representing the pre-operative 3D soft tissue structure of the face in parametric form. b_3DPre is calculated from the raw 3D landmark point data x_3DPre by inverting equation (10),
3) a vector b_CephPost of length n_CephPost representing the post-operative 2D bony structure of the face in parametric form. b_CephPost is calculated from the raw 2D landmark point data x_CephPost by inverting equation (9), and
4) a vector b_3DPost of length n_3DPost representing the post-operative 3D soft tissue of the face in parametric form. b_3DPost is calculated from the raw 3D landmark point data x_3DPost by inverting equation (11).
The concatenation of these blocks of data is carried out at step S5 in Figure 6.
In the manner described previously in relation to prior predictive models, each block of data making up x_Predict is normalised by dividing by its total training set variance, so that each type of data gives a contribution of equal weight to the combined model, i.e.:
x_Predict = { b_CephPre,1/σ_CephPre, …, b_CephPre,n_CephPre/σ_CephPre, b_3DPre,1/σ_3DPre, …, b_3DPre,n_3DPre/σ_3DPre, b_CephPost,1/σ_CephPost, …, b_CephPost,n_CephPost/σ_CephPost, b_3DPost,1/σ_3DPost, …, b_3DPost,n_3DPost/σ_3DPost } (12)

(where the normalisation factors σ_CephPre, σ_3DPre, σ_CephPost and σ_3DPost are given by the total training set variance of measurement vectors b_CephPre, b_3DPre, b_CephPost and b_3DPost respectively.) The combined predictive model is then (in step S6 of Fig. 6) built from the training data by Principal Component Analysis, using the method described previously in relation to prior predictive models. Thus, an instance of the predictive model can be described by the equation:
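The block normalisation and concatenation of equation (12) can be sketched as follows (block names, σ values and toy vectors are illustrative assumptions):

```python
import numpy as np

def build_predict_vector(blocks):
    """blocks: list of (b_vector, sigma) pairs.
    Each block is divided by its total training-set variance sigma so
    every block contributes equal weight, then all are concatenated."""
    return np.concatenate([b / sigma for b, sigma in blocks])

b_ceph_pre, b_3d_pre = np.array([2.0, 4.0]), np.array([3.0])
b_ceph_post, b_3d_post = np.array([1.0]), np.array([5.0, 10.0])
x_predict = build_predict_vector([
    (b_ceph_pre, 2.0), (b_3d_pre, 3.0), (b_ceph_post, 1.0), (b_3d_post, 5.0),
])
```

One such x_Predict vector per training case feeds the Principal Component Analysis that produces the combined model of equation (13).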
x_Predict = x̄_Predict + P_Predict b_Predict (13)

(where x_Predict is the predictive model instance, x̄_Predict is the mean predictive model data averaged over the training set, P_Predict is a matrix of the most significant eigenvectors of the predictive model training data covariance matrix, and b_Predict is a set of weights, one for each eigenvector.) A useful predictive model can be built from of the order of 100 (or more) training examples, each example containing the data for a single example of a bimaxillary osteotomy procedure. Adding further training data improves the accuracy of the predictive model.
Soft-Tissue Prediction
Once the trained predictive model has been produced, it can be used to predict the outcome of a surgical maxillo-facial procedure. For example, a surgeon may propose a procedure which involves breaking a patient's jaw and moving the jaw-line by resetting the jaw. The resulting change in the 3D physical appearance of the face produced by the procedure depends on the rearrangement of bony material produced by the surgery and has been difficult to predict and explain to the patient. The actual and the perceived success of the procedure depends greatly on the skill, experience and communication skills of the surgeon. In this example, the method according to the invention allows the surgeon to input a proposed procedure making reference to a 2D cephalogram of the patient and predict the 3D soft tissue outcome, i.e. the facial appearance after carrying out the surgery.
The main process steps are shown in Figure 7. At step S7, a standard pre-operative lateral cephalogram of the patient is acquired by conventional X-ray techniques, which is scanned by means of the scanner 6, and the resulting data is supplied to the processor 2 shown in Figure 1. Then, at step S8, the 2D captured data for the pre-operative lateral cephalogram is converted into a parametric form by fitting the 2D pre-operative lateral cephalogram model 10 to the cephalogram of the patient.
At step S9, a pre-operative 3D facial soft-tissue surface image of the patient is acquired using the 3D Tricorder DSP Series device. The corresponding data is sent from scanner 7 to the processor 2. At step S10 the captured pre-operative 3D facial soft-tissue surface data is converted into a parametric form by fitting the 3D facial soft-tissue model 11 to the 3D facial soft-tissue surface.
At step S11, the surgical treatment plan is set up by manipulating the 2D landmarks on the pre-operative lateral cephalogram. This process is used to define an instance of the post-operative 2D cephalogram model 12.
The resulting data are supplied as inputs to the predictive model 14 which, at steps S12 and S13, uses the pre-op lateral cephalogram parameters, pre-op 3D soft-tissue parameters and surgical treatment plan to predict post-op 3D soft-tissue shape and appearance.
These steps will now be described in more detail.
Fitting 2D Pre-Operative Lateral Cephalogram Model to Cephalogram Data (Step S8)
The 2D pre-operative lateral cephalogram model is fitted to the pre-operative lateral cephalogram using the standard multi-resolution ASM of Cootes et al "Active Shape Models: Evaluation of a Multi-Resolution Method for Improving Image Search", supra. The fitting algorithm determines the pre-operative cephalogram model shape parameters b_CephPre which best fit the given cephalogram, and also the 2D location, orientation and scaling of the model instance in the cephalogram. This permits the cephalogram to be characterised in terms of a small set of shape parameters b_CephPre, from which the aforementioned corresponding anatomical landmark point positions x_CephPre can be calculated.
The fitting algorithm is run on the processor unit 2 in Figure 1, and the resulting location of the landmark points relative to the cephalogram of the patient may be displayed on the screen 3 of the computer to provide the user with confirmation that the 2D pre-operative model has been satisfactorily fitted to the bony tissue image of the patient. If the automatic fit of the cephalogram model to the cephalogram is not acceptable to the clinician, the results of the fitting can be manually improved by moving any incorrectly positioned model landmark points, and updating b_CephPre accordingly using the iterative method described in "Active Shape Models - Smart Snakes" by Cootes et al supra. This process may be carried out using the mouse 5 (Figure 1) selectively to drag the display of landmark points of the 2D model so as to get a better fit.
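The manual-correction step relies on projecting the moved landmark positions back into model space and constraining the weights. A hedged sketch of one such update, with each weight clamped to ±3 standard deviations of its mode (names and toy numbers are illustrative, not the patent's code):

```python
import numpy as np

def update_shape_params(x, mean, P, eigvals):
    """Project displaced landmarks x into model space and clamp each
    weight to +/- 3 standard deviations of its mode."""
    b = P.T @ (x - mean)                  # project into model space
    limit = 3.0 * np.sqrt(eigvals)        # 3 s.d. per mode
    return np.clip(b, -limit, limit)

mean = np.zeros(4)
P = np.eye(4)[:, :2]                      # two modes of variation
eigvals = np.array([1.0, 4.0])
x_moved = np.array([10.0, 1.0, 0.0, 0.0]) # a landmark dragged far out
b = update_shape_params(x_moved, mean, P, eigvals)
```

The clamp keeps the corrected shape plausible with respect to the training set, which is the point of the iterative "Smart Snakes" style update.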
Fitting 3D Pre-Operative Facial Soft-Tissue Model to 3D Facial Surface Data (Step S10)
The 3D pre-operative facial soft-tissue model is fitted to the pre-operative 3D facial soft-tissue surface using an algorithm run on the processor unit 2 which is a variant of the Iterated Closest Point (ICP) algorithm described in "A method for registration of 3-D shapes", Besl, P. J. and McKay, N. D., IEEE PAMI, 14(2), pp 239-256, 1992. The original search algorithm of Hill et al described in "Model-Based Interpretation of 3D Medical Images", supra, was developed for deforming 3D models to fit to 3D volumetric image data, whereas the modified version of the ICP algorithm deforms an initial 3D PDM in both pose and shape to produce the best local fit to 3D surface data. The algorithm proceeds as follows.
1) Initialise the position and shape of the 3D pre-operative facial soft-tissue model 11. A reasonable initialisation is found by calculating the centroid and scale of the pre-operative 3D facial soft-tissue surface, and initialising a 3D pre-operative facial soft-tissue model instance of mean shape with this position and scale, and an identity rotation matrix.
2) For each 3D PDM landmark point, find the closest point on the pre-operative surface. This gives a vector of updated model points x'_3DPre which indicates to where each 3D PDM landmark point should be moved.
3) Update the 3D pre-operative facial model pose and shape parameters to produce an instance of model 11 which gives the best least squares fit to x'_3DPre, and which is also within 3 standard deviations of the mean model shape, as described by Hill et al in "Model-Based Interpretation of 3D Medical Images", supra.
4) Iterate Steps 2) and 3) until convergence occurs.
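Steps 1) to 4) can be sketched as a simplified ICP-style loop. For brevity this sketch updates shape parameters only, holding the pose (s, t, R) fixed; a full implementation would also solve a similarity transform at each iteration (all names and toy data are illustrative assumptions):

```python
import numpy as np

def fit_pdm_to_surface(surface, mean, P, eigvals, n_iters=10):
    """surface: (N, 3) surface points; mean, P, eigvals: a 3D PDM.
    Repeatedly move each model landmark to its closest surface point,
    then re-fit the shape weights with a 3-s.d. clamp (steps 2 and 3)."""
    b = np.zeros(P.shape[1])
    limit = 3.0 * np.sqrt(eigvals)
    for _ in range(n_iters):
        x = (mean + P @ b).reshape(-1, 3)      # current model landmarks
        # Step 2: closest surface point for each landmark.
        idx = [np.argmin(((surface - p) ** 2).sum(axis=1)) for p in x]
        targets = surface[idx].ravel()
        # Step 3: least-squares shape update, clamped to 3 s.d.
        b = np.clip(P.T @ (targets - mean), -limit, limit)
    return b

surface = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mean = np.array([0.1, 0.0, 0.0, 0.9, 0.0, 0.0])   # two toy landmarks
P = np.eye(6)[:, :2]
eigvals = np.array([1.0, 1.0])
b = fit_pdm_to_surface(surface, mean, P, eigvals)
```

On this toy example the first landmark is pulled onto the nearest surface point within one iteration, after which the loop has converged (step 4).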
A display of the resulting parameterised data may be provided on the screen 3 of the computer. The process allows the pre-operative 3D facial soft-tissue surface to be characterised automatically in terms of a small set of shape parameters b_3DPre, and a 3D model pose defined in terms of an isotropic scaling s, a translation vector t and a rotation matrix R.

Input Surgical Treatment Plan (Step S11)
The surgical treatment plan is input using a similar user interface to that of existing systems such as OTP and QuickCeph supra. The pre-operative lateral cephalogram acquired at step S7 is displayed on the screen 3 with the anatomical landmark point positions x_CephPre marked on it. The surgeon then indicates the proposed changes to make during surgery by manipulating the bony landmark points with the mouse 5 or by means of the keyboard 4, to give a new set of landmark point positions x_CephPost indicating how the mandible and/or maxilla will move during surgery. Figure 8a is a schematic illustration of the pre-operative lateral cephalogram of the patient, and Figure 8b illustrates the planned post-operative configuration to be achieved by surgery. This involves breaking the jaw and moving it forward, and this is simulated by making corresponding changes to the location of the landmark points on the screen 3. The resulting configuration of the landmark points is then inputted into the post-operative 2D model 12 such that x_CephPost can then be used to calculate the best-fit 2D post-operative lateral cephalogram model parameters b_CephPost using the iterative method described in Cootes et al "Active Shape Models - Smart Snakes" supra.
Prediction (Step S12)
The parameterised form of the pre-operative data (b_CephPre, b_3DPre and 3D surface model pose s, t, R), and the parameterised form of the treatment plan (b_CephPost), are used to calculate a prediction of post-operative soft-tissue shape and appearance.
This is done as follows:
1. Use the combined predictive model 14 described by equation (13), and the methods described above generally in relation to prior predictive models, to use the measurements b_CephPre, b_3DPre and b_CephPost to predict b_3DPost by solution of a weighted linear least squares problem. The resulting instance of b_3DPost is thus a prediction of the post-operative 3D soft-tissue appearance in parametric form.
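One way to realise this step (a sketch under stated assumptions, not necessarily the patent's exact weighted formulation) is to split the combined model's eigenvector matrix into rows for the known blocks (pre-op cephalogram, pre-op 3D, treatment plan) and rows for the unknown post-operative 3D block, solve a linear least squares problem for the combined weights b_Predict, and reconstruct the unknown block:

```python
import numpy as np

def predict_unknown_block(x_known, mean, P, known_rows, unknown_rows):
    """Solve for combined-model weights b from the known rows of
    equation (13), then reconstruct the unknown rows with those weights."""
    b, *_ = np.linalg.lstsq(P[known_rows], x_known - mean[known_rows],
                            rcond=None)
    return mean[unknown_rows] + P[unknown_rows] @ b

# Toy combined model: 4 entries, rows 0-2 known, row 3 unknown, one mode.
mean = np.zeros(4)
P = np.array([[1.0], [0.0], [0.0], [2.0]])
x_known = np.array([3.0, 0.0, 0.0])
pred = predict_unknown_block(x_known, mean, P, [0, 1, 2], [3])
```

Here the single combined mode ties the known and unknown blocks together, so fitting the known entries determines the predicted unknown entry.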
2. Convert b_3DPre into a corresponding set of 3D surface model points x_3DPre using equation (10). Transform x_3DPre into the correct 3D frame of reference by applying the 3D pre-operative surface model pose s, t, R to each 3D point in x_3DPre to give x'_3DPre.

3. Convert b_3DPost into a corresponding set of 3D surface model points x_3DPost using equation (11). Transform x_3DPost into the correct 3D frame of reference by applying the 3D pre-operative surface model pose s, t, R to each 3D point in x_3DPost to give x'_3DPost. Although x'_3DPost itself gives a reasonable prediction of 3D soft-tissue post-operative shape, a more accurate method is given below.
4. Calculate the change in parametric model points δx_3D between pre- and post-operative 3D surface model points:
δx_3D = x'_3DPost − x'_3DPre (14)

5. Apply the change in parametric model points δx_3D to the original texture-mapped 3D pre-operative facial surface. For each point p in the pre-operative facial surface:
5.1 Calculate p', the closest point to p in x'_3DPre.
5.2 Extract the corresponding change δp' in the position of p' from δx_3D.
5.3 Add δp' to p.
The output of this algorithm is a version of the 3D pre-operative facial surface which has been modified to simulate the required maxillo-facial surgery.
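Step 5 can be sketched as a nearest-neighbour warp of the dense surface (the function and variable names are illustrative assumptions):

```python
import numpy as np

def warp_surface(surface, model_pre, delta):
    """surface: (N, 3) dense pre-op points; model_pre: (M, 3) pre-op
    model points x'_3DPre; delta: (M, 3) per-model-point displacements.
    For each surface point, add the displacement of its closest model
    point (sub-steps 5.1 to 5.3)."""
    warped = surface.copy()
    for k, p in enumerate(surface):
        j = np.argmin(((model_pre - p) ** 2).sum(axis=1))  # 5.1 closest p'
        warped[k] = p + delta[j]                           # 5.2 + 5.3
    return warped

model_pre = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
delta = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 2.0]])
surface = np.array([[0.1, 0.0, 0.0], [9.9, 0.0, 0.0]])
out = warp_surface(surface, model_pre, delta)
```

A production version would interpolate displacements between model points rather than snapping to the single nearest one, but the nearest-neighbour form captures the algorithm as stated.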
The resulting post-operative 3D data produced either by step 3 or step 5 of this predictive process is then displayed to the surgeon on the screen 3. Figure 9a shows the display of the instance of the pre-operative 3D model 11 for the patient, and Figure 9b illustrates the 3D post-operative shape predicted by the predictive model 14 for the surgeon's treatment plan. Thus the surgery planned in 2D as shown in Figures 8a and 8b is predicted to produce changes in 3D as shown in Figures 9a and 9b. The surgeon can then, if desired, modify the planned surgery in the screen display of Figure 8b and observe the outcome in the display of Figure 9b.
This enables the surgical procedure to be optimised to achieve the desired aesthetic outcome. The displays of Figures 8 and 9 may be shown to the patient to explain and seek approval for the proposed procedure.
Many modifications and variations to the described method fall within the scope of the invention. Whilst in the described example a hybrid 2D-3D predictive model is employed, a number of variants on this scheme could also be used, depending on the available training data and/or treatment planning protocol. For example, it is possible to link pre- and post-operative 2D cephalogram data to pre- and post-operative 2D soft-tissue shape extracted from a 2D photograph of the patient.
Also, it would be possible to link pre- and post-operative 3D X-ray CT data to 3D soft-tissue shape extracted from a 3D surface scan. Other possibilities will be evident to those skilled in the art. Also, the training of the predictive model 14 may be carried out on an ongoing basis. In the described example, the model training was carried out as an initial step, but in addition, the data for subsequent surgical procedures may be used to update the training of the models.
Furthermore, the invention is not restricted to maxillo-facial and cranio-facial surgery and can be used for other procedures where it is useful to predict changes in soft tissue shape resulting from a proposed operation to change a corresponding relatively hard tissue configuration, and is not restricted to human surgery. The invention may also be used for operations on non-animate objects for which a statistical correlation occurs between an inner structure and an outer structure covering the inner structure, so as to predict changes in the shape of the outer structure produced by a proposed operation to change the inner structure. Conditions other than the shape of the object may be predicted by means of the invention.

Claims (33)

Claims
1. A method of predicting changes for an object with first and second characteristics that are distinct from but statistically correlated with one another, comprising: providing a statistical model configuration of at least one mode of variation of a first set of variables relating to the first characteristic of the object, and at least one mode of variation of a second set of variables relating to the second characteristic of the object, planning a change to the first set of variables for the object, and using the model configuration to predict a corresponding change to the second set of variables for the object from data corresponding to the planned change to the first set.
2. A method according to claim 1 wherein the statistical model configuration includes a first parametric model of the first characteristic of the object, a second parametric model of the second characteristic of the object and a predictive model that characterises a statistical correlation between the models, the method comprising: fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, planning a change to the condition for the object so as to provide parameterised data for the first characteristic of the object in a second different condition, and utilising the parameterised data and the predictive model to provide parameterised data corresponding to a prediction of the change of second characteristic of the object in the second condition.
3. A method according to claim 2 wherein the statistical model configuration includes a first parametric model of the first characteristic of the object in the first object condition, a second parametric model of the second characteristic of the object in the first object condition, a third parametric model of the first characteristic of the object in the second object condition, a fourth parametric model of the second characteristic of the object in the second object condition, and the predictive model characterises a statistical correlation between the models, the method comprising: fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, planning the second condition for the object using the third model to provide parameterised data for the first characteristic of the object in the second condition, and utilising the parameterised data and the predictive model to provide parameterised data for the fourth model to predict the second characteristics of the object in the second condition.
4. A method according to claim 3 including acquiring data concerning the first characteristic of the object in its first condition and fitting the first parametric model to the acquired data.
5. A method according to claim 4 wherein the planning of the second shape condition for the object includes manipulating the third parametric model relative to the acquired data concerning the first characteristic of the object in its first condition.
6. A method according to claim 5 including displaying the acquired data concerning the first characteristic of the object in its first condition and manipulating a display of the third model relative to the displayed data.
7. A method according to any one of claims 3 to 6 including displaying an instance of the fourth model corresponding to the parameterised data therefor produced by means of the predictive model to display a prediction of the second characteristic of the object in the second object condition.
8. A method according to any one of claims 3 to 7 including acquiring data concerning the second characteristic of the object in its first condition and fitting the second model thereto.
9. A method according to claim 8 including modifying the data in accordance with the parameterised data produced by the fourth model and displaying the modified data to display a prediction of the second characteristic of the object in the second object condition.
10. A method according to any preceding claim wherein the first and second characteristics relate to the shape of the object.
11. A method according to any preceding claim wherein the first characteristic relates to information concerning an interior structure of the object.
12. A method according to any preceding claim wherein the second characteristic relates to information concerning an outer structure of the object.
13. A method according to any preceding claim carried out to predict the outcome of a medical operative procedure wherein the object comprises a patient, the first characteristic corresponds to the shape of underlying hard tissue structure of the patient and the second characteristic corresponds to the shape of a soft tissue structure that covers the hard tissue structure.
14. A method according to claim 13 wherein the first condition relates to the shape of the patient before carrying out the operative procedure and the second condition relates to the shape of the patient after carrying out the operative procedure.
15. A method according to claim 13 or 14 including acquiring data from a pre-operative lateral cephalogram concerning the shape of underlying hard tissue structure of the patient.
16. A method according to claim 13, 14 or 15 including acquiring data from a pre-operative 3D scan of the patient for the shape of the soft tissue structure.
17. A computer program to be run on a computer to perform a method as claimed in any preceding claim.
18. Apparatus configured to perform a method as claimed in any one of claims 1 to 16.
19. A computer software package to be run on a computer to predict changes for an object that has first and second characteristics that are distinct from but statistically correlated with one another, the package being operable to provide a statistical model configuration of at least one mode of variation of a first set of variables relating to the first characteristic of the object, and at least one mode of variation of a second set of variables relating to a second characteristic of the object, such that by planning a change to the first set of variables for the object, the model configuration is operable to predict a corresponding change to the second set of variables for the object from data corresponding to the planned change to the first set.
20. A package according to claim 19 wherein the statistical model configuration includes a first parametric model of the first characteristic of the object, a second parametric model of the second characteristic of the object and a predictive model that characterises a statistical correlation between the models, such that by fitting the first and second models to the corresponding characteristics of an object in the first condition to provide parameterised data for the first and second characteristics of the object in the first condition, and planning a change to the condition for the object so as to provide parameterised data for the first characteristic of the object in a second different condition, the parameterised data and the predictive model provide parameterised data corresponding to a prediction of the change of second characteristic of the object in the second condition.
21. A system for predicting shape changes for an object that has first and second shape characteristics that are distinct from but statistically correlated with one another, comprising: a statistical model configuration including a first parametric model (10) of the first shape characteristics of the object in a first condition of the object, a second parametric model of the second shape characteristics of the object in the first object condition, a third parametric model of the first shape characteristics of the object in a second different object condition, a fourth parametric model of the second shape characteristics of the object in the second object condition, and a predictive model (14) that characterises a statistical correlation between the models, a model fitting system operable to fit the first and second models to the corresponding shape characteristics of an object in the first shape condition to provide parameterised shape data for the first and second shape characteristics of the object in the first condition, a planning input system operable to define a second shape condition for the object using the third model to provide parameterised shape data for the first shape characteristics of the object in the second condition, and a processor operable to utilise the parameterised shape data and the predictive model to provide parameterised data for the fourth model to predict the second shape characteristics of the object in the second condition.
22. A medical analysis tool comprising a processor operable to provide a statistical model configuration of at least one mode of variation of a first set of variables relating to shape characteristics of a relatively hard tissue part of a living body, and at least one mode of variation of a second set of variables relating to shape characteristics of a relatively soft tissue part of the body that overlies the relatively hard tissue part, an input operable to plan a change to the first set of variables for a patient, such that the processor utilises the model configuration to predict corresponding changes to the second set of variables for the patient, whereby to predict changes in shape of the soft tissue part that correspond to changes planned for the hard tissue part.
23. A tool according to claim 22 wherein the statistical model configuration includes a first parametric model of pre-operative hard tissue shape characteristics of the patient, a second parametric model of pre-operative soft tissue shape characteristics of the patient, a third parametric model of post-operative hard tissue shape characteristics of the patient, a fourth parametric model of post-operative soft tissue shape characteristics of the patient, and a predictive model that characterises a statistical correlation between the models.
24. A tool according to claim 23 including:
a model fitting system operable to fit the first and second models to the corresponding pre-operative hard and soft tissue shape characteristics of a patient to provide parameterised shape data for the pre-operative hard and soft tissue shape characteristics, and a planning input system operable to define a post-operative hard tissue condition for the patient using the third model to provide parameterised shape data for the post-operative hard tissue condition.
25. A tool according to claim 24 wherein the processor is operable to utilise the parameterised shape data and the predictive model to provide parameterised data for the fourth model to predict post-operative soft tissue shape characteristics for the patient corresponding to the planned post-operative hard tissue condition.
26. A tool according to any one of claims 22 to 25 wherein the statistical model configuration includes at least one point distribution model.
27. A tool according to any one of claims 22 to 26 including a display device configured to provide a visual display of the predicted post-operative soft tissue condition.
28. A tool according to claim 27 wherein the display device is configured to provide a visual display of at least one of the pre-operative soft and hard tissue condition and the planned post-operative hard tissue condition.
29. A tool according to any one of claims 22 to 28 including an input to receive data corresponding to a 2D representation of the pre-operative hard tissue condition for the patient, and an input to receive data corresponding to a 3D representation of the pre-operative soft tissue condition for the patient.
30. A computer program to be run by the processor claimed in any one of claims 22 to 29 to provide said statistical model configuration.
31. A method of training a medical analysis tool as claimed in any one of claims 22 to 30 including acquiring a set of training data corresponding to the model configuration and determining modes of variation thereof.
32. A system for predicting shape changes for an object substantially as hereinbefore described with reference to the accompanying drawings.
33. A method of predicting shape changes for an object substantially as hereinbefore described with reference to the accompanying drawings.
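The training step of claim 31 — acquiring a set of training data and determining its modes of variation — is conventionally done in point distribution models by retaining the smallest number of principal modes that explains a target fraction of the total variance. The following is a hedged sketch under that assumption; the 95% threshold, data, and function names are illustrative and not specified by the patent.

```python
import numpy as np

def modes_of_variation(training_shapes, variance_kept=0.95):
    """Determine the modes of variation of a training set, keeping the
    smallest number of modes whose cumulative variance reaches
    `variance_kept` (a common point-distribution-model convention)."""
    mean = training_shapes.mean(axis=0)
    centred = training_shapes - mean
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    var = s**2 / (len(training_shapes) - 1)    # variance along each mode
    cumulative = np.cumsum(var) / var.sum()
    n = int(np.searchsorted(cumulative, variance_kept)) + 1
    return mean, vt[:n], var[:n]

rng = np.random.default_rng(1)
# Synthetic training set: 30 shapes of 10 coordinates with low-rank
# structure plus a little measurement noise
basis = rng.normal(size=(2, 10))
data = rng.normal(size=(30, 2)) @ basis + rng.normal(scale=0.01, size=(30, 10))
mean, modes, var = modes_of_variation(data)
```

Each retained row of `modes` is one mode of variation; new shapes can then be expressed as the mean plus a weighted sum of these modes.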
GB0016151A 2000-06-30 2000-06-30 Predicting changes in characteristics of an object Withdrawn GB2364494A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB0016151A GB2364494A (en) 2000-06-30 2000-06-30 Predicting changes in characteristics of an object
PCT/GB2001/002828 WO2002003304A2 (en) 2000-06-30 2001-06-26 Predicting changes in characteristics of an object
AU2001266169A AU2001266169A1 (en) 2000-06-30 2001-06-26 Predicting changes in characteristics of an object

Publications (2)

Publication Number Publication Date
GB0016151D0 GB0016151D0 (en) 2000-08-23
GB2364494A true GB2364494A (en) 2002-01-23

Family

ID=9894812

Country Status (3)

Country Link
AU (1) AU2001266169A1 (en)
GB (1) GB2364494A (en)
WO (1) WO2002003304A2 (en)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0504172D0 (en) * 2005-03-01 2005-04-06 King S College London Surgical planning
RU2429539C2 (en) * 2005-09-23 2011-09-20 Конинклейке Филипс Электроникс Н.В. Method, system and computer programme for image segmentation
US20080226144A1 (en) * 2007-03-16 2008-09-18 Carestream Health, Inc. Digital video imaging system for plastic and cosmetic surgery
JP2013526934A (en) * 2010-05-21 2013-06-27 マイ オーソドンティクス プロプライエタリー リミテッド Appearance prediction after treatment
KR101223937B1 (en) * 2011-02-22 2013-01-21 주식회사 모르페우스 Face Image Correcting Simulation Method And System Using The Same
ES2651317T3 (en) 2011-03-01 2018-01-25 Dolphin Imaging Systems, Llc System and method to generate profile change using cephalometric monitoring data
US8650005B2 (en) 2011-04-07 2014-02-11 Dolphin Imaging Systems, Llc System and method for three-dimensional maxillofacial surgical simulation and planning
WO2012138627A2 (en) 2011-04-07 2012-10-11 Dolphin Imaging Systems, Llc System and method for simulated linearization of curved surface
US8417004B2 (en) 2011-04-07 2013-04-09 Dolphin Imaging Systems, Llc System and method for simulated linearization of curved surface
CN116778576A (en) * 2023-06-05 2023-09-19 吉林农业科技学院 Time-space diagram transformation network based on time sequence action segmentation of skeleton

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AT408623B (en) * 1996-10-30 2002-01-25 Voest Alpine Ind Anlagen METHOD FOR MONITORING AND CONTROLLING THE QUALITY OF ROLLING PRODUCTS FROM HOT ROLLING PROCESSES

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
INSPEC Abstract No. 6622309 Nonrigid 3D/2D registration of images using statistical models *
INSPEC Abstract No. 6628895 A simulation environment for maxillofacial surgery *
INSPEC Abstract No. 6690236 Advantages of computer assisted surgery *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10342431B2 (en) 2000-07-26 2019-07-09 Melanoscan Llc Method for total immersion photography
KR20220026844A (en) * 2020-08-26 2022-03-07 (주)어셈블써클 Method and apparatus for simulating clinical image
KR102475962B1 (en) * 2020-08-26 2022-12-09 주식회사 어셈블써클 Method and apparatus for simulating clinical image

Also Published As

Publication number Publication date
WO2002003304A2 (en) 2002-01-10
AU2001266169A1 (en) 2002-01-14
WO2002003304A3 (en) 2003-03-13
GB0016151D0 (en) 2000-08-23


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)