WO2000030337A2 - Three-dimensional handheld digital camera for medical applications - Google Patents

Three-dimensional handheld digital camera for medical applications

Info

Publication number
WO2000030337A2
WO2000030337A2 (PCT/US1999/027615)
Authority
WO
WIPO (PCT)
Prior art keywords
imaging system
images
imaging device
image
synthesizer
Prior art date
Application number
PCT/US1999/027615
Other languages
French (fr)
Inventor
Michael Stephanides
Kevin Neil Montgomery
Original Assignee
Oracis Medical Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oracis Medical Corporation
Publication of WO2000030337A2

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1079Measuring physical dimensions, e.g. size of the entire body or parts thereof using optical or photographic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/221Image signal generators using stereoscopic image cameras using a single 2D image sensor using the relative movement between cameras and objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/246Calibration of cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Theoretical Computer Science (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Processing (AREA)

Abstract

A hand-held imaging device is provided with a tracker for localizing its position and orientation during a manual scan of an object of interest. During an image acquisition phase, the imaging device captures images of the object from locations along a scan path and relays the images to a synthesizer which associates each captured image with the position of the imaging device during the capture. The captured images are stored by the synthesizer and are used subsequently, during a reconstruction phase, to reconstruct a three-dimensional representation of the object of interest.

Description

THREE-DIMENSIONAL HANDHELD DIGITAL CAMERA FOR MEDICAL APPLICATIONS
FIELD OF THE INVENTION
The invention relates to the field of imaging, and more particularly, to image-based modeling of three-dimensional objects.
DESCRIPTION OF RELATED ART
Image-based modeling has uses in many areas. One such area is medical applications, which include surgical planning, documentation of patient condition, teaching and education, treatment progress monitoring, quantitative measurement and patient communication. However, existing three-dimensional imaging devices are either too large (requiring great effort to bring the patient to the device), require too much time to acquire the image (which makes them impractical for scanning infants and small children), or have greatly limited accuracy. They are cumbersome to operate, expensive, error-prone, have strict lighting requirements, and are not very portable. For these reasons, they are unable to deliver the usable, high-quality three-dimensional images required in a clinical setting.
In all of the above application areas, the object of interest is patient skin. While skin has certain well-defined properties, one cannot rely on a perfectly Lambertian surface (there may be specular reflection), and in general one cannot assume that many features will be available on the surface. Only in the case of wound/burn estimation and melanoma detection is there a particular surface object being imaged that could provide features for correlation. Otherwise, the featurelessness of normal skin makes for difficult image acquisition.
In one prior art system, the Cyberware™ system, the assumption of a Lambertian surface gives rise to many difficulties. Voids are constantly a problem, and scanning time is unacceptable for use with infants and children. Also, the restriction to a single viewpoint/trajectory limits usefulness when occlusions and concavities are confronted. Finally, the reliance on laser light makes such systems unsuitable because of safety concerns.
The field of image-based modeling has existed since the start of computer vision work in the 1970s. Initial research focused on simple foreground/background extraction for industrial inspection applications. Later, work on three-dimensional object acquisition centered on stereo matching for extracting depth information. Concurrently, the use of shape from shading and other passive ranging techniques became widespread in planetary imaging. As this research progressed, structured lighting techniques such as laser striping and laser range-finding found more use in industrial inspection tasks. In addition, interest in autonomous robotic navigation drove the creation of much of the scene analysis and geometric representation work.
Image-based modeling refers to the creation of a three-dimensional model from one or more images. This may involve issues of camera calibration (stereo matching), generation of a three-dimensional model, and extraction of non-structural (texture) information. Image-based rendering is a related field in which a number of images are used to generate a new, novel view of an object or scene. A number of techniques exist in the literature, including sprite-based rendering, image warping/morphing, 2D manifold creation (lumigraph), and a newer technique referred to as layered depth images.
Along with these techniques, an issue that must be addressed is that of information representation. It is not enough merely to acquire the three-dimensional data; its storage and representation are crucial to defining its utility in an application. Various techniques such as depth maps, volumetric models (voxels), surface models, image-based representations, and scene decomposition models exist for this purpose. The selection of a particular representation model drives the types of operations that are possible on that data and is crucial to the overall system design.
The prior art has taken several approaches to image-based modeling.
Among these approaches are the following:
Structured Light Methods
The most predominant systems today for acquiring three-dimensional scans of an object use active light methods such as laser striping or interference (areal) mapping to derive the surface of the object. Well-known commercial systems from Cyberware™ and Laser Design™ stripe a thin beam of laser light across the object and use a calibrated camera to triangulate depth information. This process typically takes as much as 30 seconds to scan the object, the object must remain stationary, the system only views from one trajectory (hence has problems with occlusions/concavities), and it uses laser light, which raises safety concerns when used on humans. More fundamentally, the system makes assumptions about the surface of the object being non-reflective and having certain optical properties (otherwise "voids" or holes exist in the surface). Further, the system is not portable, requires reduction of ambient room light, and is prohibitively expensive in many applications.
Interference-based techniques (such as those used by Zeiss™ and Orthovision™) project a known moire pattern onto the object and, from the interference pattern captured from the camera, recover depth information. These systems typically require careful calibration and have difficulties with occlusions. Another structured light system is the ATOS system from Newport AG. In this system, a projector projects a series of moire patterns with decreasing spatial frequency onto the object to be captured while a CCD camera captures images of the patterns. This system overcomes the time limits of the previous systems (typically a few seconds) and uses visible light (and therefore eliminates concerns for optical damage due to laser light). However, it remains limited by only viewing from one viewpoint (occlusion/concavity issues), and, as with all structured light systems, there may be some interference by room lighting.
Volumes from Silhouettes
The Volumes from Silhouettes method is similar to computed tomography in that a set of images is captured from locations around the object and used to regenerate the surface. In computed tomography, parallel X-rays pass through the object to form an image, and a weighted backprojection algorithm is used to recover voxel information. This yields a volumetric representation of radiographic density throughout the tissue. Visible light can also be used, as in optical computed tomography. Should one assume the object to be opaque, however, a silhouette or profile of the object would be obtained in each image and, when reconstructed, would yield the outline of the object. This is essentially the idea behind the Volumes from Silhouettes technique.
As mentioned previously, the shape representation of the derived data could be volumetric (voxels) in nature or, since this technique assumes an opaque object, a set of generalized cones, projected from each view, could be used with an octree-based algorithm to yield a surface-based representation of the object.
This technique has the advantages that it is relatively simple to implement and fairly robust. It yields a closed surface, which is often of some benefit in later processing. However, it only produces essentially a convex hull of an object (and therefore has problems with concavities) and can be computationally intensive for large datasets and high-resolution imagery (i.e., it does not scale well). Further, while generally robust, it can be thwarted by segmentation issues (if the outline of the object is not well defined). It suffers from concavity and occlusion problems due to the predetermined viewpoints, and the high computation requirements and segmentation issues raise further concerns. Three-dimensional curve matching, discussed next, is expected not to work well due to the assumption of large, featureless areas of skin; in addition, its non-closed surface would require workarounds for later processing.
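By way of illustration only, the following sketch shows the core of the silhouette-carving idea just described, assuming already-calibrated 3x4 projection matrices and binary silhouette masks; all names are illustrative and not part of the disclosure. Every voxel whose projection falls inside the silhouette in every view is retained, yielding the visual hull (and hence only an approximate convex-like envelope of the object).

```python
import numpy as np

def carve_visual_hull(silhouettes, projections, grid_min, grid_max, resolution=64):
    """Carve a voxel occupancy grid from binary silhouette images.

    silhouettes : list of HxW boolean arrays (True = object pixel)
    projections : list of 3x4 camera projection matrices (assumed calibrated)
    grid_min, grid_max : 3-vectors bounding the reconstruction volume
    """
    xs = np.linspace(grid_min[0], grid_max[0], resolution)
    ys = np.linspace(grid_min[1], grid_max[1], resolution)
    zs = np.linspace(grid_min[2], grid_max[2], resolution)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    points = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)

    occupied = np.ones(points.shape[0], dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = points @ P.T                       # project voxel centres homogeneously
        u = uvw[:, 0] / uvw[:, 2]
        v = uvw[:, 1] / uvw[:, 2]
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros_like(occupied)
        ui = np.clip(u.astype(int), 0, w - 1)
        vi = np.clip(v.astype(int), 0, h - 1)
        hit[inside] = mask[vi[inside], ui[inside]]
        occupied &= hit                          # keep voxels seen as object in every view
    return occupied.reshape(resolution, resolution, resolution)
```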
Curve Matching
In Curve Matching, images are again captured surrounding an object. A feature detection preprocessing step is performed to find curves (both internal and on the surface of the object) in the images. Then, the curves derived from each dataset are matched within the epipolar plane to obtain the curves of the true three-dimensional surface of the object. In this case, a curve-based model is needed for the representation.
The main difficulty with this technique is that the curves do not often match precisely in three-dimensional space. This is due to a number of effects such as high curvature, image quantization effects, subtle segmentation differences, and numerical precision.
The benefits of this method are that it correctly estimates the edges of concave regions, it works best on smoothly curved objects and provides information about internal and external structure. However, it is easily overwhelmed in highly textured areas or areas devoid of any markings (featureless regions). Finally, the representation yields a set of curves which may not yield a closed (or even complete) surface.
Dense Stereo Matching
This technique produces a depth-map representation from a set of images surrounding the object. It accomplishes this by matching features along the epipolar line and thereby determining distance to a particular feature using simple triangulation. Also, since a correlation is being performed, one can further derive a "certainty map" of the object and use it to tailor confidence in the particular feature on the surface.
The advantages of such a system are that it provides explicit depth information along with a confidence weighting factor. Also, using multiple views from multiple locations can actually ameliorate other imaging artifacts and produce a more accurate description.
However, the requirement for surface features makes it difficult to use in highly textured regions (which would have many possible correlation peaks) or in featureless regions (no feature to match). Also, the point-based surface model can produce a sparse, incomplete, or incorrect surface. Finally, lighting effects (such as specular highlights) can produce an incorrect correlation result.
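The triangulation step underlying dense stereo matching can be illustrated with a minimal sketch, assuming a rectified image pair with known focal length (in pixels) and baseline; the brute-force sum-of-absolute-differences matcher and all names are assumptions for illustration, not a production implementation.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Depth from disparity for a rectified stereo pair: Z = f * B / d."""
    d = np.asarray(disparity, float)
    with np.errstate(divide="ignore"):
        depth = focal_px * baseline_m / d
    depth[d <= 0] = np.inf                       # unmatched pixels get no depth
    return depth

def match_row(left_row, right_row, window=5, max_disp=64):
    """Brute-force SAD matching along one epipolar line of a rectified pair."""
    left_row = np.asarray(left_row, float)
    right_row = np.asarray(right_row, float)
    w = len(left_row)
    half = window // 2
    disparities = np.zeros(w)
    for x in range(half, w - half):
        ref = left_row[x - half:x + half + 1]
        best_cost, best_d = np.inf, 0
        for d in range(0, min(max_disp, x - half) + 1):
            cand = right_row[x - d - half:x - d + half + 1]
            cost = np.abs(ref - cand).sum()      # sum of absolute differences
            if cost < best_cost:
                best_cost, best_d = cost, d
        disparities[x] = best_d
    return disparities
```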
Three-Dimensional Surface Fitting
Another class of techniques involves fitting a predetermined surface (or surface elements) to the image data. This model-based approach can provide superior performance when the object in question can be decomposed into the primitives used. A number of methods are presented in the literature including particle systems, tessellation and mesh simplification, and distance functions with isosurface extraction.
A collection of primitive surface elements (such as triangles) is used, each oriented in the local reference frame based on predictions of the surface position, normal, and curvature (if the primitive elements used are non-planar). Here the interaction constraints between elements can produce a smooth topology as well as deformation if required.
This class of techniques starts with a predetermined data representation and modifies it to conform to the acquired image data. Therefore, the data representation can provide many benefits in the domain where it is being applied. For example, the oriented particles method can conform to any shape and provide a complete object description, but also can provide easy support for interaction (virtual sculpting) and simulation (finite element/volume models). However, the interparticle dynamics are difficult to determine correctly, the method requires enormous computation to settle to a minimum energy state (and therefore doesn't scale well), and it doesn't provide for more global constraints.
Color (Texture) Recovery
For any of the above techniques, a number of methods exist for recovery of color information (image texture). In general, all the methods determine the projection of a particular patch of the surface of the object onto the view-plane image that was captured. They then extract, warp, and weight these sub-images to recover the color information and surface properties for the patch of the surface.
Among these methods, there are subspecialties involving surface recovery, computation of view-dependent texture maps, interpolation of view-dependent information, and lightfield/lumigraph methods.
Plane-Sweep Stereo/Layered Depth Images
Another, more recent technique is that of plane-sweep stereo and layered depth images. This set of techniques strives to overcome some of the problems with the techniques above. These shortcomings include difficulties with matching along the epipolar line (which produces noisy and sometimes incorrect correlation), depth-map issues (a single depth map breaks down when handling occluded objects), and the inability to deal with translucent objects. It strives to simultaneously recover disparities, color and transparency information, and handle occlusions. This class of techniques calculates the equation of each of the "planes" of image data (foreground/background relationships) existing in the image. From each view image, the object foreground/background relationship map is derived, and the information about these planes is combined to derive the equation of the object in three-dimensional space, explicitly recovering the depth information. This works for planar objects, but a generalization is required for more complicated geometries. A more generalized surface-sweep algorithm uses non-planar shape elements and is more conducive to real-world application. In this case, layered surface patches are derived and tracked from one view image to the next, and the equations defining the surface patch are progressively refined.
While these techniques can represent occluded regions and handle translucent objects (by maintaining a transparency parameter for each surface element), they have difficulty generalizing to other application domains (they were created for scene-based applications), require great amounts of computation, fail for scenes with narrow depth of field, and may require manual intervention to resolve ambiguities.
Also, it is important to note that the techniques outlined above are only appropriate for certain domains. In most instances of image-based modeling, the object to be scanned is captured by rotating the object (or the camera) around a single axis, as if on a turntable, and existing acquisition mechanisms are thus presented with attendant physical limitations as well as the limitations outlined above.
SUMMARY OF THE INVENTION
The invention overcomes the deficiencies of the prior art by providing a low cost, hand-held three-dimensional camera for medical and other applications. Medical applications include the ability to plan a surgery on a three-dimensional image of the patient, which is important to a surgeon and enables optimal results and reduced operating time. In breast cancer patients for example, creating a three-dimensional model of the healthy breast would enable the surgeon to plan the reconstruction after a mastectomy by estimating the required volume and shape to match the opposite breast as well as create a template for harvesting the exact shape of tissue from the abdominal wall. The same method can be used for other reconstructive procedures such as the fabrication of implants for craniofacial or any other body defects. Reviewing a three-dimensional patient image immediately before surgery would enable the surgeon to form a surgical plan that is realistic and does not rely on the recollection of the patient's defect during the preoperative visit.
Another important medical application of the three-dimensional imager in accordance with the invention is documentation. Hospitals all over the world are becoming computerized in order to provide optimal documentation of patient encounters. Until now, photographic documentation has been performed in the form of Polaroid prints, 35 mm photographs and slides. There is a need for three-dimensional images that can accurately represent patient defects, both to assist doctors reviewing the charts and for legal reasons.
A device in accordance with the invention also has great teaching significance. For example, the Children's Hospital at Stanford is currently evaluating possibilities for the creation of a library of congenital craniofacial malformations in children. They were unable to find a good method of obtaining images that could be used for both diagnostic and teaching purposes. The system in accordance with the invention would readily remedy this shortcoming.
Another application for a system in accordance with the invention is in treatment progress evaluation. As an example, researchers at Stanford University Hospital are evaluating a new treatment for postoperative swelling. The bulky and problematic prior art Cyberware™ scanner, discussed above, is currently used to obtain three-dimensional facial images before and after surgery in order to measure volume changes. Having a small, portable three-dimensional camera, such as that of the invention, that can be brought to the bedside would make it easier for the patients to participate in the clinical trial. In the treatment of wounds a number of options are available to the doctor. Being able to accurately measure the volume changes in a wound over time would allow the doctor to customize the treatment for each patient in order to achieve complete wound healing in the least amount of time.
The system in accordance with the invention can be used for additional body measurements. In the treatment of burns, a precise estimation of the body surface area involved is crucial to immediate fluid resuscitation as well as to planning subsequent skin graft sessions. Also, in patients with multiple pigmented lesions (nevi) on their body, creating a three-dimensional map of the skin would enable accurate follow-up and detection of lesions that have changed in shape or enlarged and require excision. This would enable excellent surveillance and excision of suspicious lesions that could be cancerous or precancerous.
Another important application of a system in accordance with the invention is in patient communication. It is difficult to communicate to the patient the expected outcome of a reconstructive or cosmetic procedure. The doctor usually relies on drawings or two-dimensional photographs to explain the process and illustrate the result. Often the patient is unable to understand what the final outcome will be. Being able to manipulate a three-dimensional image of the patient on the computer would show the end result more accurately and make the patient more informed about the proposed procedure.
In accordance with one aspect of the invention, sequential images together with position information are acquired and displayed. Software is used for surface and volume reconstruction using the acquired data. The invention makes it possible to acquire, view, manipulate, and interact with high-resolution three-dimensional images using a lightweight, portable device capable of handling occlusions and irregular surfaces, and yielding an image with a high degree of accuracy.
The invention preferably uses a handheld CCD camera integrated with a three-dimensional tracking device. As the user moves the camera over the desired object, sequential images are transmitted together with camera position and orientation data to the synthesizer.
Although described in terms of a three-dimensional camera and software geared towards the medical field, this system can be used in other areas as well. Graphic designers, architects and even home users would be able to take advantage of the graphics capabilities of this new, low-cost commercial computer imaging system.
In accordance with the invention, the above advantages are realized by a system using a hand-held three-dimensional camera device consisting of an inexpensive but high-resolution color CCD camera coupled with a six degree-of-freedom position/orientation tracking device.
The device of the invention uses an off-the-shelf, commercial, high-resolution color CCD camera for capturing image data. The tracking unit provides position and orientation information to a computer during the scanning process. The device is hand-held and is used by scanning or sweeping it over the region of interest on the patient. Acquisition software tracks the position and orientation of the camera device in real time during scanning. When appropriate, the acquisition software captures images and stores them, along with their coordinates, for later processing. In this manner, the capture process is fast, and lengthier postprocessing steps can be performed off-line. The combination of image data and position/orientation information will be referred to as a frame.
Because the system is merely capturing images, no active or structured lighting is used, and hence the system does not involve laser light and is nonhazardous. The capturing device of the invention is lightweight, non-contact, instantaneous, and easy to use.
The system of the invention is supported by a novel calculation and software package that is adapted to handle a surface with few landmarks and infrequent but present specular reflections, is very robust, and allows information capture even behind occlusions and inside concavities. The software and methodology in accordance with the invention consist of two phases. First, a real-time acquisition phase is used to capture and store images along with their position/orientation information. Next, a reconstruction phase uses the stored data to reconstruct a three-dimensional representation of the surface of the object of interest.
During the acquisition phase, the synthesizer captures images along with the position and orientation of the imaging device at the moment each image was acquired. In accordance with one refinement of the image capture control, the software of the synthesizer constantly examines the position and orientation information provided by the tracker system. It then calculates the projection pyramid from the camera toward the object and determines whether the camera is pointed in a new direction. A simple algorithm for implementing this is to calculate the intersection of perspective rays cast from the four corners of the image plane toward a tessellated great sphere surrounding the entire scene. After the points of intersection of the rays with the interior surface of the great sphere are calculated, one can inspect the polygons of the sphere that lie between the intersection points to see if they were previously marked as "viewed". If not, the acquisition software can capture an image and mark the polygons "viewed". In this way one can rapidly determine whether the imaging device is viewing in a novel direction. This also provides a means for trading off redundant information (to improve accuracy) against additional computational burden.
Another refinement that will be implemented in the acquisition software is a method to ameliorate any lag (end-to-end system latency) in the acquisition system. In order to accomplish this, the system will inspect the tracker information both immediately before and immediately after an image capture is triggered, and the software will interpolate the position and orientation values. Once the frames are captured, the reconstruction process can begin. This decoupling of the acquisition and reconstruction processes is necessary to allow the operator to scan the object of interest as quickly as possible, since the reconstruction algorithm may take some amount of time.
The basic algorithm proposed is called "Projected Tomography". It can be thought of as a generalization of the Volumes from Silhouettes approach or a generalization of filtered backprojection in computed tomography.
In filtered backprojection in computed tomography, each image that is captured around the object is projected back through a volume represented in the memory of the synthesizer. As each image is projected through the volume, the voxels that each pixel ray comes in contact with accumulate the value of the pixel. At the end of processing, the volume can be considered an accumulation of all the images projected through the volume and the locations of highest value are the locations of maximal correlation between the images. Note that a filtering function is also necessary to normalize the data.
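For reference, the planar computed tomography case just described can be sketched compactly; the following is an illustrative simplification (parallel-beam geometry, a ramp filter as the normalizing filtering function), with all names assumed for illustration rather than taken from the disclosure.

```python
import numpy as np

def filtered_backprojection_2d(sinogram, angles_rad):
    """Classic 2D filtered backprojection.

    sinogram   : (n_angles, n_det) array, one parallel projection per row
    angles_rad : projection angles corresponding to the rows
    Returns a square image estimate of side n_det.
    """
    n_angles, n_det = sinogram.shape
    # Ramp (Ram-Lak) filter applied per projection in the frequency domain
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

    size = n_det
    xs = np.arange(size) - size / 2.0
    X, Y = np.meshgrid(xs, xs, indexing="xy")
    recon = np.zeros((size, size))
    for sino_row, theta in zip(filtered, angles_rad):
        # detector coordinate of each image point for this projection angle
        t = X * np.cos(theta) + Y * np.sin(theta) + size / 2.0
        ti = np.clip(t.astype(int), 0, n_det - 1)
        recon += sino_row[ti]                    # accumulate (backproject) filtered values
    return recon * np.pi / (2 * n_angles)
```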
Projected tomography works in a related manner, except that images can be taken from any position or orientation and the imaging sensor is not required to remain in a single plane. Instead, the computer keeps track of where each image was captured and "backprojects" that image through a volume.
Note that this is related to the Volumes from Silhouettes method in that one could capture images from a lateral view around to a frontal view and, as more images were captured, the edges of the object (a breast in this example) would be clearly delineated within the volume due to the intersection of their projections. A significant issue is the calculation of the extent of the volume that has been scanned. This extent can be calculated by examining the intersection of the viewing pyramids from each point of view of the camera, as sketched below. By calculating the extent first, the system can be optimized to provide the highest resolution in the area of the object scanned.
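One coarse way such an extent could be estimated is sketched below: instead of intersecting the viewing pyramids exactly, the axis-aligned bounding box of each pyramid is intersected, which is a simplification. The field of view, aspect ratio, depth range, and pose layout are assumed values for illustration.

```python
import numpy as np

def frustum_corners(cam_pos, cam_rot, fov_deg, aspect, near, far):
    """World-space corners (8 points) of a camera viewing pyramid.
    cam_rot is a 3x3 camera-to-world rotation; the camera looks along local +z."""
    t = np.tan(np.radians(fov_deg) / 2.0)
    corners = []
    for d in (near, far):
        for sx in (-1, 1):
            for sy in (-1, 1):
                local = np.array([sx * t * aspect * d, sy * t * d, d])
                corners.append(cam_pos + cam_rot @ local)
    return np.array(corners)

def scan_extent(poses, fov_deg=40.0, aspect=4 / 3, near=0.05, far=1.0):
    """Approximate the scanned extent as the intersection of per-view frustum
    bounding boxes; poses is a list of (position, rotation) tuples."""
    lo = np.full(3, -np.inf)
    hi = np.full(3, np.inf)
    for pos, rot in poses:
        c = frustum_corners(np.asarray(pos, float), np.asarray(rot, float),
                            fov_deg, aspect, near, far)
        lo = np.maximum(lo, c.min(axis=0))       # shrink the common region
        hi = np.minimum(hi, c.max(axis=0))
    return lo, hi                                # empty intersection if any hi < lo
```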
Once the filtered backprojection algorithm completes, the volume representing the original data has been generated. This volume can then be used to generate a polygonal surface model of the object and may involve three-dimensional reconstruction and mesh generation from volumetric data.
The surface model is then visualized using traditional computer graphics techniques. In addition, existing quantitative tools are used to extract three-dimensional distance measurements, as well as surface and volume measurements of the imaged area. Note that in the area of breast reconstruction, volume measurements are particularly important for producing an optimal surgical result.
While the projected tomography algorithm could be used by itself to obtain volumetric information that leads to a surface, one may also use other techniques for verification and validation of the generated surface. Although the surface is assumed to be largely featureless, the images could be automatically examined using a simple feature extractor (a Laplacian operator for point information, for example) to determine whether features usable by a correlation-type algorithm are present. Should a feature be found in an image, the feature's location in three-dimensional space could be calculated by casting a ray from the feature in the image plane into the volume and calculating its intersection with the already reconstructed data. Then, one could also calculate where that feature should be located in other images. This would allow verification of the previously generated surface and can form a basis for refining the accuracy of the reconstructed surface. While one could not rely on stereo matching to work under nearly featureless conditions (and it might not yield a closed surface if it did), using it as a validation method is certainly warranted. Due to the difficulty of robust surface generation from multiple views, it may be necessary to take a multi-modal approach to combine the benefits and make up for the shortcomings of individual methods.
In addition to using stereo methods for validation, active lighting could also be used. The camera device can be outfitted with a scanning light that sweeps across the object and can be used for triangulation. The preferred implementation of this scheme would be a laser producing a stripe, projected onto a rotating, octagonally-faceted mirror. The rotational position of the mirror would be known to the system by means of an optical encoding system. Since the system knows what the rotation of the mirror surface would be, it can determine where the stripe of laser light would be reflected and the angle of projection onto the object. The system could quickly capture images, segment the stripe of laser light, and use the location of this stripe to determine the distance from the camera device to the surface being imaged via triangulation. In this manner, further validation of the reconstruction is performed.
By using the described acquisition system, together with projected tomography (optionally validated by stereo matching and/or active lighting), a complete, robust system is realized. It handles a surface without landmarks, yields texture information, and handles translucent objects. It does not suffer from concavity or occlusion issues because there is no fixed path for the camera to follow: the user may sweep behind objects that would otherwise be occluded or may look inside concave areas to further refine their representation within the model. Essentially, it breaks the limitation of the previous techniques by not requiring a predetermined viewpoint. Finally, the system is robust despite specular reflections. Because a specular highlight is a function of the light source and camera locations, the highlight will not be repeated at the identical point in space for other viewpoints and therefore presents less of an obstacle to true three-dimensional acquisition.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
Many advantages of the present invention will be apparent to those skilled in the art with a reading of this specification in conjunction with the attached drawings, wherein like reference numerals are applied to like elements and wherein:
FIG. 1 is a schematic view of a system in accordance with the preferred embodiment of the invention;
FIG. 2 is a schematic diagram illustrating the operation of the capture control algorithm in accordance with the invention;
FIG. 3 is a schematic diagram illustrating the operation of the image-based projected tomography process in accordance with the invention;
FIG. 4 is a schematic diagram illustrating the process of resolving a projected ray-voxel interaction in accordance with the invention;
FIG. 5 is a flow diagram of the image-based projected tomography process in accordance with the invention;
FIG. 6 is a schematic diagram illustrating the operation of the volume-based projected tomography process in accordance with the invention; and
FIG. 7 is a flow diagram of the volume-based projected tomography process in accordance with the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In accordance with the invention, a hand-held imaging device such as an optical camera is manipulated over a three-dimensional object of interest and captures a sequence of images of the object from different locations during an image acquisition stage. Each captured image is associated with a specific location of the imaging device, which location is determined using a tracking system. The combination of an image and an associated imaging device location is referred to as a frame. The frames, thus consisting of image data and associated position data, are compiled and stored in a synthesizer system which is in wireless or cable communication with the imaging device and the tracker system.
Following the image acquisition stage, a reconstruction stage can be selectively implemented. The reconstruction stage involves using frame information to reconstruct a representation of the object of interest for display and/or manipulation purposes. Several approaches can be taken to implement the reconstruction, depending on the particular application, the computational resources available and the time constraints imposed. In accordance with a first, image-based approach, each captured image is projected mathematically through a predetermined volume and a weight value is assigned to each voxel in the region of intersection of the image projection and the predetermined volume. The weight value is dependent upon the information contained in each pixel of the projected image. In accordance with a second, volume-based approach, each voxel in the predetermined volume is analyzed to determine which captured image projection intersects therewith. A value is assigned to each voxel reflecting the pixel value of the images whose projections through the volume encompass that voxel.
FIG. 1 schematically shows the operation of a system and method in accordance with the invention. During the image acquisition stage, a hand-held imaging device, such as optical camera 30, is manually scanned over the object of interest 40, and images of the object of interest 40 are captured by the imaging device from a plurality of positions in three-dimensional space. Four of these positions (A, B, C and D) are shown, but it is to be understood that more or fewer image capture positions can be used.
The images are relayed, wirelessly or via a cable 34, to a synthesizer 50. Each image captured by camera 30 and relayed to synthesizer 50 corresponds to a discrete position (A, B, C, D, etc.) of the camera. Camera 30 preferably comprises a digital, high-resolution color CCD imager generating a signal pattern representative of the captured image as detected by a two-dimensional pixel array of photosensitive devices. One such camera is the Pixera Corporation's Professional Series™ camera. This camera provides scalable resolution (up to 1260 x 960), full color depth (24-bit), a large range of focus (1 inch to infinity, focal length approximately 5.1 mm, aperture f/2.1) with built-in macro, variable shutter speeds (1/1000 sec to 1/16 sec), small size (4 x 4.3 x 1.3 inches), light weight (5.5 oz), full software support (Win95/NT SDK), and fast image transfer by way of PCMCIA card or PCI bus interfaces. Although operating in the visible optical spectrum, it is contemplated that camera 30 can be of the type operating in the infrared region. Alternatively, acoustic/ultrasonic, X-ray, microwave and other related imaging devices are also contemplated for use with the invention. Moreover, the invention is not limited to CCD cameras, as any type of solid-state imaging camera can be used, including but not limited to CID, CMOS and vidicon devices. Additionally, photographic film devices can be used as well.
The image capture positions of camera 30 are determined analytically by using a suitable tracking device. The tracking device operates to provide six-degree-of-freedom position and orientation information of camera 30 at least at the image capture positions. Tracking devices are known in the art and come in several types, including for example electromagnetic, acoustic, mechanical and optical systems, each with its attendant advantages and shortcomings. With a sufficiently robust software support structure as used in the present invention, many of these shortcomings can be overcome, thereby expanding the range of available tracking systems usable with the invention.
In the preferred embodiment, an optical tracking system is utilized and is exemplarily represented by a trio of LEDs 32a, 32b and 32c rigidly attached to camera 30 and in optical communication with photosensors 62a, 62b, 62c and 62d disposed on assemblies 60 and 61. The photosensors 62a, 62b, 62c and 62d operate to spatially localize LEDs 32a, 32b and 32c, and hence camera 30, and thereby provide information containing position and orientation components. Although two assemblies, 60 and 61, are shown, a different number of assemblies and photosensors can be used, and the relative location of the assemblies to the object of interest 40 and the camera 30 can be different from that schematically illustrated in FIG. 1 by way of example only. Other types of trackers which can be used with the invention are electromagnetic, mechanical and acoustic trackers, as mentioned above. Additionally, the tracker device, when of the optical type, can operate in the visible or the infrared wavelength range, wherein in the latter case LEDs 32a, 32b and 32c are infrared emitters to which photosensors 62a, 62b, 62c and 62d are selected to be sensitive.
In the preferred embodiment, the tracking system used is an optical system, and one preferred product for use as the tracking device, providing high accuracy, stability, resolution, and usability, is the FlashPoint5000™ from ImageGuided™ Technologies. This six-degree-of-freedom optical tracker provides high-accuracy, low-latency tracking. In addition, it works well in environments that include metal (in contrast to electromagnetic trackers), adds almost no weight to the tracked object (that is, the camera 30), and is very easy to set up and use.
Despite specifying this highly accurate tracking system, it will be appreciated that other tracking systems can be used without inventive departure from the spirit and scope of the invention. In particular, the software used in the system of the invention can be expanded to accommodate different trackers, taking advantage of their strengths while compensating for their weaknesses. In this manner, the ability to use a lower-cost tracking device, for example, is provided. The positional information associated with each captured image is determined by, for example, triangulation techniques performed either in the assemblies 60 and 61 or in synthesizer 50. The actual operation of the tracking system, which localizes camera 30 along six degrees of freedom, does not form a part of this invention, and details thereof will accordingly be omitted. Data representative of camera position during the capture of each image is relayed to CPU (central processing unit) 52 of synthesizer 50 wirelessly or via cables 66 and 68. The data is then stored in a memory 54 in the form of frames, each comprising captured image information along with the position information associated with that image. Synthesizer 50 comprises any known processing system, for example a PC (personal computer) having an associated input device such as keyboard 58 and a display device such as CRT (cathode ray tube) 56. Memory 54 can be any known electronic information storage device such as RAM (random access memory) or a persistent storage device such as magnetic disks, tapes or optical disks.
The number of images captured as camera 30 is scanned over the object of interest 40 can be determined by the operator, for example by manual operation of a shutter switch (not shown) of the camera 30. Preferably, however, the number of images captured by camera 30 is determined by synthesizer 50 based on a computational approach designed to optimize system performance.
Using appropriate software, synthesizer 50 constantly examines the position and orientation information provided by the tracking system. Using known parameters of camera 30 (for example, its field of view and range), it is determined at each new position of the camera during the manual scan whether, and to what extent, the camera is pointed in a new direction. A redundancy threshold is applied, such that a new image is captured only when the threshold is not exceeded; when the threshold is exceeded (that is, when there is excessive redundancy between an image to be captured at the position under consideration and previously captured images), no image is captured. The redundancy threshold is determined with a view towards the tradeoff between the improved accuracy provided by redundant information and the additional computational burden.
The algorithm to implement the image capture control, described with reference to FIG. 2, calculates for each captured image the intersection of perspective rays (22a, 22b) cast from the four corners of the image plane (20a, 20b) toward a tessellated great sphere 24 surrounding the entire scene. The polygons (26a, 26b) of the great sphere that lie between the intersection points of the rays with the sphere are inspected to determine if they were previously marked as viewed, and an image is captured depending on this determination. Only when a previously unviewed determination is made, subject to the redundancy requirement discussed above, is an image captured.
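By way of illustration only, the following sketch captures the spirit of this capture-control test under a simplifying assumption: rather than intersecting the four corner rays with the great sphere, the camera is assumed to remain near the sphere's centre so that sphere facets can be tested against the viewing cone. The icosahedron tessellation, cone half-angle, and novelty threshold are illustrative choices, not those of the invention.

```python
import numpy as np

def icosahedron_faces():
    """Vertices and faces of an icosahedron serving as a coarse tessellated sphere."""
    phi = (1 + 5 ** 0.5) / 2
    v = np.array([[-1, phi, 0], [1, phi, 0], [-1, -phi, 0], [1, -phi, 0],
                  [0, -1, phi], [0, 1, phi], [0, -1, -phi], [0, 1, -phi],
                  [phi, 0, -1], [phi, 0, 1], [-phi, 0, -1], [-phi, 0, 1]], float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    f = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
         (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
         (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
         (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
    return v, np.array(f)

class CaptureController:
    """Trigger a capture only when the camera looks toward enough unviewed facets."""
    def __init__(self, half_angle_deg=20.0, min_new_fraction=0.5):
        self.verts, self.faces = icosahedron_faces()
        self.centroids = self.verts[self.faces].mean(axis=1)
        self.centroids /= np.linalg.norm(self.centroids, axis=1, keepdims=True)
        self.viewed = np.zeros(len(self.faces), dtype=bool)
        self.cos_half = np.cos(np.radians(half_angle_deg))
        self.min_new_fraction = min_new_fraction

    def should_capture(self, view_dir):
        view_dir = np.asarray(view_dir, float)
        view_dir = view_dir / np.linalg.norm(view_dir)
        in_cone = self.centroids @ view_dir >= self.cos_half   # facets inside the cone
        if not in_cone.any():
            return False
        new_fraction = (~self.viewed[in_cone]).mean()          # fraction not yet "viewed"
        if new_fraction > self.min_new_fraction:
            self.viewed[in_cone] = True                        # mark facets and capture
            return True
        return False
```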
Synthesizer 50 also addresses the issue of any lag, or end-to-end system latency in the image acquisition process. To accomplish this, synthesizer 50 inspects the tracker information both immediately before and immediately after an image capture is triggered and interpolates the position and orientation values derived, thereby retaining a more accurate assessment of the actual position and orientation of the camera 30. Problems of latency are particularly acute when non-optical tracking systems are used and the need to perform the interpolation calculations becomes more pressing.
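A minimal sketch of this lag compensation is given below, assuming timestamped tracker samples with quaternion orientations (the data layout is an assumption, not specified by the disclosure): position is interpolated linearly and orientation by spherical linear interpolation between the samples bracketing the capture trigger.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0:                                  # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                             # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_pose(before, after, t_capture):
    """Estimate the camera pose at the capture instant from the tracker samples
    taken immediately before and immediately after the capture trigger.

    before/after : (timestamp, position 3-vector, orientation quaternion)
    """
    t0, p0, q0 = before
    t1, p1, q1 = after
    t = 0.0 if t1 == t0 else (t_capture - t0) / (t1 - t0)
    position = (1 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)
    orientation = slerp(q0, q1, t)
    return position, orientation
```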
After the acquisition phase is completed and the images and their associated position and orientation information are stored in memory 54, the reconstruction phase can commence. It is preferable that the reconstruction phase be decoupled from the acquisition phase in order to permit the scanning and data compilation of the acquisition phase to be completed as quickly as possible, without introducing delays associated with the reconstruction phase. This is important for example in medical applications, with the object of interest 40 being a patient or an organ whose prolonged immobilization may be problematic.
The reconstruction phase can be effected using one of two possible approaches described herein, although it is to be understood that other approaches are also feasible and fall within the purview of the present invention. The first approach, referred to as an image-based projected tomography process, is described with reference to FIGS. 3-5 and begins by using the number of frames acquired during the acquisition phase to construct a hypothetical bounding box, or reconstruction volume 36, encompassing the three-dimensional image of the object of interest being reconstructed (FIG. 5, step 100). Alternatively, the reconstruction volume 36 can be constructed based on user-selectable information, wherein the user determines the parameters of the reconstruction volume 36 based on experience and a knowledge of, for example, the size of the area scanned and the object of interest 40 contained therein. A memory allocation is then made depending on the anticipated memory requirements of the reconstruction process (step 110). The memory allocation is also a function of user-selectable parameters, including, for example, desired resolution and color versus gray-scale representation. These parameters can be preprogrammed, or entered through keyboard 58 or through a remote connection (not shown).
Subsequently, and for each image (designated as 38 in FIG. 3 and representative of the corresponding two-dimensional array of pixels of CCD camera 30), a determination is made of the position of a frustum, or volume 42, occupied by a projection 44 of the image through the reconstruction volume 36 (steps 120, 130 and 140). The boundaries of projection 44 are calculated in accordance with known characteristics of the camera 30, and especially the field of view of the camera and the various light-shaping optics utilized thereby. Angle calculations are made indicating the roll, pitch and yaw of the projection 44 through the reconstruction volume 36 from the particular position and orientation associated with the image 38 (step 140). A sequence of iterative computations is then performed for each pixel 46 in the image 38 (steps 150, 160), whereby a hypothetical ray 48 is projected from the pixel through the volume 42 (step 170) and a determination is made of which voxels in the volume 42 the ray 48 intersects (step 180). Each intersected voxel (64a, 64b, 64c) is then assigned a value corresponding to the value of the pixel 46 from which the ray 48 emanated (step 190).
The above process is performed for all the captured images, with the value of each intersected voxel being accumulated in accordance with the value of the pixels whose projected rays intersect the voxel. The procedure is exited at step 135. In this manner a representation in three-dimensional space of the object of interest is derived, with the voxels having the highest cumulative value defining the surface of the represented object.
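The per-image pass just described can be sketched as follows, by way of illustration only; a simple pinhole model (camera looking along its local +z axis) and a fixed-step ray march stand in for the exact camera model and voxel traversal, and all parameter names are assumptions.

```python
import numpy as np

def backproject_image(volume, image, cam_pos, cam_rot, focal_px,
                      vox_origin, vox_size, max_range=1.0, step=None):
    """Accumulate one captured image into the reconstruction volume (image-based pass).

    volume     : (Nx, Ny, Nz) float accumulator, modified in place
    image      : (H, W) intensity values (or one colour channel)
    cam_pos    : camera centre in world coordinates
    cam_rot    : 3x3 rotation, camera-to-world
    focal_px   : focal length in pixels (pinhole model)
    vox_origin : world position of the corner of voxel (0, 0, 0)
    vox_size   : edge length of a voxel
    """
    if step is None:
        step = vox_size * 0.5                    # march at half-voxel steps
    h, w = image.shape
    n_steps = int(max_range / step)
    for v in range(h):
        for u in range(w):
            # ray through pixel (u, v): camera coordinates, then rotated to world
            d = np.array([u - w / 2.0, v - h / 2.0, focal_px], float)
            d = cam_rot @ (d / np.linalg.norm(d))
            for k in range(n_steps):
                p = cam_pos + k * step * d
                idx = np.floor((p - vox_origin) / vox_size).astype(int)
                if np.all(idx >= 0) and np.all(idx < volume.shape):
                    # adjacent steps may revisit a voxel; a production version
                    # would deduplicate or weight the contributions
                    volume[tuple(idx)] += image[v, u]
```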
It should be noted that the intersection between the projected rays and the voxels is not necessarily an all-or-nothing affair. Rather, depending on the computational resources dedicated to the calculation, these intersections can be further resolved and an interpolation process can be applied, as described with reference to FIG. 4. With the voxels (64n, 64m) each being considered to occupy a finite volume, the position (72n, 72m) on a surface (74n, 74m) of that volume at which the projected ray 48 impinges can be further resolved to determine the weight which the pixel (46) corresponding to that ray will receive. In assigning a value to a voxel based on the value of the pixel whose projected ray impinges the voxel, a pixel whose ray impinges a voxel squarely (voxel 64n) will be weighted maximally, whereas a pixel whose projected ray barely grazes a voxel (voxel 64m) will be weighted minimally, thereby providing a more accurate representation of the object of interest. One of ordinary skill in the art will recognize that other interpolation techniques can also be applied, including linear, cubic or higher-order interpolation, as well as nearest-neighbor techniques in which the value of a pixel whose projected ray impinges one voxel is distributed to neighboring voxels as well as to the impinged voxel.
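One such weighting scheme, trilinear splatting, is sketched below for illustration: rather than crediting a single voxel with the full pixel value, a ray sample's contribution is spread over the eight surrounding voxels in proportion to how squarely the sample lands in each. The names are illustrative.

```python
import numpy as np

def splat_trilinear(volume, point, value, vox_origin, vox_size):
    """Distribute `value` over the eight voxels surrounding `point`, weighted
    by how close the point lies to each voxel centre (trilinear weights)."""
    g = (np.asarray(point, float) - vox_origin) / vox_size - 0.5   # grid coordinates
    base = np.floor(g).astype(int)
    frac = g - base
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                idx = base + np.array([dx, dy, dz])
                if np.any(idx < 0) or np.any(idx >= volume.shape):
                    continue
                w = ((frac[0] if dx else 1 - frac[0]) *
                     (frac[1] if dy else 1 - frac[1]) *
                     (frac[2] if dz else 1 - frac[2]))
                volume[tuple(idx)] += w * value                     # weighted deposit
```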
The second approach which can be taken for the reconstruction phase, referred to as a volume-based projected tomography process, is described with reference to FIGS. 6 and 7 and conceptually operates as the reverse of the above image-based projected tomography process. Specifically, in accordance with the volume-based process, synthesizer 50 individually polls each voxel 64d (steps 200, 210 of FIG. 7) in the reconstruction volume 36 to determine which frustums, or volumes 42, encompass the polled voxel (step 220). The image associated with each such volume (steps 230, 240) is then analyzed to isolate the particular pixel 46, in the image 38, whose projected ray 48 impinges the voxel (step 250). This analysis can be performed by, for instance, calculating the relative X and Y positions of the polled voxel 64d in the volume 42 along a plane 76 parallel to the image, and then determining a proportional (X', Y') location on the image. The proportional location corresponds to the location of the desired pixel 46, and the value of that pixel is used as the contribution which is assigned to the polled voxel 64d (step 260). The analysis is performed iteratively to isolate all the pixels of all the images whose projected volumes 42 encompass the polled voxel 64d, and to apply the contributions of these pixels cumulatively to the value of the polled voxel 64d. Then, after all the voxels of the volume 36 are thus polled, a representation in three-dimensional space of the object of interest is reconstructed, wherein the voxels having the highest cumulative value define the surface of the represented object. The procedure is exited at step 215, after the last voxel in the reconstruction volume 36 has been polled.
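For illustration, the volume-based (per-voxel) pass can be sketched as a gather that mirrors the image-based sketch above: each voxel centre is projected into every frame whose frustum contains it and accumulates the pixel value at the projected location. The frame tuple layout and pinhole convention are assumptions, not part of the disclosure.

```python
import numpy as np

def gather_volume(volume_shape, frames, vox_origin, vox_size):
    """Volume-based pass: poll every voxel and gather pixel values from all
    frames whose projection covers it.

    frames : list of (image, cam_pos, cam_rot, focal_px) tuples,
             where cam_rot is a 3x3 world-to-camera rotation
    """
    volume = np.zeros(volume_shape, float)
    for idx in np.ndindex(*volume_shape):
        centre = vox_origin + (np.array(idx) + 0.5) * vox_size
        for image, cam_pos, cam_rot, focal_px in frames:
            pc = cam_rot @ (centre - cam_pos)    # voxel centre in camera coordinates
            if pc[2] <= 0:                       # behind the camera: not visible
                continue
            h, w = image.shape
            u = focal_px * pc[0] / pc[2] + w / 2.0
            v = focal_px * pc[1] / pc[2] + h / 2.0
            if 0 <= u < w and 0 <= v < h:        # inside this frame's frustum
                volume[idx] += image[int(v), int(u)]
    return volume
```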
In accordance with the invention, the above procedures can be augmented with a validation procedure, which can entail the use of stereo matching methods as discussed above, or the use of active lighting methods. For active lighting validation, the camera device 30 can be outfitted with a scanning light (not shown) that sweeps across the object and is used for triangulation. The preferred implementation of this scheme would be a laser producing a stripe, projected onto a rotating, octagonally-faceted mirror. The rotational position of the mirror would be known to the system by means of an optical encoding system. Since the system knows what the rotation of the mirror surface is, it can determine where the stripe of laser light would be reflected and the angle of projection onto the object. The system could quickly capture images, segment the stripe of laser light, and use the location of this stripe to determine the distance from the camera device to the surface being imaged via triangulation.
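The triangulation behind this active-lighting validation can be illustrated, reduced to a single laser plane: given the mirror-derived plane angle, the camera-to-laser baseline, and the image column at which the stripe is segmented, depth follows from intersecting the pixel ray with the light plane. The geometry (camera looking along +z, laser offset along +x) and all names are assumptions made only for this sketch.

```python
import numpy as np

def stripe_depth(u_pixels, theta_rad, baseline_m, focal_px, cx):
    """Depth along the camera axis where the laser stripe is detected.

    u_pixels   : image columns at which the stripe was segmented
    theta_rad  : laser-plane angle from the optical axis, tilted toward the camera
    baseline_m : distance between the camera centre and the laser/mirror origin
    Derivation: camera ray x = z*(u - cx)/f meets laser ray x = b - z*tan(theta),
    giving z = b / ((u - cx)/f + tan(theta)).
    """
    u = np.asarray(u_pixels, float)
    denom = (u - cx) / focal_px + np.tan(theta_rad)
    with np.errstate(divide="ignore"):
        z = baseline_m / denom
    return np.where(np.abs(denom) > 1e-9, z, np.inf)

def segment_stripe(image, threshold=200):
    """Crude stripe segmentation: brightest column above threshold in each row."""
    cols = image.argmax(axis=1)
    valid = image[np.arange(image.shape[0]), cols] >= threshold
    return np.where(valid, cols, -1)             # -1 where no stripe was found
```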
With the completion of the reconstruction stage using either the image-based or the volume-based projected tomography approach, the operator has at his disposal information representative of the object of interest in three-dimensional space. Depending on the application, the operator can then display this information on CRT 56, using known segmentation or other processes to build a direct surface by, for example, thresholding. The operator can also interact with and modify the information, for example for education or prognosis. The information can also be transmitted to a remote location for analysis using equipment different from synthesizer 50 or by remote personnel connected via the internet.
The synthesizer 50 preferably comprises a Pentium II-based workstation with 256MB of memory, a 9GB disk drive, and a high-end graphics card with significant texture memory. This workstation will drive real-time acquisition as well as three-dimensional reconstruction. During acquisition, the live image from the camera can be displayed on CRT 56, the workstation screen, so that the user is aware of what is being digitized. As outlined above, the acquisition software will evaluate the projection of the camera device in real time and capture images when appropriate.
After reconstruction, a three-dimensional, texture-mapped volume rendering of the data will be displayed on CRT 56. Then, a simple segmentation will be performed using three-dimensional morphological operators and a mesh will be generated by an improved Marching Cubes algorithm. The mesh will then be rendered on CRT 56 and the user will be able to interactively visualize the data. This mesh will provide the basis for later work on surgical planning.
For application to breast reconstruction, an accuracy of approximately 1-2 mm is required. The invention achieves accuracy on the order of 1 mm and possibly sub-millimeter accuracy. In addition, the reconstruction process requires less than 10 minutes of processing time on currently available hardware.
The above are exemplary modes of carrying out the invention and are not intended to be limiting. It will be apparent to one of ordinary skill in the art that modifications thereto can be made without inventive departure from the spirit and scope of the invention as set forth in the following claims.

Claims

1. An imaging system comprising:
an imaging device for acquiring images of an object from capture positions associated with each image;
a tracker for providing position information indicative of the position of the imaging device; and
a synthesizer for constructing a three-dimensional representation of the object from the images and the position information, the synthesizer operating to predetermine the capture positions based on the position information.
2. The imaging system of Claim 1, wherein the tracker provides six degree of freedom imaging device position and orientation information.
3. The imaging system of Claim 2, wherein the tracker is selected from the group consisting of electromagnetic, mechanical, acoustic and optical trackers.
4. The imaging system of Claim 1, wherein the imaging device is selected from the group consisting of X-ray, microwave and infrared devices.
5. The imaging system of Claim 1, wherein the capture positions are predetermined by the synthesizer in accordance with an image capture control algorithm whereby images to be captured are compared with previously captured images and subjected to a redundancy threshold determination.
6. The imaging system of Claim 1, wherein the synthesizer compensates for lag time by performing imaging device position and orientation interpolation calculations.
7. The imaging system of Claim 1, wherein the synthesizer performs image-based projected tomography to reconstruct a three-dimensional representation of the object of interest.
8. The imaging system of Claim 1, wherein the synthesizer performs volume-based projected tomography to reconstruct a three-dimensional representation of the object of interest.
9. An imaging system comprising:
an imaging device for acquiring images of an object from capture positions associated with each image;
a tracker for providing position information indicative of the position of the imaging device; and
a synthesizer for constructing a three-dimensional representation of the object from the images and the position information using an image-based projected tomography process.
10. The imaging system of Claim 9, wherein the tracker provides six degree of freedom imaging device position and orientation information.
11. The imaging system of Claim 10, wherein the tracker is selected from the group consisting of electromagnetic, mechanical, acoustic and optical trackers.
12. The imaging system of Claim 9, wherein the imaging device is selected from the group consisting of X-ray, microwave and infrared devices.
13. The imaging system of Claim 9, wherein the capture positions are predetermined by the synthesizer in accordance with an image capture control algorithm whereby images to be captured are compared with previously captured images and subjected to a redundancy threshold determination.
14. The imaging system of Claim 9, wherein the synthesizer compensates for lag time by performing imaging device position and orientation interpolation calculations.
15. The imaging system of Claim 9, wherein the synthesizer performs volume-based projected tomography to reconstruct a three-dimensional representation of the object of interest.
16. An imaging system comprising:
an imaging device for acquiring images of an object from capture positions associated with each image;
a tracker for providing position information indicative of the position of the imaging device; and
a synthesizer for constructing a three-dimensional representation of the object from the images and the position information using a volume-based projected tomography process.
17. The imaging system of Claim 16, wherein the tracker provides six degree of freedom imaging device position and orientation information.
18. The imaging system of Claim 17, wherein the tracker is selected from the group consisting of electromagnetic, mechanical, acoustic and optical trackers.
19. The imaging system of Claim 16, wherein the imaging device is selected from the group consisting of X-ray, microwave and infrared devices.
20. The imaging system of Claim 16, wherein the capture positions are predetermined by the synthesizer in accordance with an image capture control algorithm whereby images to be captured are compared with previously captured images and subjected to a redundancy threshold determination.
21. The imaging system of Claim 16, wherein the synthesizer compensates for lag time by performing imaging device position and orientation interpolation calculations.
22. The imaging system of Claim 16, wherein the synthesizer performs image-based projected tomography to reconstruct a three-dimensional representation of the object of interest.
23. An imaging system comprising:
a hand-held imaging device for acquiring images of an object from capture positions associated with each image, the capture positions being disposed along a manual scan path;
a tracker for providing position information indicative of the position of the imaging device along the scan path; and
a synthesizer for constructing a three-dimensional representation of the object from the images and the position information.
24. The imaging system of Claim 23, wherein the tracker provides six degree of freedom imaging device position and orientation information.
25. The imaging system of Claim 24, wherein the tracker is selected from the group consisting of electromagnetic, mechanical, acoustic and optical trackers.
26. The imaging system of Claim 23, wherein the imaging device is selected from the group consisting of X-ray, microwave and infrared devices.
27. The imaging system of Claim 23, wherein the capture positions are predetermined by the synthesizer in accordance with an image capture control algorithm whereby images to be captured are compared with previously captured images and subjected to a redundancy threshold determination.
28. The imaging system of Claim 23, wherein the synthesizer compensates for lag time by performing imaging device position and orientation interpolation calculations.
29. The imaging system of Claim 23, wherein the synthesizer performs image-based projected tomography to reconstruct a three-dimensional representation of the object of interest.
30. The imaging system of Claim 23, wherein the synthesizer performs volume-based projected tomography to reconstruct a three-dimensional representation of the object of interest.
31. A method for constructing a three-dimensional representation of an object of interest, the method comprising:
acquiring at least one image of the object by manually scanning an imaging device along a scan path;
generating position information indicative of the position of the imaging device along the scan path; and
constructing the three-dimensional representation of the object of interest using the acquired images and the generated position information.
32. The method of Claim 31, wherein the position information is indicative of position and orientation of the imaging device along six degrees of freedom.
33. The method of Claim 31, further comprising the step of predetermining positions from which images are acquired in accordance with an image capture control algorithm whereby images to be acquired are compared with previously acquired images and subjected to a redundancy threshold determination.
34. The method of Claim 31, further comprising the step of compensating for lag time by performing imaging device position and orientation interpolation calculations.
35. The method of Claim 31, wherein the step of constructing comprises performing image-based projected tomography to reconstruct a three-dimensional representation of the object of interest.
36. The method of Claim 31, wherein the step of constructing comprises performing volume-based projected tomography to reconstruct a three-dimensional representation of the object of interest.
37. The method of Claim 31, further comprising the step of displaying the three-dimensional representation.
38. The method of Claim 31, further comprising the step of transmitting the three-dimensional representation to a remote location.
39. The method of Claim 31, further comprising the step of validating using stereo matching methods.
40. The method of Claim 31, further comprising the step of validating using active lighting methods.
PCT/US1999/027615 1998-11-19 1999-11-18 Three-dimensional handheld digital camera for medical applications WO2000030337A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US19561298A 1998-11-19 1998-11-19
US09/195,612 1998-11-19

Publications (1)

Publication Number Publication Date
WO2000030337A2 true WO2000030337A2 (en) 2000-05-25

Family

ID=22722061

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/027615 WO2000030337A2 (en) 1998-11-19 1999-11-18 Three-dimensional handheld digital camera for medical applications

Country Status (1)

Country Link
WO (1) WO2000030337A2 (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9939911B2 (en) 2004-01-30 2018-04-10 Electronic Scripting Products, Inc. Computer interface for remotely controlled objects and wearable articles with absolute pose detection component
US10191559B2 (en) 2004-01-30 2019-01-29 Electronic Scripting Products, Inc. Computer interface for manipulated objects with an absolute pose detection component
WO2007030026A1 (en) * 2005-09-09 2007-03-15 Industrial Research Limited A 3d scene scanner and a position and orientation system
US20090323121A1 (en) * 2005-09-09 2009-12-31 Robert Jan Valkenburg A 3D Scene Scanner and a Position and Orientation System
US8625854B2 (en) 2005-09-09 2014-01-07 Industrial Research Limited 3D scene scanner and a position and orientation system
US9955910B2 (en) 2005-10-14 2018-05-01 Aranz Healthcare Limited Method of monitoring a surface feature and apparatus therefor
US8755053B2 (en) 2005-10-14 2014-06-17 Applied Research Associates Nz Limited Method of monitoring a surface feature and apparatus therefor
US10827970B2 (en) 2005-10-14 2020-11-10 Aranz Healthcare Limited Method of monitoring a surface feature and apparatus therefor
WO2008039539A3 (en) * 2006-09-27 2008-09-04 Georgia Tech Res Inst Systems and methods for the measurement of surfaces
WO2008039539A2 (en) * 2006-09-27 2008-04-03 Georgia Tech Research Corporation Systems and methods for the measurement of surfaces
US9179844B2 (en) 2011-11-28 2015-11-10 Aranz Healthcare Limited Handheld skin measuring or monitoring device
US9861285B2 (en) 2011-11-28 2018-01-09 Aranz Healthcare Limited Handheld skin measuring or monitoring device
US10874302B2 (en) 2011-11-28 2020-12-29 Aranz Healthcare Limited Handheld skin measuring or monitoring device
US11850025B2 (en) 2011-11-28 2023-12-26 Aranz Healthcare Limited Handheld skin measuring or monitoring device
US10013527B2 (en) 2016-05-02 2018-07-03 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
US11250945B2 (en) 2016-05-02 2022-02-15 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
US10777317B2 (en) 2016-05-02 2020-09-15 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
US11923073B2 (en) 2016-05-02 2024-03-05 Aranz Healthcare Limited Automatically assessing an anatomical surface feature and securely managing information related to the same
US11577159B2 (en) 2016-05-26 2023-02-14 Electronic Scripting Products Inc. Realistic virtual/augmented/mixed reality viewing and interactions
US11116407B2 (en) 2016-11-17 2021-09-14 Aranz Healthcare Limited Anatomical surface assessment methods, devices and systems
US11903723B2 (en) 2017-04-04 2024-02-20 Aranz Healthcare Limited Anatomical surface assessment methods, devices and systems

Similar Documents

Publication Publication Date Title
Carceroni et al. Multi-view scene capture by surfel sampling: From video streams to non-rigid 3D motion, shape and reflectance
Wang et al. DeepOrganNet: on-the-fly reconstruction and visualization of 3D/4D lung models from single-view projections by deep deformation network
Schmalz et al. An endoscopic 3D scanner based on structured light
US20170135655A1 (en) Facial texture mapping to volume image
US20050089213A1 (en) Method and apparatus for three-dimensional modeling via an image mosaic system
US20170337732A1 (en) Human Body Representation With Non-Rigid Parts In An Imaging System
US20090010507A1 (en) System and method for generating a 3d model of anatomical structure using a plurality of 2d images
JP2004534584A (en) Image processing method for interacting with 3D surface displayed on 3D image
US20220183775A1 (en) Method for guiding a robot arm, guiding system
WO2010081094A2 (en) A system for registration and information overlay on deformable surfaces from video data
Macedo et al. A semi-automatic markerless augmented reality approach for on-patient volumetric medical data visualization
Chernov et al. 3D dynamic thermography system for biomedical applications
Weng et al. Three-dimensional surface reconstruction using optical flow for medical imaging
WO2000030337A2 (en) Three-dimensional handheld digital camera for medical applications
CN108885797B (en) Imaging system and method
Tokgozoglu et al. Color-based hybrid reconstruction for endoscopy
US20230206561A1 (en) Method for generating a three-dimensional working surface of a human body, system
Ben-Hamadou et al. Construction of extended 3D field of views of the internal bladder wall surface: A proof of concept
Decker et al. Performance evaluation and clinical applications of 3D plenoptic cameras
De Oliveira An affordable and practical 3d solution for the aesthetic evaluation of breast cancer conservative treatment
Stoyanov et al. Current issues of photorealistic rendering for virtual and augmented reality in minimally invasive surgery
Chan et al. Using game controller as position tracking sensor for 3D freehand ultrasound imaging
KR102007316B1 (en) Apparatus and method of producing three dimensional image of orbital prosthesis
Conen et al. Development and evaluation of a miniature trinocular camera system for surgical measurement applications
Lacher 3D breast surface reconstructions from consumer-grade RGB-D cameras

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): JP KR

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
WA Withdrawal of international application