WO2020140044A1 - Generation of synthetic three-dimensional imaging from partial depth maps - Google Patents

Generation of synthetic three-dimensional imaging from partial depth maps

Info

Publication number
WO2020140044A1
WO2020140044A1 (PCT/US2019/068760)
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
computer program
program product
model
anatomical structure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2019/068760
Other languages
English (en)
French (fr)
Inventor
Vasiliy Evgenyevich BUHARIN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Activ Surgical Inc
Original Assignee
Activ Surgical Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Activ Surgical Inc filed Critical Activ Surgical Inc
Priority to EP19905077.4A priority Critical patent/EP3903281A4/en
Priority to CA3125288A priority patent/CA3125288A1/en
Priority to CN201980093251.XA priority patent/CN113906479A/zh
Priority to KR1020217024095A priority patent/KR20210146283A/ko
Priority to JP2021537826A priority patent/JP2022516472A/ja
Publication of WO2020140044A1 publication Critical patent/WO2020140044A1/en
Priority to US17/349,713 priority patent/US20220012954A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70: Denoising; Smoothing
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75: Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/24: Indexing scheme for image data processing or generation, in general, involving graphical user interfaces [GUIs]
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10004: Still image; Photographic image
    • G06T2207/10028: Range image; Depth image; 3D point clouds
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30004: Biomedical image processing
    • G06T2207/30084: Kidney; Renal
    • G06T2210/00: Indexing scheme for image generation or computer graphics
    • G06T2210/41: Medical
    • G06T2210/56: Particle system, point based geometry or rendering
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20: Indexing scheme for editing of 3D models
    • G06T2219/2004: Aligning objects, relative positioning of parts
    • G06T2219/2016: Rotation, translation, scaling

Definitions

  • Embodiments of the present disclosure relate to synthetic three-dimensional imaging, and more specifically, to generation of synthetic three-dimensional imaging from partial depth maps.
  • a method is performed where an image of an anatomical structure is received from a camera.
  • a depth map corresponding to the image is received from a depth sensor that may be a part of the camera or separate from the camera.
  • a point cloud corresponding to the anatomical structure is generated based on the depth map and the image.
  • the point cloud is rotated in space.
  • the point cloud is rendered.
  • the rendered point cloud is displayed to a user.
  • the point cloud is a preliminary point cloud.
  • the preliminary point cloud is registered with a model of the anatomical structure.
  • an augmented point cloud is generated from the preliminary point cloud and the model.
  • the augmented point cloud is rotated in space, rendered, and displayed to the user.
  • an indication is received from the user to further rotate the augmented point cloud, the augmented point cloud is rotated in space according to the indication, the augmented point cloud is rendered after further rotating, and the rendered augmented point cloud is displayed to the user after further rotating.
  • the camera includes the depth sensor. In various embodiments, the camera is separate from the depth sensor. In various embodiments, the depth sensor includes a structured light sensor and a structured light projector. In various embodiments, the depth sensor comprises a time-of-flight sensor. In various embodiments, the depth map is determined from a single image frame. In various embodiments, the depth map is determined from two or more image frames.
  • the method further includes generating a surface mesh from the preliminary point cloud.
  • generating a surface mesh includes interpolating the preliminary point cloud.
  • interpolating is performed directly.
  • interpolating is performed on a grid.
  • interpolating includes splining.
  • the preliminary point cloud may be segmented into two or more semantic regions.
  • generating a surface mesh comprises generating a separate surface mesh for each of the two or more semantic regions.
  • the method further includes combining each of the separate surface meshes into a combined surface mesh.
  • the method further includes displaying the combined surface mesh to the user.
  • the model of the anatomical structure comprises a virtual 3D model.
  • the model of the anatomical structure is determined from an anatomical atlas.
  • the model of the anatomical structure is determined from pre-operative imaging of the patient.
  • the model of the anatomical structure is a 3D reconstruction from the pre-operative imaging.
  • the pre-operative imaging may be retrieved from a picture archiving and communications system (PACS).
  • registering comprises a deformable registration.
  • registering comprises a rigid body registration.
  • each point in the point cloud comprises a depth value derived from the depth map and a color value derived from the image.
  • a system including a digital camera configured to image an interior of a body cavity, a display, and a computing node including a computer readable storage medium having program instructions embodied therewith.
  • the program instructions are executable by a processor of the computing node to cause the processor to perform a method where an image of an anatomical structure is received from a camera.
  • a depth map corresponding to the image is received from a depth sensor that may be a part of the camera or separate from the camera.
  • a point cloud corresponding to the anatomical structure is generated based on the depth map and the image.
  • the point cloud is rotated in space.
  • the point cloud is rendered.
  • the rendered point cloud is displayed to a user.
  • the point cloud is a preliminary point cloud.
  • the preliminary point cloud is registered with a model of the anatomical structure.
  • an augmented point cloud is generated from the preliminary point cloud and the model.
  • the augmented point cloud is rotated in space, rendered, and displayed to the user.
  • an indication is received from the user to further rotate the augmented point cloud, the augmented point cloud is rotated in space according to the indication, the augmented point cloud is rendered after further rotating, and the rendered augmented point cloud is displayed to the user after further rotating.
  • the camera includes the depth sensor. In various embodiments, the camera is separate from the depth sensor.
  • the depth sensor includes a structured light sensor and a structured light projector. In various embodiments, the depth sensor comprises a time-of-flight sensor. In various embodiments, the depth map is determined from a single image frame. In various embodiments, the depth map is determined from two or more image frames.
  • the method further includes generating a surface mesh from the preliminary point cloud.
  • generating a surface mesh includes interpolating the preliminary point cloud.
  • interpolating is performed directly.
  • interpolating is performed on a grid.
  • interpolating includes splining.
  • the preliminary point cloud may be segmented into two or more semantic regions.
  • generating a surface mesh comprises generating a separate surface mesh for each of the two or more semantic regions.
  • the method further includes combining each of the separate surface meshes into a combined surface mesh.
  • the method further includes displaying the combined surface mesh to the user.
  • the model of the anatomical structure comprises a virtual 3D model.
  • the model of the anatomical structure is determined from an anatomical atlas.
  • the model of the anatomical structure is determined from pre-operative imaging of the patient.
  • the model of the anatomical structure is a 3D reconstruction from the pre-operative imaging.
  • the pre-operative imaging may be retrieved from a picture archiving and communications system (PACS).
  • registering comprises a deformable registration.
  • registering comprises a rigid body registration.
  • each point in the point cloud comprises a depth value derived from the depth map and a color value derived from the image.
  • a computer program product for synthetic three-dimensional imaging including a computer readable storage medium having program instructions embodied therewith.
  • the program instructions are executable by a processor of the computing node to cause the processor to perform a method where an image of an anatomical structure is received from a camera.
  • a depth map corresponding to the image is received from a depth sensor that may be a part of the camera or separate from the camera.
  • a point cloud corresponding to the anatomical structure is generated based on the depth map and the image.
  • the point cloud is rotated in space.
  • the point cloud is rendered.
  • the rendered point cloud is displayed to a user.
  • the point cloud is a preliminary point cloud.
  • the preliminary point cloud is registered with a model of the anatomical structure.
  • an augmented point cloud is generated from the preliminary point cloud and the model.
  • the augmented point cloud is rotated in space, rendered, and displayed to the user.
  • an indication is received from the user to further rotate the augmented point cloud, the augmented point cloud is rotated in space according to the indication, the augmented point cloud is rendered after further rotating, and the rendered augmented point cloud is displayed to the user after further rotating.
  • the camera includes the depth sensor. In various embodiments, the camera is separate from the depth sensor. In various embodiments, the depth sensor includes a structured light sensor and a structured light projector. In various embodiments, the depth sensor comprises a time-of-flight sensor. In various embodiments, the depth map is determined from a single image frame. In various embodiments, the depth map is determined from two or more image frames.
  • the method further includes generating a surface mesh from the preliminary point cloud.
  • generating a surface mesh includes interpolating the preliminary point cloud.
  • interpolating is performed directly.
  • interpolating is performed on a grid.
  • interpolating includes splining.
  • the preliminary point cloud may be segmented into two or more semantic regions.
  • generating a surface mesh comprises generating a separate surface mesh for each of the two or more semantic regions.
  • the method further includes combining each of the separate surface meshes into a combined surface mesh.
  • the method further includes displaying the combined surface mesh to the user.
  • the model of the anatomical structure comprises a virtual 3D model.
  • the model of the anatomical structure is determined from an anatomical atlas.
  • the model of the anatomical structure is determined from pre-operative imaging of the patient.
  • the model of the anatomical structure is a 3D reconstruction from the pre-operative imaging.
  • the pre-operative imaging may be retrieved from a picture archiving and communications system (PACS).
  • registering comprises a deformable registration.
  • registering comprises a rigid body registration.
  • each point in the point cloud comprises a depth value derived from the depth map and a color value derived from the image.
  • FIG. 1 depicts a system for robotic surgery according to embodiments of the present disclosure.
  • Figs. 2A-2B show a first synthetic view according to embodiments of the present disclosure.
  • Figs. 3A-3B show a second synthetic view according to embodiments of the present disclosure.
  • Figs. 4A-4B show a third synthetic view according to embodiments of the present disclosure.
  • Fig. 5A shows a kidney according to embodiments of the present disclosure.
  • Fig. 5B shows a point cloud of the kidney shown in Fig. 5A according to embodiments of the present disclosure.
  • Fig. 6A shows a kidney according to embodiments of the present disclosure.
  • Fig. 6B shows an augmented point cloud of the kidney shown in Fig. 6A according to embodiments of the present disclosure.
  • Fig. 7 illustrates a method of synthetic three-dimensional imaging according to embodiments of the present disclosure.
  • Fig. 8 depicts an exemplary Picture Archiving and Communication System (PACS).
  • Fig. 9 depicts a computing node according to an embodiment of the present disclosure.
  • An endoscope is an illuminated optical instrument, typically slender and tubular (a type of borescope), used to look within the body.
  • An endoscope may be used to examine internal organs for diagnostic or surgical purposes. Specialized instruments are named after their target anatomy, e.g., the cystoscope (bladder), nephroscope (kidney), bronchoscope (bronchus), arthroscope (joints), colonoscope (colon), laparoscope (abdomen or pelvis).
  • Laparoscopic surgery is commonly performed in the abdomen or pelvis using small incisions (usually 0.5-1.5 cm) with the aid of a laparoscope.
  • the advantages of such minimally invasive techniques are well-known, and include reduced pain due to smaller incisions, less hemorrhaging, and shorter recovery time as compared to open surgery.
  • a laparoscope may be equipped to provide a two-dimensional image, a stereo image, or a depth field image (as described further below).
  • Robotic surgery is similar to laparoscopic surgery insofar as it also uses small incisions, a camera and surgical instruments. However, instead of holding and manipulating the surgical instruments directly, a surgeon uses controls to remotely manipulate the robot.
  • a console provides the surgeon with high-definition images, which allow for increased accuracy and vision.
  • An image console can provide three-dimensional, high definition, and magnified images.
  • Various electronic tools may be applied to further aid surgeons. These include visual magnification (e.g., the use of a large viewing screen that improves visibility) and stabilization (e.g., electromechanical damping of vibrations due to machinery or shaky human hands).
  • Simulators may also be provided, in the form of specialized virtual reality training tools to improve physicians' proficiency in surgery.
  • a depth field camera may be used to collect a depth field at the same time as an image.
  • An example of a depth field camera is a plenoptic camera that uses an array of micro lenses placed in front of an otherwise conventional image sensor to sense intensity, color, and distance information.
  • Multi-camera arrays are another type of light-field camera.
  • the standard plenoptic camera is a standardized mathematical model used by researchers to compare different types of plenoptic (or light-field) cameras.
  • the standard plenoptic camera has microlenses placed one focal length away from the image plane of a sensor. Research has shown that its maximum baseline is confined to the main lens entrance pupil size which proves to be small compared to stereoscopic setups. This implies that the standard plenoptic camera may be intended for close range applications as it exhibits increased depth resolution at very close distances that can be metrically predicted based on the camera's parameters.
  • plenoptic cameras may be used, such as focused plenoptic cameras, coded aperture cameras, and/or stereo with plenoptic cameras.
  • a structured pattern may be projected from a structured light source.
  • the projected pattern may change shape, size, and/or spacing of pattern features when projected on a surface; one or more cameras (e.g., digital cameras) may detect these changes and use them to compute positional information (e.g., depth) for the imaged surface.
  • the system may include a structured light source (e.g., a projector) that projects a specific structured pattern of lines (e.g., a matrix of dots or a series of stripes) onto the surface of an object (e.g., an anatomical structure).
  • the pattern of lines produces a line of illumination that appears distorted from other perspectives than that of the source and these lines can be used for geometric reconstruction of the surface shape, thus providing positional information about the surface of the object.
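
As a hedged illustration of the triangulation step described above, the sketch below converts the per-pixel disparity between the expected and observed pattern positions into depth using the standard relation Z = f * b / d. The function name, the rectified pinhole geometry, and the parameters (focal length in pixels, projector-camera baseline in millimeters) are assumptions for illustration and are not specified in the disclosure.

```python
import numpy as np

def structured_light_depth(disparity_px, focal_px, baseline_mm):
    """Triangulate depth from a decoded structured-light pattern.

    disparity_px: per-pixel offset (in pixels) between where a pattern feature
    was projected and where the camera observed it, on rectified geometry.
    Pixels with no decoded correspondence (disparity <= 0) are left at 0.
    """
    depth_mm = np.zeros_like(disparity_px, dtype=np.float64)
    valid = disparity_px > 0
    depth_mm[valid] = focal_px * baseline_mm / disparity_px[valid]
    return depth_mm
```
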
  • range imaging may be used with the systems and methods described herein to determine positional and/or depth information of a scene, for example, using a range camera.
  • one or more time-of-flight (ToF) sensors may be used.
  • the time-of-flight sensor may be a flash LIDAR sensor.
  • the time-of-flight sensor emits a very short infrared light pulse and each pixel of the camera sensor measures the return time.
  • the time-of-flight sensor can measure depth of a scene in a single shot.
  • a 3D time-of-flight laser radar includes a fast gating intensified charge-coupled device (CCD) camera configured to achieve sub-millimeter depth resolution.
  • a short laser pulse may illuminate a scene, and the intensified CCD camera opens its high speed shutter.
  • the high speed shutter may be open only for a few hundred picoseconds.
  • 3D ToF information may be calculated from a 2D image series which was gathered with increasing delay between the laser pulse and the shutter opening.
  • various types of signals are used with ToF, such as, for example, sound and/or light.
  • using light sensors as a carrier may combine speed, range, low weight, and eye-safety.
  • infrared light may provide for less signal disturbance and easier distinction from natural ambient light, resulting in higher-performing sensors for a given size and weight.
  • ultrasonic sensors may be used for determining the proximity of objects (reflectors).
  • a distance of the nearest reflector may be determined using the speed of sound in air and the emitted pulse and echo arrival times.
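
A minimal sketch of the round-trip timing arithmetic behind the time-of-flight and ultrasonic approaches described above: the measured pulse travel time is halved and scaled by the carrier's propagation speed. The constants and helper names are illustrative, not taken from the disclosure.

```python
SPEED_OF_LIGHT_MM_PER_NS = 299.792458   # roughly 0.3 m of travel per nanosecond
SPEED_OF_SOUND_MM_PER_US = 0.343        # in air at roughly 20 degrees C

def tof_distance_mm(round_trip_ns):
    """Optical time-of-flight: distance is half the round-trip path length."""
    return round_trip_ns * SPEED_OF_LIGHT_MM_PER_NS / 2.0

def ultrasonic_distance_mm(round_trip_us):
    """Ultrasonic ranging: distance to the nearest reflector from its echo delay."""
    return round_trip_us * SPEED_OF_SOUND_MM_PER_US / 2.0

# Example: a 10 ns optical round trip corresponds to roughly 1.5 m.
print(tof_distance_mm(10.0))   # ~1499 mm
```
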
  • While an image console can provide a limited three-dimensional image based on stereo imaging or a depth field camera, a basic stereo or depth field view does not provide comprehensive spatial awareness for the surgeon.
  • various embodiments of the present disclosure provide for generation of synthetic three-dimensional imaging from partial depth maps.
  • Robotic arm 101 deploys scope 102 within abdomen 103.
  • a digital image is collected via scope 102.
  • a digital image is captured by one or more digital cameras at the scope tip.
  • a digital image is captured by one or more fiber optic elements running from the scope tip to one or more digital cameras located elsewhere.
  • the digital image is provided to computing node 104, where it is processed and then displayed on display 105.
  • each pixel is paired with corresponding depth information.
  • each pixel of the digital image is associated with a point in three-dimensional space.
  • the pixel values of the digital image may then be used to define a point cloud in space.
  • Such a point cloud may then be rendered using techniques known in the art. Once a point cloud is defined, it may be rendered from multiple vantage points in addition to the original vantage of the camera. Accordingly, a physician may then rotate, zoom, or otherwise change a synthetic view of the underlying anatomy. For example, a synthetic side view may be rendered, allowing the surgeon to obtain more robust positional awareness than with a conventional direct view.
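
The following sketch illustrates one common way to implement the pairing described above: back-projecting each pixel of the depth map through a pinhole camera model, attaching that pixel's color, and then rotating the resulting cloud to synthesize a new vantage point. The pinhole intrinsics (fx, fy, cx, cy) and the function names are assumptions for illustration; the disclosure does not prescribe a particular camera model or API.

```python
import numpy as np

def depth_to_point_cloud(depth_mm, rgb, fx, fy, cx, cy):
    """Back-project a depth map (mm) and an RGB image into a colored point cloud.

    Returns an (N, 6) array of [X, Y, Z, R, G, B] rows, skipping pixels with no depth.
    """
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64)
    valid = z > 0                       # keep only pixels with a usable depth value
    x = (u - cx) * z / fx               # pinhole camera model
    y = (v - cy) * z / fy
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[valid].astype(np.float64)
    return np.hstack([points, colors])

def rotate_cloud(cloud, yaw_deg):
    """Rotate the XYZ part of the cloud about the vertical axis to synthesize a side view."""
    a = np.deg2rad(yaw_deg)
    R = np.array([[ np.cos(a), 0.0, np.sin(a)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    rotated = cloud.copy()
    rotated[:, :3] = cloud[:, :3] @ R.T
    return rotated
```
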
  • the one or more cameras may include depth sensors.
  • the one or more cameras may include a light-field camera configured to capture depth data at each pixel.
  • the depth sensor may be separate from the one or more cameras.
  • the system may include a digital camera configured to capture an RGB image, and the depth sensor may include a light-field camera configured to capture depth data.
  • the one or more cameras may include a stereoscopic camera.
  • the stereoscopic camera may be implemented by two separate cameras.
  • the two separate cameras may be disposed at a predetermined distance from one another.
  • the stereoscopic camera may be located at a distal-most end of a surgical instrument (e.g., laparoscope, endoscope, etc.).
  • Positional information, as used herein, may generally be defined as (X, Y, Z) coordinates in a three-dimensional coordinate system.
  • the one or more cameras may be, for example, infrared cameras that emit infrared radiation and detect the reflection of the emitted infrared radiation.
  • the one or more cameras may be digital cameras as are known in the art.
  • the one or more cameras may be plenoptic cameras.
  • the one or more cameras (e.g., one, two, three, four, or five) may be capable of detecting a projected pattern(s) from a source of structured light (e.g., a projector).
  • the one or more cameras may be connected to a computing node as described in more detail below. Using the images from the one or more cameras, the computing node may compute positional information (X, Y, Z) for any suitable number of points along the surface of the object to thereby generate a depth map of the surface.
  • the one or more cameras may include a light-field camera (e.g., a plenoptic camera).
  • the plenoptic camera may be used to generate accurate positional information for the surface of the object by having appropriate zoom and focus depth settings.
  • one type of light-field (e.g., plenoptic) camera that may be used according to the present disclosure uses an array of micro-lenses placed in front of an otherwise conventional image sensor to sense intensity, color, and directional information. Multi-camera arrays are another type of light-field camera.
  • the "standard plenoptic camera” is a standardized mathematical model used by researchers to compare different types of plenoptic (or light-field) cameras.
  • the "standard plenoptic camera” has microlenses placed one focal length away from the image plane of a sensor. Research has shown that its maximum baseline is confined to the main lens entrance pupil size which proves to be small compared to stereoscopic setups. This implies that the "standard plenoptic camera” may be intended for close range applications as it exhibits increased depth resolution at very close distances that can be metrically predicted based on the camera's parameters. Other types/orientations of plenoptic cameras may be used, such as focused plenoptic cameras, coded aperture cameras, and/or stereo with plenoptic cameras.
  • the resulting depth map including the computed depths at each pixel may be post-processed.
  • Depth map post-processing refers to processing of the depth map such that it is useable for a specific application.
  • depth map post processing may include accuracy improvement.
  • depth map post processing may be used to speed up performance and/or for aesthetic reasons.
  • subsampling may be biased to remove the depth pixels that lack a depth value (e.g., not capable of being calculated and/or having a value of zero).
  • spatial filtering (e.g., smoothing) may be performed to decrease spatial depth noise.
  • temporal filtering may be performed to decrease temporal depth noise using data from multiple frames.
  • a simple or time-biased average may be employed.
  • holes in the depth map can be filled in, for example, when the pixel shows a depth value inconsistently.
  • temporal variations in the signal may lead to blur and may require processing to decrease and/or remove the blur.
  • some applications may require a depth value present at every pixel.
  • post processing techniques may be used to extrapolate the depth map to every pixel.
  • the extrapolation may be performed with any suitable form of extrapolation (e.g., linear, exponential, logarithmic, etc.).
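
Below is a hedged sketch of the post-processing steps discussed above (temporal averaging across frames, hole filling, and spatial smoothing) using NumPy and SciPy. The nearest-neighbor hole fill and Gaussian smoothing stand in for whichever filters a real pipeline would choose; none of the parameter values come from the disclosure.

```python
import numpy as np
from scipy import ndimage

def postprocess_depth(depth_frames, spatial_sigma=1.0):
    """Combine and clean a sequence of depth maps (0 = missing depth).

    1. Temporal filtering: average valid values across frames.
    2. Hole filling: replace remaining zeros with the nearest valid depth.
    3. Spatial filtering: light Gaussian smoothing to suppress depth noise.
    """
    stack = np.stack(depth_frames).astype(np.float64)
    valid = stack > 0
    counts = valid.sum(axis=0)
    summed = np.where(valid, stack, 0.0).sum(axis=0)
    depth = np.where(counts > 0, summed / np.maximum(counts, 1), 0.0)

    # Fill holes with the nearest valid neighbor (a simple form of extrapolation).
    missing = depth == 0
    if missing.any():
        idx = ndimage.distance_transform_edt(
            missing, return_distances=False, return_indices=True)
        depth = depth[tuple(idx)]

    return ndimage.gaussian_filter(depth, sigma=spatial_sigma)
```
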
  • two or more frames may be captured by the one or more cameras.
  • the point cloud may be determined from the two or more frames.
  • determining the point cloud from two or more frames may provide for noise reduction.
  • determining the point cloud from two or more frames may allow for the generation of 3D views around line of sight obstructions.
  • a point cloud may be determined for each captured frame in the two or more frames.
  • each point cloud may be aligned to one or more (e.g., all) of the other point clouds.
  • the point clouds may be aligned via rigid body registration.
  • rigid body registration algorithms may include rotation, translation, zoom, and/or shear.
  • the point clouds may be aligned via deformable registration.
  • deformable registration algorithms may include the B-spline method, level-set motion method, original demons method, modified demons method, symmetric force demons method, double force demons method, deformation with intensity simultaneously corrected method, original Horn-Schunck optical flow, combined Horn-Schunck and Lucas-Kanade method, and/or free-form deformation method.
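
As a concrete, simplified example of the rigid-body alignment mentioned above, the sketch below estimates a rotation and translation from corresponding point pairs using the Kabsch/Procrustes solution. In practice, correspondences between the point clouds would come from an ICP-style nearest-neighbor search or from matched features; that search, and any deformable registration, is outside this sketch.

```python
import numpy as np

def rigid_register(source_pts, target_pts):
    """Estimate the rigid transform (R, t) aligning corresponding source points to target points.

    Both arrays are (N, 3), with row i of source corresponding to row i of target.
    """
    src_c = source_pts.mean(axis=0)
    tgt_c = target_pts.mean(axis=0)
    H = (source_pts - src_c).T @ (target_pts - tgt_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

def apply_transform(points, R, t):
    """Apply the estimated rigid transform to an (N, 3) array of points."""
    return points @ R.T + t
```
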
  • Fig. 2A shows an original source image.
  • Fig. 2B shows a rendered point cloud assembled from the pixels of the original image and the corresponding depth information.
  • FIG. 3A shows an original source image.
  • Fig. 3B shows a rendered point cloud assembled from the pixels of the original image and the corresponding depth information.
  • the subject is rotated so as to provide a side view.
  • Fig. 4A shows an original source image.
  • Fig. 4B shows a rendered point cloud assembled from the pixels of the original image and the corresponding depth information.
  • the subject is rotated so as to provide a side view.
  • a 3D surface mesh may be generated from any of the 3D point clouds.
  • the 3D surface mesh may be generated by interpolation of a 3D point cloud (e.g., directly or on a grid).
  • a 3D surface mesh may perform better when zooming in/out of the rendered mesh.
  • semantic segmentation may be performed on a 3D surface mesh to thereby smooth out any 3D artifacts that may occur at anatomical boundaries.
  • prior to generation of a 3D mesh, the point cloud can be segmented into two or more semantic regions. For example, a first semantic region may be identified as a first 3D structure (e.g., liver), a second semantic region may be identified as a second 3D structure (e.g., stomach), and a third semantic region may be identified as a third 3D structure (e.g., a laparoscopic instrument) in a scene.
  • an image frame may be segmented using any suitable known segmentation technique.
  • point clouds for each identified semantic region may be used to generate separate 3D surface meshes for each semantic region.
  • each of the separate 3D surface meshes may be rendered in a single display to provide the geometry of the imaged scene.
  • presenting the separate meshes may avoid various artifacts that occur at the boundaries of defined regions (e.g., organs).
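
A minimal sketch of mesh generation by interpolation on a grid, as described above: the points of one semantic region are resampled onto a regular XY grid with spline-like interpolation and the grid is triangulated. The grid resolution, the use of scipy.interpolate.griddata, and the height-field assumption (one Z per XY location) are illustrative choices rather than requirements of the disclosure; each segmented region would get its own mesh before the meshes are combined for display.

```python
import numpy as np
from scipy.interpolate import griddata

def point_cloud_to_grid_mesh(points, grid_res=200):
    """Interpolate an (N, 3) point cloud onto a regular XY grid and triangulate it.

    Returns (vertices, faces); faces index into the flattened grid of vertices.
    """
    xy, z = points[:, :2], points[:, 2]
    xs = np.linspace(xy[:, 0].min(), xy[:, 0].max(), grid_res)
    ys = np.linspace(xy[:, 1].min(), xy[:, 1].max(), grid_res)
    gx, gy = np.meshgrid(xs, ys)
    gz = griddata(xy, z, (gx, gy), method="cubic")   # spline-like interpolation on the grid
    # Cells outside the convex hull of the samples come back as NaN and could be masked.

    vertices = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
    faces = []
    for r in range(grid_res - 1):
        for c in range(grid_res - 1):
            i = r * grid_res + c
            faces.append([i, i + 1, i + grid_res])               # two triangles per grid cell
            faces.append([i + 1, i + grid_res + 1, i + grid_res])
    return vertices, np.array(faces)
```
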
  • the point cloud may be augmented with one or more models of the approximate or expected shape of a particular object in the image.
  • the point cloud may be augmented with a virtual 3D model of the particular organ (e.g., a 3D model of the kidney).
  • a surface represented by the point cloud may be used to register the virtual 3D model of an object within the scene.
  • Fig. 5A shows a kidney 502 according to embodiments of the present disclosure.
  • Fig. 5B shows a point cloud of the kidney shown in Fig. 5A according to embodiments of the present disclosure.
  • a point cloud 504 of a scene including the kidney 502 may be generated by imaging the kidney with a digital camera and/or a depth sensor.
  • the point cloud may be augmented via a virtual 3D model of an object (e.g., a kidney).
  • Fig. 6A shows a kidney 602 according to embodiments of the present disclosure.
  • a virtual 3D model 606 may be generated of the kidney 602 and applied to the point cloud 604 generated from the scene including the kidney 602.
  • Fig. 6B shows an augmented point cloud of the kidney shown in Fig. 6A according to embodiments of the present disclosure.
  • the virtual 3D model 606 of the kidney 602 is registered (i.e., aligned) with the point cloud 604 thereby providing additional geometric information regarding parts of the kidney 602 that are not seen from the perspective of the camera and/or depth sensor.
  • the virtual 3D model 606 is registered to the point cloud 604 using any suitable method as described above.
  • Fig. 6B thus provides a better perspective view of an object (e.g., kidney 602) within the scene.
  • the virtual 3D model may be obtained from any suitable source, including, but not limited to, a manufacturer, a general anatomical atlas of organs, a patient's pre-operative 3D imaging, a reconstruction of the target anatomy from multiple viewpoints using the system presented in this disclosure, etc.
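
Once a model has been registered to the preliminary point cloud (for example, with a transform like the one estimated in the rigid registration sketch earlier), augmenting the cloud can be as simple as transforming the model's points into the scene and concatenating them, as in the hedged sketch below. The function names are illustrative.

```python
import numpy as np

def augment_point_cloud(preliminary_cloud, model_points, R, t):
    """Combine the observed (partial) XYZ point cloud with a registered virtual 3D model.

    R, t would come from a registration step such as rigid_register() above; the model
    supplies geometry for surfaces the camera and depth sensor cannot see.
    """
    model_in_scene = model_points @ R.T + t          # bring the model into the camera frame
    return np.vstack([preliminary_cloud, model_in_scene])
```
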
  • the system may include pre-programmed clinical anatomical viewpoints (e.g., antero-posterior, medio-lateral, etc.).
  • the clinical anatomical viewpoints could be further tailored for the clinical procedure (e.g., right-anterior-oblique view for cardiac geometry).
  • the user may choose to present the 3D synthetic view from one of the predefined clinical anatomical viewpoints (e.g., antero-posterior, medio-lateral, etc.).
  • the clinical anatomical viewpoints could be further tailored for the clinical procedure (e.g., right-anterior-oblique view for cardiac geometry).
  • the user may choose to present the 3D synthetic view from one of the viewpoints predefined for the medical procedure.
  • pre-programmed views may help a physician re-orient themselves in the event they lose orientation during a procedure.
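
One plausible way to realize such pre-programmed viewpoints is a lookup table of preset orientations applied to the point cloud before rendering, sketched below. The specific angles, the single-axis rotation, and the viewpoint names are hypothetical simplifications; a real system would store full per-procedure orientations.

```python
import numpy as np

# Hypothetical preset viewpoints, expressed here as a single rotation angle about the
# vertical axis for simplicity.
PRESET_VIEWPOINTS_DEG = {
    "antero-posterior": 0.0,
    "medio-lateral": 90.0,
    "right-anterior-oblique": 30.0,
}

def view_from_preset(cloud_xyz, name):
    """Rotate an (N, 3) point cloud into one of the preset clinical viewpoints."""
    a = np.deg2rad(PRESET_VIEWPOINTS_DEG[name])
    R = np.array([[ np.cos(a), 0.0, np.sin(a)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(a), 0.0, np.cos(a)]])
    return cloud_xyz @ R.T
```
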
  • a method for synthetic three-dimensional imaging is illustrated according to embodiments of the present disclosure.
  • an image of an anatomical structure of a patient is received from a camera.
  • a depth map corresponding to the image is received from a depth sensor.
  • a point cloud corresponding to the anatomical structure is generated based on the depth map and the image.
  • the point cloud is rotated in space.
  • the point cloud is rendered.
  • the rendered point cloud is displayed to a user.
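
Tying the steps of Fig. 7 together, the sketch below shows one possible orchestration using the helper sketches introduced earlier. The camera, depth_sensor, renderer, and display objects, as well as the intrinsics, are assumed interfaces for illustration only and do not correspond to any API defined in the disclosure.

```python
def synthetic_view_pipeline(camera, depth_sensor, renderer, display, yaw_deg=45.0):
    """Hedged end-to-end sketch of the method of Fig. 7 using assumed interfaces."""
    rgb = camera.capture()                         # receive image of the anatomical structure
    depth_mm = depth_sensor.capture()              # receive the corresponding depth map
    cloud = depth_to_point_cloud(                  # generate the colored point cloud
        depth_mm, rgb,
        fx=525.0, fy=525.0,                        # hypothetical intrinsics for illustration
        cx=rgb.shape[1] / 2.0, cy=rgb.shape[0] / 2.0)
    rotated = rotate_cloud(cloud, yaw_deg)         # rotate the point cloud in space
    frame = renderer.render(rotated)               # render the rotated cloud
    display.show(frame)                            # display the result to the user
```
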
  • systems and methods described herein may be used in any suitable application, such as, for example, diagnostic applications and/or surgical applications.
  • the systems and methods described herein may be used in colonoscopy to image a polyp in the gastrointestinal tract and determine dimensions of the polyp. Information such as the dimensions of the polyp may be used by healthcare professionals to determine a treatment plan for a patient (e.g., surgery, chemotherapy, further testing, etc.).
  • the systems and methods described herein may be used to measure the size of an incision or hole when extracting a part of or whole internal organ.
  • the systems and methods described herein may be used in handheld surgical applications, such as, for example, handheld laparoscopic surgery, handheld endoscopic procedures, and/or any other suitable surgical applications where imaging and depth sensing may be necessary.
  • the systems and methods described herein may be used to compute the depth of a surgical field, including tissue, organs, thread, and/or any instruments.
  • the systems and methods described herein may be capable of making measurements in absolute units (e.g., millimeters).
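
Measurements such as a polyp's long axis or the width of an incision reduce to distances between points of the reconstruction; if the depth map is calibrated in millimeters, the result is in absolute units. A minimal sketch, with a hypothetical helper name:

```python
import numpy as np

def distance_mm(point_a, point_b):
    """Euclidean distance between two 3D points selected on the reconstruction
    (in millimeters if the point cloud itself is in millimeters)."""
    return float(np.linalg.norm(
        np.asarray(point_a, dtype=float) - np.asarray(point_b, dtype=float)))

# Example: two points picked on opposite ends of a polyp.
print(distance_mm((10.0, 4.0, 55.0), (13.0, 8.0, 55.0)))   # 5.0 mm
```
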
  • the systems and methods described herein may be used with GI catheters, such as an endoscope.
  • the endoscope may include an atomized sprayer, an IR source, a camera system and optics, a robotic arm, and an image processor.
  • an exemplary PACS 800 consists of four major components.
  • Various imaging modalities 801...809 such as computed tomography (CT) 801, magnetic resonance imaging (MRI) 802, or ultrasound (US) 803 provide imagery to the system.
  • imagery is transmitted to a PACS Gateway 811, before being stored in archive 812.
  • Archive 812 provides for the storage and retrieval of images and reports.
  • Workstations 821...829 provide for interpreting and reviewing images in archive 812.
  • a secured network is used for the transmission of patient information between the components of the system.
  • workstations 821...829 may be web-based viewers.
  • PACS delivers timely and efficient access to images, interpretations, and related data, eliminating the drawbacks of traditional film-based image retrieval, distribution, and display.
  • a PACS may handle images from various medical imaging instruments, such as X-ray plain film (PF), ultrasound (US), magnetic resonance (MR), Nuclear Medicine imaging, positron emission tomography (PET), computed tomography (CT), endoscopy (ES), mammograms (MG), digital radiography (DR), computed radiography (CR), Histopathology, or ophthalmology.
  • a PACS is not limited to a predetermined list of images, and supports clinical areas beyond conventional sources of imaging such as radiology, cardiology, oncology, or gastroenterology.
  • Different users may have a different view into the overall PACS system. For example, while a radiologist may typically access a viewing station, a technologist may typically access a QA workstation.
  • the PACS Gateway 811 comprises a quality assurance (QA) workstation.
  • the QA workstation provides a checkpoint to make sure patient demographics are correct as well as other important attributes of a study. If the study information is correct the images are passed to the archive 812 for storage.
  • the central storage device, archive 812, stores images and, in some implementations, reports, measurements, and other information that resides with the images.
  • images may be accessed from reading workstations 821...829.
  • the reading workstation is where a radiologist reviews the patient's study and formulates their diagnosis.
  • a reporting package is tied to the reading workstation to assist the radiologist with dictating a final report.
  • a variety of reporting systems may be integrated with the PACS, including those that rely upon traditional dictation.
  • CD or DVD authoring software is included in workstations 821...829 to burn patient studies for distribution to patients or referring physicians.
  • a PACS includes web-based interfaces for workstations 821...829. Such web interfaces may be accessed via the internet or a Wide Area Network (WAN).
  • connection security is provided by a VPN (Virtual Private Network) or SSL (Secure Sockets Layer).
  • the client side software may comprise ActiveX, JavaScript, or a Java Applet.
  • PACS clients may also be full applications which utilize the full resources of the computer they are executing on outside of the web environment.
  • image storage and transfer within a PACS typically uses DICOM (Digital Imaging and Communications in Medicine). The DICOM communication protocol is an application protocol that uses TCP/IP to communicate between systems. DICOM files can be exchanged between two entities that are capable of receiving image and patient data in DICOM format.
  • DICOM groups information into data sets. For example, a file containing a particular image generally contains a patient ID within the file, so that the image can never be separated from this information by mistake.
  • a DICOM data object consists of a number of attributes, including items such as name and patient ID, as well as a special attribute containing the image pixel data. Thus, the main object has no header as such, but instead comprises a list of attributes, including the pixel data.
  • a DICOM object containing pixel data may correspond to a single image, or may contain multiple frames, allowing storage of cine loops or other multi-frame data. DICOM supports three- or four-dimensional data encapsulated in a single DICOM object. Pixel data may be compressed using a variety of standards, including JPEG, Lossless JPEG, JPEG 2000, and Run-length encoding (RLE). LZW (zip) compression may be used for the whole data set or just the pixel data.
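
For completeness, the snippet below shows how a DICOM object retrieved from an archive might be opened and its attributes inspected using the pydicom package; the file path is hypothetical, and pydicom is an assumed tooling choice rather than part of the disclosure.

```python
import pydicom

ds = pydicom.dcmread("study/slice_001.dcm")   # hypothetical path to a DICOM file
print(ds.PatientID)                           # patient identifier stored with the image
print(ds.Modality)                            # e.g., "CT", "MR", or "US"
pixels = ds.pixel_array                       # decoded pixel data as a NumPy array
print(pixels.shape)                           # single image or multi-frame stack
```
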
  • Referring now to Fig. 9, a schematic of an example of a computing node is shown.
  • Computing node 10 is only one example of a suitable computing node and is not intended to suggest any limitation as to the scope of use or functionality of embodiments described herein. Regardless, computing node 10 is capable of being implemented and/or performing any of the functionality set forth hereinabove.
  • In computing node 10 there is a computer system/server 12, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server 12 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
  • Computer system/server 12 may be described in the general context of computer system- executable instructions, such as program modules, being executed by a computer system.
  • program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types.
  • Computer system/server 12 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
  • computer system/server 12 in computing node 10 is shown in the form of a general-purpose computing device.
  • the components of computer system/server 12 may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including system memory 28 to processor 16.
  • Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
  • bus architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, Peripheral Component Interconnect (PCI) bus, Peripheral Component Interconnect Express (PCIe), and Advanced Microcontroller Bus Architecture (AMBA).
  • Computer system/server 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 12, and it includes both volatile and non-volatile media, removable and non-removable media.
  • System memory 28 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32.
  • Computer system/server 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media.
  • storage system 34 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a "hard drive").
  • a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM, or other optical media can also be provided; each can be connected to bus 18 by one or more data media interfaces.
  • memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the disclosure.
  • Program/utility 40 having a set (at least one) of program modules 42, may be stored in memory 28 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment.
  • Program modules 42 generally carry out the functions and/or methodologies of embodiments as described herein.
  • Computer system/server 12 may also communicate with one or more external devices 14 such as a keyboard, a pointing device, a display 24, etc.; one or more devices that enable a user to interact with computer system/server 12; and/or any devices (e.g., network card, modem, etc.) that enable computer system/server 12 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 22. Still yet, computer system/server 12 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 20. As depicted, network adapter 20 communicates with the other components of computer system/server 12 via bus 18.
  • the present disclosure may be embodied as a system, a method, and/or a computer program product.
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present disclosure.
  • the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
  • These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • the flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Endoscopes (AREA)
PCT/US2019/068760 2018-12-28 2019-12-27 Generation of synthetic three-dimensional imaging from partial depth maps Ceased WO2020140044A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
EP19905077.4A EP3903281A4 (en) 2018-12-28 2019-12-27 GENERATION OF A SYNTHETIC THREE-DIMENSIONAL IMAGE FROM PARTIAL DEPTH MAPS
CA3125288A CA3125288A1 (en) 2018-12-28 2019-12-27 Generation of synthetic three-dimensional imaging from partial depth maps
CN201980093251.XA CN113906479A (zh) 2018-12-28 2019-12-27 Generation of synthetic three-dimensional imaging from partial depth maps
KR1020217024095A KR20210146283A (ko) 2018-12-28 2019-12-27 Generation of synthetic three-dimensional imaging from partial depth maps
JP2021537826A JP2022516472A (ja) 2018-12-28 2019-12-27 Generation of synthetic three-dimensional imaging from partial depth maps
US17/349,713 US20220012954A1 (en) 2018-12-28 2021-06-16 Generation of synthetic three-dimensional imaging from partial depth maps

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862785950P 2018-12-28 2018-12-28
US62/785,950 2018-12-28

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/349,713 Continuation US20220012954A1 (en) 2018-12-28 2021-06-16 Generation of synthetic three-dimensional imaging from partial depth maps

Publications (1)

Publication Number Publication Date
WO2020140044A1 true WO2020140044A1 (en) 2020-07-02

Family

ID=71127363

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/068760 Ceased WO2020140044A1 (en) 2018-12-28 2019-12-27 Generation of synthetic three-dimensional imaging from partial depth maps

Country Status (7)

Country Link
US (1) US20220012954A1 (en)
EP (1) EP3903281A4 (en)
JP (1) JP2022516472A (ja)
KR (1) KR20210146283A (ko)
CN (1) CN113906479A (zh)
CA (1) CA3125288A1 (en)
WO (1) WO2020140044A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022049489A1 (en) * 2020-09-04 2022-03-10 Karl Storz Se & Co. Kg Devices, systems, and methods for identifying unexamined regions during a medical procedure
US11857153B2 (en) 2018-07-19 2024-01-02 Activ Surgical, Inc. Systems and methods for multi-modal sensing of depth in vision systems for automated surgical robots
WO2024077075A1 (en) * 2022-10-04 2024-04-11 Illuminant Surgical, Inc. Systems for projection mapping and markerless registration for surgical navigation, and methods of use thereof

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3545675A4 (en) 2016-11-24 2020-07-01 The University of Washington CAPTURE AND RESTITUTION OF LIGHT FIELD FOR HEADSETS
US10623660B1 (en) 2018-09-27 2020-04-14 Eloupes, Inc. Camera array for a mediated-reality system
US10949986B1 (en) 2020-05-12 2021-03-16 Proprio, Inc. Methods and systems for imaging a scene, such as a medical scene, and tracking objects within the scene
CN113436211B (zh) * 2021-08-03 2022-07-15 Tianjin University A medical image active contour segmentation method based on deep learning
EP4144298A1 (en) * 2021-09-02 2023-03-08 Koninklijke Philips N.V. Object visualisation in x-ray imaging
CA3236816A1 (en) * 2021-11-02 2023-05-11 Angelo D'alessandro Automated decisioning based on predicted user intent
US12261988B2 (en) 2021-11-08 2025-03-25 Proprio, Inc. Methods for generating stereoscopic views in multicamera systems, and associated devices and systems

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050116950A1 (en) * 1998-07-14 2005-06-02 Microsoft Corporation Regional progressive meshes
US20050253849A1 (en) * 2004-05-13 2005-11-17 Pixar Custom spline interpolation
WO2017180097A1 (en) * 2016-04-12 2017-10-19 Siemens Aktiengesellschaft Deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation
US20170372504A1 (en) * 2014-12-22 2017-12-28 Seuk Jun JANG Method and system for generating 3d synthetic image by combining body data and clothes data
US20180253593A1 (en) * 2017-03-01 2018-09-06 Sony Corporation Virtual reality-based apparatus and method to generate a three dimensional (3d) human face model using image and depth data

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101526866B1 (ko) * 2009-01-21 2015-06-10 Samsung Electronics Co., Ltd. Method and apparatus for depth noise filtering using depth information
WO2011151858A1 (ja) * 2010-05-31 2011-12-08 ビジュアツール株式会社 Portable terminal device for visualization, visualization program, and body 3D measurement system
US20150086956A1 (en) * 2013-09-23 2015-03-26 Eric Savitsky System and method for co-registration and navigation of three-dimensional ultrasound and alternative radiographic data sets
US9524582B2 (en) * 2014-01-28 2016-12-20 Siemens Healthcare Gmbh Method and system for constructing personalized avatars using a parameterized deformable mesh
JP6706026B2 (ja) * 2015-04-01 2020-06-03 Olympus Corporation Endoscope system and method for operating an endoscope apparatus
US10810799B2 (en) * 2015-09-28 2020-10-20 Montefiore Medical Center Methods and devices for intraoperative viewing of patient 3D surface images
JP6905323B2 (ja) * 2016-01-15 2021-07-21 Canon Inc. Image processing apparatus, image processing method, and program
CN116650106A (zh) * 2016-03-14 2023-08-29 Mohamed R. Mahfouz Ultra-wideband positioning for wireless ultrasound tracking and communication
US10204448B2 (en) * 2016-11-04 2019-02-12 Aquifi, Inc. System and method for portable active 3D scanning
CN108694740A (zh) * 2017-03-06 2018-10-23 Sony Corporation Information processing device, information processing method, and user device
US10432913B2 (en) * 2017-05-31 2019-10-01 Proximie, Inc. Systems and methods for determining three dimensional measurements in telemedicine application
CN107292965B (zh) * 2017-08-03 2020-10-13 Qingdao Research Institute of Beihang University A virtual-real occlusion handling method based on a depth image data stream
US11125861B2 (en) * 2018-10-05 2021-09-21 Zoox, Inc. Mesh validation
US10823855B2 (en) * 2018-11-19 2020-11-03 Fca Us Llc Traffic recognition and adaptive ground removal based on LIDAR point cloud statistics

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050116950A1 (en) * 1998-07-14 2005-06-02 Microsoft Corporation Regional progressive meshes
US20050253849A1 (en) * 2004-05-13 2005-11-17 Pixar Custom spline interpolation
US20170372504A1 (en) * 2014-12-22 2017-12-28 Seuk Jun JANG Method and system for generating 3d synthetic image by combining body data and clothes data
WO2017180097A1 (en) * 2016-04-12 2017-10-19 Siemens Aktiengesellschaft Deformable registration of intra and preoperative inputs using generative mixture models and biomechanical deformation
US20180253593A1 (en) * 2017-03-01 2018-09-06 Sony Corporation Virtual reality-based apparatus and method to generate a three dimensional (3d) human face model using image and depth data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3903281A4 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11857153B2 (en) 2018-07-19 2024-01-02 Activ Surgical, Inc. Systems and methods for multi-modal sensing of depth in vision systems for automated surgical robots
WO2022049489A1 (en) * 2020-09-04 2022-03-10 Karl Storz Se & Co. Kg Devices, systems, and methods for identifying unexamined regions during a medical procedure
JP2023552032A (ja) * 2020-09-04 2023-12-14 Karl Storz SE & Co. KG Devices, systems, and methods for identifying unexamined regions during a medical procedure
JP7770392B2 (ja) 2020-09-04 2025-11-14 Karl Storz SE & Co. KG Devices, systems, and methods for identifying unexamined regions during a medical procedure
WO2024077075A1 (en) * 2022-10-04 2024-04-11 Illuminant Surgical, Inc. Systems for projection mapping and markerless registration for surgical navigation, and methods of use thereof

Also Published As

Publication number Publication date
KR20210146283A (ko) 2021-12-03
EP3903281A1 (en) 2021-11-03
CA3125288A1 (en) 2020-07-02
EP3903281A4 (en) 2022-09-07
JP2022516472A (ja) 2022-02-28
US20220012954A1 (en) 2022-01-13
CN113906479A (zh) 2022-01-07

Similar Documents

Publication Publication Date Title
US20220012954A1 (en) Generation of synthetic three-dimensional imaging from partial depth maps
US12400340B2 (en) User interface elements for orientation of remote camera during surgery
CN114126527B (zh) 复合医学成像系统和方法
US8090174B2 (en) Virtual penetrating mirror device for visualizing virtual objects in angiographic applications
EP2883353B1 (en) System and method of overlaying images of different modalities
CN102892018B (zh) 图像处理系统、装置、方法以及医用图像诊断装置
US20120053408A1 (en) Endoscopic image processing device, method and program
WO2005101323A1 (en) System and method for creating a panoramic view of a volumetric image
CN103200871B (zh) 图像处理系统、装置、方法以及医用图像诊断装置
US9426443B2 (en) Image processing system, terminal device, and image processing method
Karargyris et al. Three-dimensional reconstruction of the digestive wall in capsule endoscopy videos using elastic video interpolation
EP4094184A1 (en) Systems and methods for masking a recognized object during an application of a synthetic element to an original image
Kumar et al. Stereoscopic visualization of laparoscope image using depth information from 3D model
US9911225B2 (en) Live capturing of light map image sequences for image-based lighting of medical data
JP2023004884A (ja) 拡張現実のグラフィック表現を表示するための表現装置
Hong et al. Colonoscopy simulation
Shoji et al. Camera motion tracking of real endoscope by using virtual endoscopy system and texture information
Kumar et al. Stereoscopic laparoscopy using depth information from 3D model
Westwood Development of a 3D visualization system for surgical field deformation with geometric pattern projection
Kim et al. Development of 3-D stereo endoscopic image processing system
Chung Calibration of Optical See-Through Head Mounted Display with Mobile C-arm for Visualization of Cone Beam CT Data
HK1191434A (en) Method and system for performing rendering

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19905077

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021537826

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 3125288

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019905077

Country of ref document: EP

Effective date: 20210728

WWR Wipo information: refused in national office

Ref document number: 1020217024095

Country of ref document: KR