US11625890B2 - Geometry buffer slice tool - Google Patents

Geometry buffer slice tool

Info

Publication number
US11625890B2
Authority
US
United States
Prior art keywords: voxel, volume, slice plane, plane, slicing
Legal status: Active
Application number
US17/376,364
Other versions
US20210343068A1 (en)
Inventor
Michael Jones
Kyle Russell
Chanler Crowe Cantor
Michael Yohe
Current Assignee
Intuitive Research and Technology Corp
Original Assignee
Intuitive Research and Technology Corp
Application filed by Intuitive Research and Technology Corp filed Critical Intuitive Research and Technology Corp
Priority to US17/376,364
Assigned to INTUITIVE RESEARCH AND TECHNOLOGY CORPORATION. Assignors: CROWE, CHANLER; JONES, MICHAEL; RUSSELL, KYLE; YOHE, MICHAEL
Publication of US20210343068A1
Application granted
Publication of US11625890B2

Classifications

    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/005 - General purpose rendering architectures
    • G06T 15/08 - Volume rendering
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • H04N 13/395 - Volumetric displays with depth sampling, i.e. the volume being constructed from a stack or sequence of 2D image planes
    • G06T 2215/06 - Curved planar reformation of 3D line structures
    • G06T 2219/008 - Cut plane or projection plane definition



Abstract

A method for visualizing a three-dimensional volume for use in a virtual reality environment is performed by uploading two-dimensional images for evaluation, creating planar depictions of the two-dimensional images, and using thresholds to determine whether voxels should be drawn. A voxel volume is created from the planar depictions and voxels. A user defines a plane to be used for slicing the voxel volume and sets values for the plane location and plane normal. The slice plane is placed within the voxel volume and defines a desired remaining portion of the voxel volume to be displayed. All but the desired remaining portion of the voxel volume is culled, and the remaining portion is displayed.

Description

REFERENCE TO RELATED APPLICATIONS
This application is a continuation of, and claims priority to, U.S. Ser. No. 16/844,453, entitled “Geometry Buffer Slice Tool” and filed on Apr. 9, 2020, which claims priority to Provisional Patent Application U.S. Ser. No. 62/831,309, entitled “Geometry Buffer Slice Tool” and filed on Apr. 9, 2019. Both applications are fully incorporated herein by reference.
BACKGROUND AND SUMMARY
The disclosure relates to an image-generation method for three-dimensional images. In this context, a voxel is to be understood to mean a volume element in the three-dimensional examination zone, and a pixel is an image element of a two-dimensional image. Each voxel is assigned a numerical image value that characterizes a physical quantity in the relevant voxel.
Currently, software packages use only a limited number of methods to view scanned data, and these methods usually require a combination of software packages to evaluate a data set. One current method is to view the scans in software and then export them to create a three-dimensional image or a three-dimensional printed model. Most current methods involve scrolling through the scans one by one, or require health professionals to clean the data. These methods are time consuming and expensive, and they create data sets that are static and not manipulatable.
The conventional approach involves the following two different methods: three-dimensional printing and evaluating scan data on a computer. For three-dimensional printing, the user performs various conversions of the scan data, loads scan data into software, exports a file that can be printed, imports a three-dimensional mesh file into a three-dimensional modeling software for polishing, exports the polished model to a file, and prints the model.
Users can also evaluate scan data on a computer processing unit. In the most common method, a user scrolls through the scan images and uses a software package to load the scans and turn them into a three-dimensional mesh; the software renders the mesh onto a two-dimensional screen, and the user can rotate the mesh around the z-axis. Some users may make a three-dimensional mesh out of the scans; however, these meshes are rudimentary and time-consuming to produce.
What is needed is a method that allows software to use virtual reality tools to view the inside of virtual object clusters.
The method according to the present disclosure improves upon existing methods of building up voxels and creating volumetric three-dimensional data, such as using two-dimensional images loaded into a three-dimensional mesh-generating program. The mesh-generating program creates a three-dimensional mesh based on the two-dimensional images input into the program and a user-defined predetermined threshold value. Once the mesh is generated, the user can load the mesh into virtual space for evaluation. However, this process can be time consuming and often necessitates repetition before yielding desired results, because it is not done at runtime.
Under the described method, users can create a planar slice tool of arbitrary size. These planar slice tools can be used to view data inside a three-dimensional object mesh.
BRIEF DESCRIPTION OF THE DRAWINGS
The disclosure can be better understood with reference to the following drawings. The elements of the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the disclosure. Furthermore, like reference numerals designate corresponding parts throughout the several views.
FIG. 1 depicts a system for mapping two-dimensional and three-dimensional images according to an exemplary embodiment of the present disclosure.
FIG. 2 illustrates the relationship between three-dimensional assets, the data representing those assets, and the communication between that data and the software, which leads to the representation on the XR platform.
FIG. 3 is a flow diagram depicting the data importation and asset creation process performed by the software.
FIG. 4 is a flow diagram depicting the creation of the voxel representation from two-dimensional images.
FIG. 5 depicts a method of displaying a slice of a voxel model according to an exemplary embodiment of the present disclosure.
FIG. 6 depicts a method for creating a slice with a user-determined thickness.
FIG. 7 depicts a method for creating a spherical slice, according to an embodiment of the present disclosure.
FIG. 8 is a pictorial depiction of a two-dimensional image and a voxel volume.
FIG. 9 depicts a two-dimensional deck of images and a three-dimensional representation of the two-dimensional deck.
FIG. 10 depicts a voxel model shown in front of a plane.
FIG. 11 depicts the voxel model of FIG. 10 having been sliced by the plane.
DETAILED DESCRIPTION
As used herein, the term "XR" is used to describe Virtual Reality, Augmented Reality, or Mixed Reality displays and associated software-based environments. As used herein, "mesh" is used to describe a three-dimensional object in a virtual world, including, but not limited to, systems, assemblies, subassemblies, cabling, piping, landscapes, avatars, molecules, proteins, ligands, or chemical compounds. As used herein, "voxel" is used to describe a value on a regular grid in three-dimensional space. The software determines the position of each voxel from the locations of pixels whose values exceed a user-determined threshold.
FIG. 1 depicts a system 100 for supporting two-dimensional to three-dimensional spatial mapping according to an exemplary embodiment of the present disclosure. The system 100 comprises an input device 110 communicating across a network 120 to a computer 130. The input device 110 may comprise, for example, a keyboard, a switch, a mouse, a joystick, a touch pad, and/or another type of interface that can be used to input data from a user of the system 100. The network 120 may be any type of network or networks known in the art or future-developed, such as the internet backbone, Ethernet, Wi-Fi, WiMax, and the like. The network 120 may be any combination of hardware, software, or both. XR hardware 140 comprises virtual or mixed reality hardware that can be used to visualize a three-dimensional world. A video monitor 150 is used to display a VR space to a user. The input device 110 sends input to the processor 130, which translates that input into an XR event or function call. The input device 110 allows a user to input data to the system 100 by translating user commands into computer commands.
FIG. 2 illustrates the relationship between three-dimensional assets 210, the data representing those assets 220, and the communication between that data and the software, which leads to the representation on the XR platform. The three-dimensional assets 210 may be any three-dimensional assets, which are any set of points that define geometry in three-dimensional space.
The data representing a three-dimensional world 220 is a procedural mesh that may be generated by importing three-dimensional models, images representing two-dimensional data, or other data converted into a three-dimensional format. The software for visualization 230 of the data representing a three-dimensional world 220 allows for the processor 130 (FIG. 1 ) to facilitate the visualization of the data representing a three-dimensional world 220 to be depicted as three-dimensional assets 210 in the XR display 240.
FIG. 3 depicts a method 300 for importing and analyzing data, in accordance with an exemplary embodiment of the present disclosure. In step 310, the user uploads a series of two-dimensional images. This can be done through a graphical user interface, by copying the files into a designated folder, or by another upload method. In step 320, the computer imports the two-dimensional images. In step 330, the software loops through the rows and columns of each two-dimensional image, evaluating each pixel and reading its color values. In step 340, the software compares each individual pixel color value against a specified threshold value. The threshold value can be user-defined or computer-generated. In step 350, the software saves and tracks the locations of pixels with a color value above the specified threshold, or within a range of values. Pixels outside of the desired threshold are ignored, i.e., are not further tracked or displayed. The pixel locations may be saved in a variety of ways, such as in an array associated with a certain image, in a text file, or by other save methods. Adjustments can be made in this step to pixel colors or opacity based on user thresholds.
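The per-pixel threshold pass of steps 330 through 350 can be sketched as follows. This is a hedged, minimal illustration only: the image representation (a 2D list of grayscale values), the threshold value, and the `(row, col, value)` save format are assumptions for the sketch, not the patented implementation.

```python
# Illustrative sketch of the threshold pass in FIG. 3 (steps 330-350).
# Assumes grayscale images stored as 2D lists; pixels below the
# threshold are ignored (not tracked or displayed).

def threshold_image(image, threshold):
    """Return (row, col, value) for every pixel at or above threshold."""
    kept = []
    for row_idx, row in enumerate(image):
        for col_idx, value in enumerate(row):
            if value >= threshold:                       # step 340: compare to threshold
                kept.append((row_idx, col_idx, value))   # step 350: save the location
    return kept

# Tiny illustrative "scan": only the two bright pixels survive.
scan = [
    [0,   10,  200],
    [180, 5,   30],
]
print(threshold_image(scan, 100))  # [(0, 2, 200), (1, 0, 180)]
```

In practice the saved locations could equally be written to a text file or to an array associated with each image, as the text notes; the list-of-tuples form here is just the simplest stand-in.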
FIG. 4 depicts a method 400 of creating voxel representations of a series of images, according to an exemplary embodiment of the present disclosure. In step 410, the 2D images are imported as described in FIG. 3, and the image data is evaluated and saved into two-dimensional textures, so that they may be accessed sequentially and height-adjusted. In step 420, the software creates the voxel volume by using the pixel at a certain location in one of the tiles for the color, and the number of the tile as a reference point for the height of the voxel. For example, if there are 100 images in a deck and each image is spaced 2.5 mm apart, then image 0 is located at a Z-height of 0 and image 99, the last image, is located at a Z-height of 247.5 mm, i.e., 99 spacings of 2.5 mm each.
In step 430, the voxel locations are added to an array of instances. The array can then be used as the basis for locations of each of the meshes that will represent the voxel volume. Each voxel is drawn at corresponding locations in the array. If no height value is given with the image set, the system determines the appropriate height value. In step 440, voxel volumes are spawned at the saved pixel locations.
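Steps 410 through 440 can be sketched as below. This is an illustrative assumption-laden sketch: the per-slice data layout (a list of kept `(row, col)` pixel locations per image) and the `(x, y, z)` tuple output are stand-ins for the instance array the text describes, and the 2.5 mm spacing mirrors the worked example above.

```python
# Illustrative sketch of FIG. 4 (steps 410-440): stack thresholded 2D
# slices into voxel positions, using the slice (tile) index times a
# per-slice spacing for the Z height of each voxel.

def build_voxel_positions(slices, spacing_mm=2.5):
    """slices: list of per-image pixel lists [(row, col), ...].

    Returns (x, y, z) voxel locations; slice k sits at z = k * spacing_mm,
    so the last of N slices is at (N - 1) * spacing_mm.
    """
    instances = []                        # step 430: array of instances
    for k, pixels in enumerate(slices):
        z = k * spacing_mm                # tile number -> voxel height
        for row, col in pixels:
            instances.append((col, row, z))  # x = column, y = row (assumed axes)
    return instances

# 100 slices at 2.5 mm: slice 0 at z = 0, slice 99 at z = 247.5.
deck = [[(0, 0)] for _ in range(100)]
positions = build_voxel_positions(deck)
print(positions[0][2], positions[-1][2])  # 0.0 247.5
```

The resulting array plays the role of the instance array in step 430: each entry is a location at which a voxel mesh is spawned in step 440.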
FIG. 5 depicts a method 500 of displaying a slice of a virtual volume, according to an exemplary embodiment of the present disclosure. In step 510, a plane to be sliced is defined. In this regard, a planar object is used to check against for slicing operations. The planar object could be an object in the world that is shaped like a plane, or it could be some other arbitrary set of points that define a plane. A user uses a tool in VR space to identify the desired location to be sliced, and the user-identified location is translated by the system into a plane.
In step 520, when the plane is in contact with the rendered volume, the system updates the values for the plane's location and the plane normal. In step 530, the system determines whether the location of the currently evaluated voxel is in front of or behind the plane. In this regard, the system software checks the location of the plane and the plane normal against the current voxel's location by using the dot-product of the vector from the plane to the voxel against the normal of the plane in the object's local space.
In step 540, if the dot product computed in step 530 is greater than or equal to zero, then the voxel is in front of the plane and is drawn. In step 550, if the dot product is less than zero, then the voxel is not in front of the plane and is not drawn, i.e., the voxel is discarded, and the next voxel location is checked.
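The front/behind test of steps 530 through 550 reduces to a single signed dot product. The sketch below is a hedged CPU illustration, assuming simple tuple vectors in the object's local space; the helper names are not from the disclosure.

```python
# Illustrative sketch of the planar slice test in FIG. 5: a voxel is
# drawn when the dot product of (voxel - plane point) with the plane
# normal is non-negative (step 540); otherwise it is culled (step 550).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def in_front_of_plane(voxel, plane_point, plane_normal):
    """True if the voxel lies on or in front of the slice plane."""
    to_voxel = tuple(v - p for v, p in zip(voxel, plane_point))
    return dot(to_voxel, plane_normal) >= 0

# Plane through the origin with normal +Z: z >= 0 is "in front".
print(in_front_of_plane((1, 2, 3), (0, 0, 0), (0, 0, 1)))   # True  -> drawn
print(in_front_of_plane((1, 2, -3), (0, 0, 0), (0, 0, 1)))  # False -> culled
```

In the disclosed system this test runs per voxel as the slice plane moves, so only the voxels on the kept side of the plane are ever drawn.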
FIG. 6 depicts a method 600 according to another embodiment of the present disclosure, for creating a slice with a user-determined thickness. In step 610, the user defines the plane to be sliced in the same manner as step 510 of FIG. 5. In step 620, when the plane is in contact with the rendered volume, the software updates the values for the plane's location and the plane normal. In this regard, the software creates a vector from a known point on the plane to the voxel, takes the dot product of that vector with the plane normal, and multiplies the plane normal by the dot product. In step 630, the system determines how far the voxel is from the plane along the plane normal by subtracting this scaled normal from the original voxel position. The result is a point on the plane directly below the voxel.
In step 640, the software determines the voxel location relative to the plane point by determining whether the location of the current voxel is in front of or behind the plane. In step 650, if the current voxel is behind the plane, then the voxel is discarded.
In steps 660 and 670, if the voxel is in front of the plane, the software determines the magnitude of the vector from the projected plane point to the voxel location; if that distance is less than or equal to the user-desired slice thickness, the voxel is drawn in step 680. If the distance from the plane point to the voxel is greater than the desired slice thickness, the voxel is discarded in step 690.
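The thick-slice test of FIG. 6 can be sketched as follows. This is a hedged illustration assuming a unit-length plane normal (so the signed dot product is already a distance in world units); the function and variable names are stand-ins, not the disclosed implementation.

```python
# Illustrative sketch of FIG. 6: project the voxel onto the plane
# (steps 620-630), then draw it only when it is in front of the plane
# AND within the user-desired slice thickness (steps 640-690).

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def in_slab(voxel, plane_point, unit_normal, thickness):
    to_voxel = tuple(v - p for v, p in zip(voxel, plane_point))
    d = dot(to_voxel, unit_normal)          # signed distance along the normal
    if d < 0:
        return False                        # step 650: behind the plane -> discard
    # Steps 620-630: subtract the scaled normal to get the point on the
    # plane directly below the voxel.
    foot = tuple(v - d * n for v, n in zip(voxel, unit_normal))
    # Steps 660-670: magnitude of the vector from the projected point to
    # the voxel (equal to d for a unit normal), compared to the thickness.
    dist = math.sqrt(sum((v - f) ** 2 for v, f in zip(voxel, foot)))
    return dist <= thickness                # step 680 draw / step 690 discard

# 2 mm slab at z = 0: z = 1.5 kept; z = 3 and z = -1 discarded.
print(in_slab((0, 0, 1.5), (0, 0, 0), (0, 0, 1), 2.0))   # True
print(in_slab((0, 0, 3.0), (0, 0, 0), (0, 0, 1), 2.0))   # False
print(in_slab((0, 0, -1.0), (0, 0, 0), (0, 0, 1), 2.0))  # False
```

For a unit normal the projection step is redundant with the dot product itself, but it is kept here because the disclosure describes the slab test in terms of the projected plane point.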
FIG. 7 depicts a method 700 for creating a spherical slice, according to an embodiment of the present disclosure. In step 710, the user defines a sphere. In step 720, when the sphere is in contact with the rendered volume, the software sends the values of the sphere center location into the geometry shader. In step 730, the software creates a vector between the voxel and the sphere center. In step 740, if the distance from the voxel to the sphere center is less than the radius of the sphere, that voxel is drawn. After the voxels are determined to be drawn or not drawn, the image will show the remaining volume with respect to the selected area.
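The spherical slice of FIG. 7 is the simplest of the three tests: keep a voxel when it lies inside the user-defined sphere. In the disclosure this check runs in a geometry shader on the GPU; the sketch below is a hedged CPU-side illustration with assumed tuple coordinates.

```python
# Illustrative sketch of the spherical slice in FIG. 7: a voxel is
# drawn when its distance to the sphere center is less than the
# sphere radius (steps 730-740).

import math

def in_sphere(voxel, center, radius):
    dist = math.dist(voxel, center)   # step 730: length of voxel-to-center vector
    return dist < radius              # step 740: inside the sphere -> drawn

print(in_sphere((1, 1, 1), (0, 0, 0), 2.0))  # True  (distance ~1.73)
print(in_sphere((3, 0, 0), (0, 0, 0), 2.0))  # False (distance 3)
```

After every voxel has been tested, only the voxels inside the sphere remain, which matches the text's statement that the image shows the remaining volume with respect to the selected area.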
FIG. 8 is a pictorial depiction of a two-dimensional image 810 and a voxel volume 820. In this example, the image and volume depict a human pelvis. FIG. 9 depicts a two-dimensional deck 910 of images 810 and a three-dimensional representation (a voxel model) 920 of the two-dimensional deck 910.
FIG. 10 depicts a voxel model 920 shown in front of a plane 1001. FIG. 11 depicts the voxel model 920 of FIG. 10 having been sliced by the plane 1001 using the methods disclosed herein, such that voxels behind the plane have been culled out.

Claims (18)

What is claimed is:
1. A method of visualizing a three-dimensional volume for use in a virtual reality environment, comprising:
uploading two-dimensional images for evaluation;
creating a voxel volume from the two-dimensional images;
defining a slice plane to be used for slicing the voxel volume, the slice plane defining a slice with a user-determined thickness;
determining whether the location of a current voxel is in front of or behind the slice plane, and if the current voxel is in front of the slice plane, drawing the voxel, and if the current voxel is behind the slice plane, not drawing the voxel.
2. The method of claim 1, wherein the step of not drawing the voxel further comprises discarding the voxel.
3. The method of claim 1, wherein the step of not drawing the voxel further comprises assigning the voxel a numerical value of “1” or “0.”
4. The method of claim 1, wherein the step of determining if the location of the current voxel is in front of or behind the slice plane comprises using the dot-product of a vector from the slice plane to the current voxel against a normal of the slice plane.
5. The method of claim 4, wherein if the dot-product of the vector is less than zero, then the voxel is not in front of the slice plane, and the voxel is not drawn.
6. The method of claim 5, wherein if the dot-product of the vector is greater than zero, then the voxel is in front of the slice plane, and the voxel is drawn.
7. The method of claim 1, wherein the step of defining a slice plane to be used for slicing the voxel volume, the slice plane defining a slice with a user-determined thickness comprises using vectors to determine how far away from the slice plane the voxel is compared to a normal of the slice plane.
8. The method of claim 7, wherein the step of using vectors to determine how far away from the slice plane the voxel is compared to the normal of the plane comprises projecting the voxel onto the slice plane to find the location on the slice plane directly below the voxel, and if the voxel is in front of the slice plane and the distance from the slice plane to the voxel is less than or equal to a user-desired slice thickness, draw the voxel.
9. A method of visualizing a three-dimensional volume for use in a virtual reality environment, comprising:
uploading two-dimensional images for evaluation;
creating planar depictions of the two-dimensional images;
creating a voxel volume from the two-dimensional images;
defining a slice plane to be used for slicing the voxel volume, the slice plane defining a slice with a user-determined thickness;
determining whether the location of a current voxel is in front of or behind the slice plane, and if the current voxel is behind the slice plane, drawing the voxel, and if the current voxel is in front of the slice plane, not drawing the voxel.
10. The method of claim 9, wherein the step of determining whether the location of the current voxel is in front of or behind the slice plane comprises using the dot-product of a vector from the slice plane to the current voxel against a normal of the slice plane.
11. The method of claim 9, wherein the step of defining a slice plane to be used for slicing the voxel volume, the slice plane defining a slice with a user-determined thickness comprises using vectors to determine how far away from the slice plane the voxel is compared to a normal of the slice plane.
12. The method of claim 9, wherein the step of not drawing the voxel further comprises assigning the voxel a numerical value of “1” or “0.”
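The front/behind decision of claims 9 and 10 reduces to the sign of a dot product. The sketch below uses hypothetical names and makes one choice the claims leave open: a voxel exactly on the plane is treated as behind it.

```python
def draw_behind(voxel, plane_point, plane_normal):
    """Sketch of claims 9-10: take the dot product of the vector from
    the slice plane to the voxel against the plane normal. A positive
    result means the voxel is in front of the plane (not drawn); a
    non-positive result means it is behind (drawn)."""
    v = tuple(a - b for a, b in zip(voxel, plane_point))
    dot = sum(a * b for a, b in zip(v, plane_normal))
    return dot <= 0
```

Per claim 12, a shader implementation might instead emit a "1" for drawn voxels and a "0" for discarded ones rather than returning a boolean.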
13. A method of visualizing a three-dimensional volume for use in a virtual reality environment, comprising:
uploading two-dimensional images for evaluation;
creating a voxel volume based on the two-dimensional images;
defining a slicing volume to be used for slicing the voxel volume;
when the slicing volume is in contact with the voxel volume, sending values representing all boundaries of the slicing volume to a geometry shader;
if a current voxel in the voxel volume is within the boundaries of the slicing volume, drawing the voxel, and if the current voxel is outside of the boundaries of the slicing volume, not drawing the voxel.
14. The method of claim 13, wherein the step of not drawing the voxel further comprises discarding the voxel.
15. The method of claim 13, wherein the step of not drawing the voxel further comprises assigning the voxel a numerical value of “1” or “0.”
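The boundary test of claims 13-15 can be sketched as follows. The claims only say that boundary values are sent to a geometry shader; the sketch assumes, for illustration, an axis-aligned slicing volume described by its minimum and maximum corners, with hypothetical names throughout.

```python
def inside_slicing_volume(voxel, bounds_min, bounds_max):
    """Sketch of claim 13, assuming an axis-aligned slicing volume:
    the voxel is drawn only if every coordinate lies within the
    corresponding boundary interval."""
    return all(lo <= c <= hi
               for c, lo, hi in zip(voxel, bounds_min, bounds_max))

def voxel_mask(voxel, bounds_min, bounds_max):
    # Claim 15 style: a "1" keeps (draws) the voxel, a "0" discards it.
    return 1 if inside_slicing_volume(voxel, bounds_min, bounds_max) else 0
```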
16. A method of visualizing a three-dimensional volume for use in a virtual reality environment, comprising:
uploading two-dimensional images for evaluation;
creating a voxel volume based on the two-dimensional images;
defining a slicing volume to be used for slicing the voxel volume;
when the slicing volume is in contact with the voxel volume, sending values representing all boundaries of the slicing volume to a geometry shader;
if a current voxel in the voxel volume is outside of the boundaries of the slicing volume, drawing the voxel, and if the current voxel is inside of the boundaries of the slicing volume, not drawing the voxel.
17. The method of claim 16, wherein the step of not drawing the voxel further comprises discarding the voxel.
18. The method of claim 16, wherein the step of not drawing the voxel further comprises assigning the voxel a numerical value of “1” or “0.”
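Claim 16 inverts the test of claim 13: the slicing volume cuts material away instead of isolating it. Both behaviors fit in one sketch with a flag; as before, the axis-aligned volume and all names are illustrative assumptions, not from the patent.

```python
def keep_voxel(voxel, bounds_min, bounds_max, invert=False):
    """Combined sketch of claims 13 and 16, assuming an axis-aligned
    slicing volume. With invert=False the interior is drawn (claim 13);
    with invert=True only voxels outside the slicing volume are drawn,
    carving a cavity out of the voxel volume (claim 16)."""
    inside = all(lo <= c <= hi
                 for c, lo, hi in zip(voxel, bounds_min, bounds_max))
    return inside != invert
```

A voxel at (0.5, 0.5, 0.5) inside a unit-cube slicing volume is drawn under claim 13's behavior and discarded under claim 16's, while a voxel at (2, 0, 0) sees the opposite.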
US17/376,364 2019-04-09 2021-07-15 Geometry buffer slice tool Active US11625890B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/376,364 US11625890B2 (en) 2019-04-09 2021-07-15 Geometry buffer slice tool

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201962831309P 2019-04-09 2019-04-09
US16/844,453 US11069125B2 (en) 2019-04-09 2020-04-09 Geometry buffer slice tool
US17/376,364 US11625890B2 (en) 2019-04-09 2021-07-15 Geometry buffer slice tool

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US16/844,453 Continuation US11069125B2 (en) 2019-04-09 2020-04-09 Geometry buffer slice tool

Publications (2)

Publication Number Publication Date
US20210343068A1 US20210343068A1 (en) 2021-11-04
US11625890B2 true US11625890B2 (en) 2023-04-11

Family

ID=72747525

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/844,453 Active US11069125B2 (en) 2019-04-09 2020-04-09 Geometry buffer slice tool
US17/376,364 Active US11625890B2 (en) 2019-04-09 2021-07-15 Geometry buffer slice tool

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US16/844,453 Active US11069125B2 (en) 2019-04-09 2020-04-09 Geometry buffer slice tool

Country Status (1)

Country Link
US (2) US11069125B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11069125B2 (en) * 2019-04-09 2021-07-20 Intuitive Research And Technology Corporation Geometry buffer slice tool
EP3996050A1 (en) * 2020-11-05 2022-05-11 Koninklijke Philips N.V. Image rendering method for tomographic image data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150087979A1 (en) * 2010-07-19 2015-03-26 QView Medical Inc. Automated breast ultrasound equipment and methods using enhanced navigator aids
US20170263023A1 (en) * 2016-03-08 2017-09-14 Siemens Healthcare Gmbh Methods and systems for accelerated reading of a 3D medical volume
US20200074718A1 (en) * 2018-08-30 2020-03-05 Fuji Xerox Co., Ltd. Three-dimensional object data generation apparatus, three-dimensional object forming apparatus, and non-transitory computer readable medium
US11069125B2 (en) * 2019-04-09 2021-07-20 Intuitive Research And Technology Corporation Geometry buffer slice tool
US20220028154A1 (en) * 2018-09-18 2022-01-27 Seoul National University R&Db Foundation Apparatus and method for reconstructing three-dimensional image

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5782762A (en) * 1994-10-27 1998-07-21 Wake Forest University Method and system for producing interactive, three-dimensional renderings of selected body organs having hollow lumens to enable simulated movement through the lumen
US6961058B2 (en) * 2001-08-10 2005-11-01 Microsoft Corporation Macrostructure modeling with microstructure reflectance slices
US7889194B2 (en) * 2006-03-30 2011-02-15 Siemens Medical Solutions Usa, Inc. System and method for in-context MPR visualization using virtual incision volume visualization
US9058679B2 (en) * 2007-09-26 2015-06-16 Koninklijke Philips N.V. Visualization of anatomical data
US8289320B2 (en) * 2007-10-22 2012-10-16 Samsung Electronics Co., Ltd. 3D graphic rendering apparatus and method
JP5224451B2 (en) * 2008-06-03 2013-07-03 富士フイルム株式会社 Projection image creation apparatus, method and program
KR101202533B1 (en) * 2009-07-30 2012-11-16 삼성메디슨 주식회사 Control device, ultrasound system, method and computer readable medium for providing a plurality of slice images
US8477153B2 (en) * 2011-08-24 2013-07-02 General Electric Company Method and system for navigating, segmenting, and extracting a three-dimensional image
JP5846373B2 (en) * 2012-01-13 2016-01-20 国立研究開発法人海洋研究開発機構 Image processing apparatus, image processing method, image processing program, and image processing system
WO2018186758A1 (en) * 2017-04-07 2018-10-11 Auckland Uniservices Limited System for transmitting and viewing a series of images
US10499879B2 (en) * 2017-05-31 2019-12-10 General Electric Company Systems and methods for displaying intersections on ultrasound images
ES3052996T3 (en) * 2017-06-19 2026-01-16 Lima Usa Inc Method of tracking motion of a body part using a motion sensor and radiographic images to create a virtual dynamic model
US20200175756A1 (en) * 2018-12-03 2020-06-04 Intuitive Research And Technology Corporation Two-dimensional to three-dimensional spatial indexing
US20200219329A1 (en) * 2019-01-09 2020-07-09 Intuitive Research And Technology Corporation Multi axis translation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Chen, Hung-Li Jason, Faramarz F. Samavati, and Mario Costa Sousa. "GPU-based point radiation for interactive volume sculpting and segmentation." The visual computer 24.7 (2008): 689-698. (Year: 2008). *
Huff, Rafael, et al. "Erasing, digging and clipping in volumetric datasets with one or two hands." Proceedings of the 2006 ACM international conference on Virtual reality continuum and its applications. 2006. (Year: 2006). *

Also Published As

Publication number Publication date
US11069125B2 (en) 2021-07-20
US20210343068A1 (en) 2021-11-04
US20200327722A1 (en) 2020-10-15

Similar Documents

Publication Publication Date Title
AU2017213540B2 (en) 3d sculpting
CN107103638B (en) Rapid rendering method of virtual scene and model
EP2786353B1 (en) Methods and systems for capturing and moving 3d models and true-scale metadata of real world objects
JP2022522279A (en) How to merge point clouds to identify and retain priorities
US20040155877A1 (en) Image processing apparatus
US11625890B2 (en) Geometry buffer slice tool
US10403040B2 (en) Vector graphics rendering techniques
CN110503718A (en) Three-dimensional engineering model lightweight display methods
WO2021171118A1 (en) Face mesh deformation with detailed wrinkles
US20240404202A1 (en) Systems and Methods for Retexturizing Mesh-Based Objects Using Point Clouds
GB2370737A (en) 3D-model image generation
US20050190194A1 (en) Using polynomial texture maps for micro-scale occlusions
US6437784B1 (en) Image producing system for three-dimensional pieces
CN106406693B (en) Image chooses method and device
US7904825B2 (en) Graphical user interface for gathering image evaluation information
Callieri et al. Virtual Inspector: a flexible visualizer for dense 3D scanned models
US6967653B2 (en) Apparatus and method for semi-automatic classification of volume data
Vázquez et al. Realtime automatic selection of good molecular views
US11138791B2 (en) Voxel to volumetric relationship
US11941759B2 (en) Voxel build
US11288863B2 (en) Voxel build
US20200175746A1 (en) Volumetric slicer
US12033268B2 (en) Volumetric dynamic depth delineation
US11113868B2 (en) Rastered volume renderer and manipulator
US5821942A (en) Ray tracing through an ordered array

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTUITIVE RESEARCH AND TECHNOLOGY CORPORATION, ALABAMA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JONES, MICHAEL;RUSSELL, KYLE;CROWE, CHANLER;AND OTHERS;REEL/FRAME:056864/0310

Effective date: 20190410

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCF Information on status: patent grant

Free format text: PATENTED CASE