CN110458926B - Three-dimensional virtualization processing method and system for tomograms - Google Patents
- Publication number: CN110458926B
- Application number: CN201910708916.5A
- Authority: CN (China)
- Prior art keywords: transparent layer, dimensional, intersection, transparent, blanking
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
- G06T5/90
Abstract
The invention provides a three-dimensional virtualization processing method and system for tomograms, solving the technical problem that existing virtual reality technology is deficient in the stereoscopic visual expression of three-dimensional virtual objects derived from tomograms. The method comprises the following steps: establishing a transparent layer covering the field angle in an object scene of a virtual reality space, and laying a reference lattice on the transparent layer, the reference points of which are uniformly distributed in a matrix; interactively controlling an object in the object scene and the transparent layer to form an intersection process; and, during the intersection, locally blanking the reference lattice according to the primary object and locally blanking the secondary object according to the reference lattice. Three-dimensional imaging is thereby enhanced for the grayscale-map attributes of an anatomical object formed from tomograms, while the information contained in the grayscale map is kept intact. The blanking positions and blanking contours emphasize the stereoscopic impression of the anatomical object, weakening the flattened two-dimensional impression and the sense of three-dimensional misalignment that arise when the anatomical object is observed continuously.
Description
Technical Field
The invention relates to the technical field of virtual reality, in particular to a three-dimensional virtualization processing method and a three-dimensional virtualization processing system for a tomographic image.
Background
In the prior art, the basic contour features of a specific medical object, such as a tissue or an organ, in a medical tomogram can be obtained using machine learning algorithms such as deep learning, so that basic three-dimensional data and contours of independent tissues or organs are formed by fitting across a series of tomograms. Virtual reality (VR) places high demands on the three-dimensional accuracy of scene objects in the virtual field of view, yet a tomogram is a grayscale image from which the basic three-dimensional data of a medical object is missing, so virtual imaging cannot deliver a good user experience. Specifically: medical objects are often closely intertwined with one another, so they deviate substantially from their standard forms, which makes identifying object details in the virtual field of view harder; detailed observation of a medical object must refer to the forms and relative positions of other medical objects, and observing grayscale objects in the virtual field of view readily produces a flattened two-dimensional impression and a sense of three-dimensional misalignment; and simple pseudo-color processing of the medical object loses image detail, making it hard to realize the manipulation advantages of the virtual field of view.
Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a three-dimensional virtualization processing method and system for tomograms, solving the technical problem that existing virtual reality technology is deficient in the stereoscopic visual expression of three-dimensional virtual objects derived from tomograms.
The three-dimensional virtualization processing method of the tomographic image of the embodiment of the invention comprises the following steps:
establishing a transparent layer covering a field angle in an object scene of a virtual reality space, and laying a reference dot matrix on the transparent layer, wherein reference points in the reference dot matrix are uniformly distributed in a matrix manner;
interactively controlling the object in the object scene and the transparent layer to form an intersection process;
during the intersection, local blanking of the reference lattice is formed from the primary object and local blanking of the secondary object is formed from the reference lattice.
In an embodiment of the present invention, interactively controlling the object in the object scene and the transparent layer to form the intersection process includes:
the density of the reference points during said intersection varies with the proportion of the primary object within the field of view.
In an embodiment of the present invention, interactively controlling the object in the object scene and the transparent layer to form the intersection process includes:
and in the intersecting process, the transparent layer drives the reference point to vibrate with low-frequency transverse waves.
In an embodiment of the present invention, forming local blanking of the reference lattice according to the primary object and local blanking of the secondary object according to the reference lattice during the intersection includes:
confirming the primary object and the secondary object through the object attribute;
adjusting the primary object pose and starting the intersection process of the primary object and the transparent layer;
setting the reference point of the reference lattice as transparent blanking within the formed intersecting surface contour when the main object passes through the transparent layer;
setting a pass-through portion of the secondary object as transparent blanking within the formed intersecting surface contour as the secondary object passes through the transparent layer.
In an embodiment of the present invention, the method further includes:
and filling the cross-sectional image of the corresponding position in the profile of the intersecting surface formed by the secondary object.
In an embodiment of the present invention, the method further includes:
and forming object attribute conversion in the intersection process, and converting the primary object and the secondary object.
In an embodiment of the present invention, the axis of the transparent layer is parallel to the pupil line of sight and moves along with the pupil line of sight, and the axis of the transparent layer intersects with or is perpendicular to a focal plane of a device presenting a virtual reality space.
In an embodiment of the present invention, the transparent layers include two layers and are parallel to each other.
The three-dimensional virtual processing system of the tomographic image of the embodiment of the invention comprises:
a memory for storing program code corresponding to the processing procedure of the three-dimensional virtualization processing method for tomograms;
a processor for executing the program code.
The three-dimensional virtual processing system of the tomographic image of the embodiment of the invention comprises:
the layer setting device is used for setting a transparent layer covering a field angle in an object scene of a virtual reality space, and a reference dot matrix is distributed on the transparent layer, wherein reference points in the reference dot matrix are uniformly distributed in a matrix manner;
the intersection process adjusting device is used for interactively controlling the object in the object scene and the transparent layer to form an intersection process;
and the intersection process control device is used for forming local blanking of the reference dot matrix according to the main object and forming local blanking of the secondary object according to the reference dot matrix in the intersection process.
The three-dimensional virtualization processing method and system for tomograms of the embodiments of the invention enhance three-dimensional imaging for the grayscale-map attributes of an anatomical object formed from tomograms while keeping the information contained in the grayscale map intact. The constructed reference lattice and the continuous intersection-and-blanking of the anatomical object distinguish the primary object from the secondary objects, and the blanking positions and blanking contours highlight the stereoscopic impression of the anatomical object, weakening the flattened two-dimensional impression and the sense of three-dimensional misalignment that arise when its grayscale map is observed continuously. Anatomical objects formed from tomograms, with complex relative positions and drifting occlusion relationships, retain under continuous observation both the independence of three-dimensional perception of each individual object and the consistency of three-dimensional perception among related objects. The reference lattice enhances the three-dimensional impression during continuous observation while guaranteeing minimal loss of grayscale-map information and limiting interference with the observer. The regular planar reference lattice is easy for the observer to visually ignore while the anatomical object is static, and does not compromise the integrity of the grayscale-map information during continuous observation.
Drawings
Fig. 1 is a schematic flow chart illustrating a three-dimensional virtualization processing method for a tomographic image according to an embodiment of the invention.
Fig. 2 is a schematic flow chart illustrating an intersection process in a three-dimensional virtualization processing method of a tomographic image according to an embodiment of the present invention.
Fig. 3 is a schematic diagram illustrating a transparent layer in a three-dimensional virtualization processing method for a tomographic image according to an embodiment of the present invention.
Fig. 4 is a schematic flow chart illustrating local blanking in a three-dimensional virtualization processing method of a tomographic image according to an embodiment of the invention.
Fig. 5 is a schematic diagram showing an intersection process of a three-dimensional virtualization processing method for a tomographic image according to an embodiment of the present invention.
Fig. 6 is a schematic diagram illustrating a three-dimensional virtualization processing system for a tomographic image according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer and more obvious, the present invention is further described below with reference to the accompanying drawings and the detailed description. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 shows a three-dimensional virtualization processing method of a tomogram according to an embodiment of the present invention. As shown in fig. 1, the processing procedure of this embodiment includes:
step 100: and establishing a transparent layer covering the field angle in an object scene of the virtual reality space, and laying a reference dot matrix on the transparent layer, wherein the reference points in the reference dot matrix are uniformly distributed in a matrix.
Those skilled in the art will understand that the virtual reality space corresponds to a determined three-dimensional coordinate space in which a preset object scene can be realized. The object scene is composed of virtual control-class objects and virtual physical-class objects. A control-class object may be a three-dimensional tool image for adjusting object-scene attributes such as brightness, resolution and scene orientation angle, or for manipulating a specific physical-class object; a physical-class object is a three-dimensional image corresponding to the real object being observed, manipulated and deconstructed.
Those skilled in the art will understand that the control-class objects in this embodiment include tools for adjusting the direction, position, angle and zoom of physical-class objects in the scene, as well as tools for enhancing, weakening or enabling scene attributes; control-class objects can provide real-time interactive feedback with the function keys on a physical operating handle. The physical-class objects in this embodiment are formed from three-dimensional data derived from tomograms of specific organs and tissues: the recognizable contours of an organ or tissue in a series of tomograms form three-dimensional contour data from which a corresponding three-dimensional frame is established, and the grayscale map on the three-dimensional frame and on its cross-sections is formed from the grey values of the organ or tissue in the series of tomograms. The grayscale map is kept consistent with the grey values and grey-value changes at specific positions of the organ or tissue in the tomograms, ensuring lossless conversion of the information content of the tomograms.
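The slice-to-volume construction just described can be sketched as follows. This is a minimal illustration assuming NumPy; the thresholding step is only a stand-in for the learned contour recognition mentioned above, and all names are hypothetical, not the patent's implementation.

```python
import numpy as np

def build_volume(slices):
    """Stack a series of tomographic slices (2D grayscale arrays) into a
    3D volume, preserving the original grey values losslessly."""
    return np.stack(slices, axis=0)  # shape: (num_slices, H, W)

def contour_mask(volume, threshold):
    """Hypothetical segmentation: a binary mask marking voxels belonging to
    an organ, by simple thresholding as a stand-in for learned contours."""
    return volume >= threshold

# Example: three 4x4 slices with uniform grey values
slices = [np.full((4, 4), g, dtype=np.uint8) for g in (10, 128, 200)]
vol = build_volume(slices)
mask = contour_mask(vol, 100)
```

The grey values of the stacked volume are untouched, matching the requirement that the grayscale map stay consistent with the source tomograms.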
The transparent layer is located between the pupil and the physical object. It is a planar layer without curvature and completely covers the horizontal field angle (COD) and the vertical field angle (AOC).
step 200: and forming an intersection process between the object in the interactive control object scene and the transparent layer.
The physical object and the transparent layer are interactively controlled through the control object, so that they move relative to one another and intersect. The intersection process is a process in which the object gradually passes through the transparent layer from one end to the opposite end; during it, the whole object may intersect the transparent layer back and forth along the end-to-end direction, or a part of the object may intersect the transparent layer back and forth along one direction. The intersection may take place on a translational degree of freedom or on a rotational degree of freedom, and is generally oriented toward the pupil for easy viewing.
In an embodiment of the invention, the physical object is controlled to approach and intersect the transparent layer.
In an embodiment of the invention, the transparent layer is controlled to approach and intersect the physical object.
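The relative-motion test behind this intersection process can be sketched as below. Modeling the transparent layer as an infinite plane and the object as a vertex cloud is an illustrative assumption; the object is mid-intersection exactly when its vertices straddle the plane.

```python
import numpy as np

def plane_signed_distance(points, plane_point, plane_normal):
    """Signed distance of each vertex to the transparent layer (a plane)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return (points - plane_point) @ n

def intersects_layer(points, plane_point, plane_normal):
    """True while the object straddles the plane (the intersection process)."""
    d = plane_signed_distance(points, plane_point, plane_normal)
    return bool(d.min() < 0 < d.max())

# A unit cube translated toward a layer lying at z = 0
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
layer_pt, layer_n = np.zeros(3), np.array([0.0, 0.0, 1.0])
before = intersects_layer(cube + [0, 0, 2], layer_pt, layer_n)    # fully in front
during = intersects_layer(cube - [0, 0, 0.5], layer_pt, layer_n)  # straddling
```

Either the object or the layer may be the moving party, matching the two embodiments above.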
Step 300: local blanking of the reference lattice is formed from the primary object during the intersection, and local blanking of the secondary object is formed from the reference lattice.
The object attribute here is a physical-object attribute. In this embodiment the attributes are primary object and secondary object; that is, the physical objects comprise a primary object and secondary objects, and the attribute of a physical object can be changed by controlling it, converting a primary object into a secondary object and vice versa.
As will be understood by those skilled in the art, the local blanking of the virtual object is to set a part or the whole of the object to be visually invisible according to the blanking condition, and the virtual object still keeps the object structure data intact.
The three-dimensional virtualization processing method of the tomogram enhances three-dimensional imaging for the grayscale-map attributes of an anatomical object formed from tomograms while keeping the information contained in the grayscale map intact. The constructed reference lattice and the continuous intersection-and-blanking of the anatomical object distinguish the primary object from the secondary objects, and the blanking positions and blanking contours highlight the stereoscopic impression of the anatomical object, weakening the flattened two-dimensional impression and the sense of three-dimensional misalignment that arise when its grayscale map is observed continuously. Anatomical objects formed from tomograms, with complex relative positions and drifting occlusion relationships, retain under continuous observation both the independence of three-dimensional perception of each individual object and the consistency of three-dimensional perception among related objects. The reference lattice enhances the three-dimensional impression during continuous observation while guaranteeing minimal loss of grayscale-map information and limiting interference with the observer. The regular planar reference lattice is easy for the observer to visually ignore while the anatomical object is static, and does not compromise the integrity of the grayscale-map information during continuous observation.
Fig. 2 shows a three-dimensional virtualization processing method for a tomographic image according to an embodiment of the present invention. In fig. 2, on the basis of the above embodiment, the process of intersecting the object with the transparent layer includes:
step 210: the density of the reference points during the intersection varies with the occupancy of the primary object within the field of view.
During the intersection, the primary object forms a projection on the transparent layer; the larger the share of the field of view this projection occupies, the more reference points are needed, i.e. the higher the density of the reference lattice. The resulting density coefficient is positively correlated with the projection's share of the field of view.
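A positively correlated density rule of this kind might be sketched as follows. The linear form and the constants are assumptions for illustration; the patent only requires that density increase with the projection's share of the field of view.

```python
def lattice_density(projection_area, view_area, base_density=10.0, k=40.0):
    """Reference-point density as an increasing function of the primary
    object's projected share of the field of view. base_density and k are
    illustrative tuning constants, not values from the patent."""
    if view_area <= 0:
        raise ValueError("view_area must be positive")
    occupancy = projection_area / view_area   # share of the field of view
    return base_density + k * occupancy
```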
As shown in fig. 2, in an embodiment of the present invention, the process of forming the intersection between the object and the transparent layer further includes:
step 220: and in the intersecting process, the transparent layer drives the reference point to vibrate with low-frequency transverse waves.
The low-frequency transverse-wave vibration forms a water-ripple effect; the frequency is negatively correlated with the rate of relative movement during the intersection, and the single vibration source of the wave is located at the edge of the field of view.
In one embodiment of the present invention, the amplitude of the transverse wave decays gradually from the vibration source to the far edge of the field of view.
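The attenuated low-frequency transverse wave might be modeled as a damped sinusoid radiating from a single source at the field-of-view edge. The exponential envelope and every constant below are illustrative assumptions, not values from the patent.

```python
import math

def envelope(distance, amplitude=1.0, damping=0.3):
    """Amplitude attenuation from the vibration source toward the far edge."""
    return amplitude * math.exp(-damping * distance)

def lattice_offset(distance, t, freq=0.5, speed=2.0):
    """Out-of-plane displacement of a reference point at a given distance
    from the vibration source at time t (low-frequency transverse wave)."""
    return envelope(distance) * math.sin(2 * math.pi * freq * (t - distance / speed))
```

A low `freq` relative to the intersection rate yields the slow ripple described above.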
In the three-dimensional virtualization processing method of this embodiment, the density of the reference points is adjusted as the primary object's share of the field of view changes, so that during the intersection there are always enough reference points at the contour of the instantaneous intersection surface between the primary object and the transparent layer to reflect the contour changes of the primary object. The degree of change of the primary object within the intersection-surface contour is thus rendered in detail, letting an observer visually reconstruct the primary object's contour at the intersection position from the reference points and enhancing the perceived change of the primary object's three-dimensional contour. The low-frequency transverse-wave vibration can form cyclic repetition over segments of the contour change during a complete continuous intersection, for example cycling through the intersection-surface contour at the previous moment, the current moment and the predicted next moment, so that the reference points change cyclically near the primary object's intersection surface, further improving the perceived change of its three-dimensional contour. The attenuation of the transverse wave helps strengthen depth perception of the scene and improves perception of the spatial attitude of the primary object's three-dimensional contour.
The setting of the transparent layer in the three-dimensional virtualization processing method of the tomographic image according to an embodiment of the present invention is shown in fig. 3. In fig. 3, the axis of the transparent layer of an embodiment of the present invention (top schematic view) is perpendicular to the focal plane of the device (e.g., VR glasses) presenting the virtual reality space.
This arrangement simplifies the layout of the transparent layer and the reference lattice, effectively reduces the computational load of the intersection-surface contour during the intersection, and adapts well to translation and rotation of the primary object during the intersection.
The elementary reference points of the reference lattice may be a single pixel or a simple pattern formed by a group of pixels, such as an isolated dot, crossed line segments or a polygon. The matrix distribution of the reference lattice means that adjacent reference points are evenly spaced.
In fig. 3, the axis of the transparent layer in an embodiment of the present invention (the middle and lower schematic top views) is parallel to the pupil line of sight and moves with it, and the axis of the transparent layer intersects with or is perpendicular to the focal plane of the device presenting the virtual reality space.
When the axis of the transparent layer is parallel to the pupil line of sight and follows it, the visible spacing of the matrix-distributed reference points changes correspondingly with their three-dimensional coordinates.
In the three-dimensional virtualization processing method of this embodiment, the transparent layer following the pupil line of sight reduces viewer fatigue during long continuous observation. Provided the intersection-surface contour calculation load can be borne, it also adapts well to translation and rotation of several discrete primary objects during the intersection, which facilitates correlated continuous observation of multiple primary objects.
In an embodiment of the present invention, the transparent layers comprise two layers kept parallel to each other. In one embodiment, each transparent layer carries a reference lattice, and the positions at which the reference points of the first lattice project onto the second transparent layer are staggered relative to the reference points of the second lattice.
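The staggered double-layer lattice can be sketched as two uniform matrices offset by half a cell, so that the first lattice's points project between those of the second. The half-spacing offset is an assumption consistent with "staggered"; the patent does not fix a value.

```python
def make_lattice(nx, ny, spacing, offset=(0.0, 0.0)):
    """Uniform matrix of reference points on one transparent layer
    (adjacent points share the same spacing)."""
    return [(offset[0] + i * spacing, offset[1] + j * spacing)
            for i in range(nx) for j in range(ny)]

layer1 = make_lattice(3, 3, 2.0)                     # first transparent layer
layer2 = make_lattice(3, 3, 2.0, offset=(1.0, 1.0))  # staggered second layer
```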
According to the three-dimensional virtualization processing method for the fault image, disclosed by the embodiment of the invention, the three-dimensional change of the reference point formed by the outline of the double-intersection surface formed by the instant intersection of the main object and the transparent layer in the intersection process is formed by utilizing the two transparent layers, and the key three-dimensional stereo perception factors such as the change amplitude, the change trend, the depth of field and the like of the three-dimensional change are expanded by utilizing the parallel correlation of the two transparent layers.
Fig. 4 shows a local blanking in the three-dimensional virtualization processing method of a tomographic image according to an embodiment of the present invention. In fig. 4, the local blanking process of the embodiment of the present invention includes:
step 310: the primary object and the secondary object are identified by object attributes.
The primary object and the secondary objects are usually combined together according to their true morphology in the tomogram, and each physical object can hold only one object attribute at a time.
Step 320: and adjusting the posture of the main object and starting the intersection process of the main object and the transparent layer.
The adjustment is to adjust the posture of the primary object, and the posture of the secondary object follows the adjustment of the primary object, wherein the adjustment comprises zooming, translation and rotation. The intersection process may also include zoom, pan, and rotate adjustments of the primary object.
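The pose adjustments named here (zoom, translation, rotation) can be composed as homogeneous 4x4 transforms applied to the primary object, with the secondary objects following the same transform. This sketch assumes NumPy and a column-vector convention; it is an illustration, not the patent's implementation.

```python
import numpy as np

def scale(s):
    m = np.eye(4)
    m[0, 0] = m[1, 1] = m[2, 2] = s   # uniform zoom
    return m

def translate(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

def rotate_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

# Compose: rotate, then zoom, then translate (applied right-to-left).
pose = translate(1, 0, 0) @ scale(2.0) @ rotate_z(np.pi / 2)
p = pose @ np.array([1.0, 0.0, 0.0, 1.0])  # one vertex of the primary object
```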
Step 330: and setting the reference point of the reference dot matrix as transparent blanking in the formed intersecting surface contour when the main object passes through the transparent layer.
The transparent blanking of the reference points may be gradual or immediate, for example fading to transparency or vanishing at once.
Step 340: and setting the passing part of the secondary object as transparent blanking in the formed intersecting surface outline when the secondary object passes through the transparent layer.
The transparent blanking of the pass-through portion may be gradual or immediate, for example fading to transparency or vanishing at once.
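Steps 330 and 340 could be sketched as a point-in-contour test that flags reference points (or secondary-object surface points) inside the instantaneous intersection-surface contour as blanked. The circular contour here is a simplifying assumption; a real contour would come from the primary object's cross-section.

```python
import numpy as np

def blank_inside_contour(points, contour_center, contour_radius):
    """Return a visibility mask: False for points inside the intersection
    contour (transparently blanked), True for points still drawn. The
    underlying point data is untouched, so nothing is lost by blanking."""
    d = np.linalg.norm(points - contour_center, axis=1)
    return d > contour_radius

pts = np.array([[0.0, 0.0], [0.5, 0.0], [3.0, 0.0]])  # reference points on the layer
vis = blank_inside_contour(pts, np.array([0.0, 0.0]), 1.0)
```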
In the three-dimensional virtualization processing method of this embodiment, the differentiated treatment of objects during the intersection enhances the observer's three-dimensional visual perception of the primary object of interest and removes the secondary object's interference with its observation, while ensuring sufficient fusion of the primary object's grayscale-map information with its three-dimensional form information, improving observation quality and efficiency.
As shown in fig. 4, in an embodiment of the present invention, the method further includes:
step 350: and filling the tomograms at the corresponding positions in the intersecting surface contour formed by the secondary objects.
This ensures sufficient information correlation between the primary object and the secondary object during the intersection: the grayscale cross-section information of the secondary object is updated along with the intersection process while the primary object is observed continuously.
As shown in fig. 4, in an embodiment of the present invention, the method further includes:
step 360: and forming object attribute conversion in the intersection process, and converting the primary object and the secondary object.
Object-attribute conversion during the intersection improves the observer's flexibility in examining medical objects. It can effectively separate local parts of distorted, intertwined medical objects and obtain the three-dimensional contour and grayscale map of the same medical object from multiple angles, helping the observer understand the medical information of a single medical object and the mutually corroborating information among related medical objects, and extending the means of observing medical objects available in existing virtual reality displays.
Fig. 5 shows a specific three-dimensional virtualization processing method for tomographic images. In fig. 5, the initial positional relationship between the objects and the transparent layer is shown in a side view before intersection; side and front views during intersection indicate the state of each object relative to the transparent layer. The method yields a strong three-dimensional enhancement effect for distinguishing entangled objects and observing the primary object.
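The core blanking behavior described above can be sketched as follows (a simplified illustration under assumed names such as `ReferencePoint`, `blank_for_primary`, and `blank_secondary`, none of which come from the patent): when the primary object crosses the layer, reference points inside the intersection contour are blanked; when a secondary object crosses, its own passing parts are blanked instead, leaving the lattice intact.

```python
from dataclasses import dataclass

@dataclass
class ReferencePoint:
    x: float
    y: float
    visible: bool = True  # transparent blanking sets this False

def make_layer(rows, cols):
    # Reference points laid out as a uniform matrix on the transparent layer.
    return [ReferencePoint(float(c), float(r))
            for r in range(rows) for c in range(cols)]

def blank_for_primary(points, inside_contour):
    # Primary object passing through: reference points falling inside the
    # intersecting-surface contour are set to transparent blanking, so the
    # primary object is observed unobstructed.
    for p in points:
        if inside_contour(p.x, p.y):
            p.visible = False

def blank_secondary(secondary_parts, inside_contour):
    # Secondary object passing through: the object parts inside the contour
    # are blanked instead, keeping the reference lattice intact.
    return [part for part in secondary_parts if not inside_contour(*part)]
```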
The three-dimensional virtualization processing system for tomographic images according to an embodiment of the present invention includes:
a memory for storing program code corresponding to the processing procedure of the three-dimensional virtualization processing method for tomographic images; and
a processor for executing the program code corresponding to that processing procedure.
The processor may be a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), an MCU (Microcontroller Unit) system board, an SoC (System on a Chip) system board, or a PLC (Programmable Logic Controller) minimum system including I/O.
Fig. 6 shows a three-dimensional virtualization processing system for tomographic images according to an embodiment of the present invention. In fig. 6, the system includes:
the layer setting device 1100, configured to establish a transparent layer covering the field angle in an object scene of the virtual reality space, and to lay a reference dot matrix on the transparent layer, the reference points of which are uniformly distributed in a matrix;
the intersection process adjusting device 1200, configured to interactively control the intersection process between an object in the object scene and the transparent layer;
the intersection process control device 1300, configured to form local blanking of the reference dot matrix by the primary object, and local blanking of the secondary object by the reference dot matrix, during the intersection process.
As shown in fig. 6, in an embodiment of the present invention, the intersection process adjusting device 1200 includes:
a reference point adjusting module 1210, configured to change the density of the reference points with the proportion of the primary object within the field of view during the intersection process.
As shown in fig. 6, in an embodiment of the present invention, the intersection process adjusting device 1200 further includes:
a layer adjusting module 1220, configured to drive the reference points to perform low-frequency transverse-wave vibration within the transparent layer during the intersection process.
As shown in fig. 6, in an embodiment of the present invention, the intersection process control device 1300 includes:
an attribute confirmation module 1310, configured to confirm the primary object and the secondary object through the object attributes;
an intersection initialization module 1320, configured to adjust the posture of the primary object and start the intersection process between the primary object and the transparent layer;
a primary object control module 1330, configured to set the reference points of the reference dot matrix inside the formed intersecting-surface contour to transparent blanking when the primary object passes through the transparent layer;
a secondary object control module 1340, configured to set the passing portion of the secondary object inside the formed intersecting-surface contour to transparent blanking when the secondary object passes through the transparent layer.
As shown in fig. 6, in an embodiment of the present invention, the intersection process control device 1300 further includes:
a section forming module 1350, configured to fill the tomographic image at the corresponding position into the intersecting-surface contour formed by the secondary object.
As shown in fig. 6, in an embodiment of the present invention, the intersection process control device 1300 further includes:
an attribute transformation module 1360, configured to perform object attribute conversion during the intersection process, exchanging the primary object and the secondary object.
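The behavior of modules 1210 and 1220 can be sketched as follows (a hypothetical illustration; the specific spacing formula, the 0.5 factor, and the function names are assumptions not given in the patent): reference-point density grows with the primary object's share of the field of view, and each reference point oscillates transversely within the layer plane at low frequency.

```python
import math

def reference_point_spacing(base_spacing, primary_fov_ratio):
    # Hypothetical mapping for module 1210: as the primary object occupies
    # a larger proportion of the field of view, reference-point spacing
    # shrinks (density grows). The 0.5 factor is illustrative only.
    ratio = min(max(primary_fov_ratio, 0.0), 1.0)
    return base_spacing * (1.0 - 0.5 * ratio)

def transverse_wave_offset(amplitude, freq_hz, t, phase=0.0):
    # Sketch for module 1220: low-frequency transverse-wave displacement
    # of a reference point within the layer plane at time t, intended to
    # reinforce the observer's depth perception of the layer.
    return amplitude * math.sin(2.0 * math.pi * freq_hz * t + phase)
```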
The above description covers only preferred embodiments of the present invention; the scope of the present invention is not limited thereto. Any changes or substitutions readily conceivable by those skilled in the art within the technical scope disclosed herein fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the claims.
Claims (8)
1. A three-dimensional virtualization processing method for tomographic images, comprising:
establishing a transparent layer covering the field angle in an object scene of a virtual reality space, wherein the axis of the transparent layer is parallel to the pupil line of sight and moves with it, the axis of the transparent layer intersects or is perpendicular to the focal plane of the device presenting the virtual reality space, a reference dot matrix is laid on the transparent layer, and the reference points of the reference dot matrix are uniformly distributed in a matrix;
interactively controlling an object in the object scene and the transparent layer to form an intersection process; and
forming local blanking of the reference dot matrix by a primary object, and local blanking of a secondary object by the reference dot matrix, during the intersection process, comprising:
confirming the primary object and the secondary object through object attributes;
adjusting the posture of the primary object and starting the intersection process between the primary object and the transparent layer;
setting the reference points of the reference dot matrix inside the formed intersecting-surface contour to transparent blanking when the primary object passes through the transparent layer; and
setting the passing portion of the secondary object inside the formed intersecting-surface contour to transparent blanking when the secondary object passes through the transparent layer.
2. The three-dimensional virtualization processing method for tomographic images according to claim 1, wherein interactively controlling the object in the object scene and the transparent layer to form the intersection process comprises:
changing the density of the reference points with the proportion of the primary object within the field of view during the intersection process.
3. The three-dimensional virtualization processing method for tomographic images according to claim 2, wherein interactively controlling the object in the object scene and the transparent layer to form the intersection process comprises:
driving, by the transparent layer, the reference points to perform low-frequency transverse-wave vibration during the intersection process.
4. The three-dimensional virtualization processing method of a tomographic image according to claim 1, further comprising:
and filling the cross-sectional image of the corresponding position in the profile of the intersecting surface formed by the secondary object.
5. The three-dimensional virtualization processing method of a tomographic image according to claim 1, further comprising:
and forming object attribute conversion in the intersection process, and converting the primary object and the secondary object.
6. The three-dimensional virtualization processing method for tomographic images according to claim 1, wherein the transparent layer comprises two layers that remain parallel to each other.
7. A three-dimensional virtualization processing system for tomographic images, comprising:
a memory for storing program code corresponding to the processing procedure of the three-dimensional virtualization processing method for tomographic images according to any one of claims 1 to 6; and
a processor for executing the program code.
8. A three-dimensional virtualization processing system for tomographic images, comprising:
a layer setting device for establishing a transparent layer covering the field angle in an object scene of a virtual reality space, wherein the axis of the transparent layer is parallel to the pupil line of sight and moves with it, the axis of the transparent layer intersects or is perpendicular to the focal plane of the device presenting the virtual reality space, a reference dot matrix is laid on the transparent layer, and the reference points of the reference dot matrix are uniformly distributed in a matrix;
an intersection process adjusting device for interactively controlling an object in the object scene and the transparent layer to form an intersection process; and
an intersection process control device for forming local blanking of the reference dot matrix by a primary object, and local blanking of a secondary object by the reference dot matrix, during the intersection process;
wherein the intersection process control device comprises:
an attribute confirmation module for confirming the primary object and the secondary object through object attributes;
an intersection initialization module for adjusting the posture of the primary object and starting the intersection process between the primary object and the transparent layer;
a primary object control module for setting the reference points of the reference dot matrix inside the formed intersecting-surface contour to transparent blanking when the primary object passes through the transparent layer; and
a secondary object control module for setting the passing portion of the secondary object inside the formed intersecting-surface contour to transparent blanking when the secondary object passes through the transparent layer.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011144009.1A CN112258612B (en) | 2019-08-01 | 2019-08-01 | Method and system for observing virtual anatomical object based on tomogram |
CN201910708916.5A CN110458926B (en) | 2019-08-01 | 2019-08-01 | Three-dimensional virtualization processing method and system for tomograms |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910708916.5A CN110458926B (en) | 2019-08-01 | 2019-08-01 | Three-dimensional virtualization processing method and system for tomograms |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011144009.1A Division CN112258612B (en) | 2019-08-01 | 2019-08-01 | Method and system for observing virtual anatomical object based on tomogram |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110458926A CN110458926A (en) | 2019-11-15 |
CN110458926B true CN110458926B (en) | 2020-11-20 |
Family
ID=68484556
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910708916.5A Active CN110458926B (en) | 2019-08-01 | 2019-08-01 | Three-dimensional virtualization processing method and system for tomograms |
CN202011144009.1A Active CN112258612B (en) | 2019-08-01 | 2019-08-01 | Method and system for observing virtual anatomical object based on tomogram |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011144009.1A Active CN112258612B (en) | 2019-08-01 | 2019-08-01 | Method and system for observing virtual anatomical object based on tomogram |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN110458926B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006072577A (en) * | 2004-08-31 | 2006-03-16 | Sega Corp | Image processor, image processing method, and image processing program |
CN101674776A (en) * | 2007-01-10 | 2010-03-17 | 剑桥有限公司 | Be used to obtain the equipment and the method for faultage image |
WO2011094543A1 (en) * | 2010-01-28 | 2011-08-04 | Weinberg Medical Physics Llc | Reconstruction of linearly moving objects with intermittent x-ray sources |
CN102930602A (en) * | 2012-10-20 | 2013-02-13 | 西北大学 | Tomography-image-based facial skin three-dimensional surface model reconstructing method |
CN105739093A (en) * | 2014-12-08 | 2016-07-06 | 北京蚁视科技有限公司 | See-through type augmented reality near-eye display |
CN106327532A (en) * | 2016-08-31 | 2017-01-11 | 北京天睿空间科技股份有限公司 | Three-dimensional registering method for single image |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102007057208A1 (en) * | 2007-11-15 | 2009-05-28 | Spatial View Gmbh | Method for displaying image objects in a virtual three-dimensional image space |
CN102222352B (en) * | 2010-04-16 | 2014-07-23 | 株式会社日立医疗器械 | Image processing method and image processing apparatus |
ITTO20111150A1 (en) * | 2011-12-14 | 2013-06-15 | Univ Degli Studi Genova | PERFECT THREE-DIMENSIONAL STEREOSCOPIC REPRESENTATION OF VIRTUAL ITEMS FOR A MOVING OBSERVER |
JP5519753B2 (en) * | 2012-09-28 | 2014-06-11 | 富士フイルム株式会社 | Tomographic image generating apparatus and method |
US10713760B2 (en) * | 2015-12-31 | 2020-07-14 | Thomson Licensing | Configuration for rendering virtual reality with an adaptive focal plane |
CN106293082A (en) * | 2016-08-05 | 2017-01-04 | 成都华域天府数字科技有限公司 | A kind of human dissection interactive system based on virtual reality |
CN107895400A (en) * | 2017-11-09 | 2018-04-10 | 深圳赛隆文化科技有限公司 | A kind of three-dimensional cell domain object of virtual reality renders analogy method and device |
CN108511043B (en) * | 2018-02-27 | 2022-06-03 | 华东师范大学 | X-CT virtual data acquisition and image reconstruction method and system based on numerical simulation |
CN111467801B (en) * | 2020-04-20 | 2023-09-08 | 网易(杭州)网络有限公司 | Model blanking method and device, storage medium and electronic equipment |
2019
- 2019-08-01 CN CN201910708916.5A patent/CN110458926B/en active Active
- 2019-08-01 CN CN202011144009.1A patent/CN112258612B/en active Active
Non-Patent Citations (3)
Title |
---|
Extraction of Any Angle Virtual Slice on 3D CT Image; Zhanli Hu; 2008 Second International Symposium on Intelligent Information Technology Application; 2009-01-06; pp. 356-360 *
Implementation of three-dimensional reconstruction from industrial CT tomographic image sequences; Zhao Junhong et al.; Microcomputer Development; 2003-07; Vol. 13, No. 7; pp. 20-21, 50 *
Blanking techniques in the drawing process of three-dimensional reconstruction from serial slices; Li Huaqing et al.; Graphics, Image and Multimedia; 2006, No. 3; pp. 41-43 *
Also Published As
Publication number | Publication date |
---|---|
CN112258612B (en) | 2022-04-22 |
CN110458926A (en) | 2019-11-15 |
CN112258612A (en) | 2021-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10929656B2 (en) | Method and system of hand segmentation and overlay using depth data | |
US10622111B2 (en) | System and method for image registration of multiple video streams | |
CN109564471B (en) | Distributed interactive medical visualization system with primary/secondary interaction features | |
US20140285641A1 (en) | Three-dimensional display device, three-dimensional image processing device, and three-dimensional display method | |
US5715836A (en) | Method and apparatus for planning and monitoring a surgical operation | |
JP3478606B2 (en) | Stereoscopic image display method and apparatus | |
EP0646263B1 (en) | Computer graphic and live video system for enhancing visualisation of body structures during surgery | |
EP2568355A2 (en) | Combined stereo camera and stereo display interaction | |
US8520027B2 (en) | Method and system of see-through console overlay | |
JP2016511888A (en) | Improvements in and on image formation | |
WO2007078581A1 (en) | Analyzing radiological images using 3d stereo pairs | |
CN109255843A (en) | Three-dimensional rebuilding method, device and augmented reality AR equipment | |
KR20200144097A (en) | Light field image generation system, image display system, shape information acquisition server, image generation server, display device, light field image generation method and image display method | |
WO2020145826A1 (en) | Method and assembly for spatial mapping of a model, such as a holographic model, of a surgical tool and/or anatomical structure onto a spatial position of the surgical tool respectively anatomical structure, as well as a surgical tool | |
KR101454780B1 (en) | Apparatus and method for generating texture for three dimensional model | |
KR101929656B1 (en) | Method for the multisensory representation of an object and a representation system | |
CN110458926B (en) | Three-dimensional virtualization processing method and system for tomograms | |
US20180213215A1 (en) | Method and device for displaying a three-dimensional scene on display surface having an arbitrary non-planar shape | |
JPH1176228A (en) | Three-dimensional image construction apparatus | |
KR101339452B1 (en) | Virtual arthroscope surgery simulation apparatus | |
CN113397705A (en) | Fracture reduction navigation method and system | |
CN103162708B (en) | Navigation system with improved map denotation | |
JP7393842B1 (en) | Support system, support device, supported device | |
KR20180016823A (en) | Apparatus for correcting image and method using the same | |
Wang et al. | 68‐1: A 3D Augmented Reality Training System for Endoscopic Surgery |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||