CN109147061B - Method for carrying out human-computer interaction editing on segmentation result of volume data - Google Patents

Method for carrying out human-computer interaction editing on segmentation result of volume data

Info

Publication number
CN109147061B
CN109147061B (application CN201810801542.7A)
Authority
CN
China
Prior art keywords
volume data
triangular
grid
triangular mesh
dimensional
Prior art date
Legal status
Active
Application number
CN201810801542.7A
Other languages
Chinese (zh)
Other versions
CN109147061A (en)
Inventor
陈莉
雍俊海
宋艺博
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date: 2018-07-20
Filing date: 2018-07-20
Publication date: 2022-04-01
Application filed by Tsinghua University
Priority to CN201810801542.7A
Publication of CN109147061A
Application granted
Publication of CN109147061B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tessellation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method for carrying out human-computer interaction editing on a segmentation result of volume data, and belongs to the technical field of digital medicine. The method comprises the following steps: generating a triangular mesh model from the volume data segmentation result to be edited and rendering it as a surface; volume-rendering the volume data; blending the volume rendering and surface rendering results into a unified view; recording the user's mouse stroke, intersecting its extruded surface with the volume data by ray casting to obtain a cross-section mesh, and displaying the volume data gray scale on it; intersecting the cross-section mesh with the segmentation result mesh to obtain an intersection line; flattening the cross-section mesh and the intersection line, and interactively editing the intersection line in the resulting two-dimensional view; and establishing a triangular mesh optimization model based on the volume data gradient, then decomposing and iteratively optimizing its objective function according to the edited intersection line to obtain the optimized triangular mesh vertex coordinates, which are the result of the interactive edit. The method improves both the segmentation precision and the interactive editing precision of volume data.

Description

Method for carrying out human-computer interaction editing on segmentation result of volume data
Technical Field
The invention relates to a method for carrying out human-computer interaction editing on a segmentation result of volume data, and belongs to the technical field of digital medicine.
Background
In digital medicine, volume data of a patient are acquired by diagnostic means such as CT and MRI, and the target region must be obtained by volume data segmentation, which supports further analysis and diagnosis. Because medical volume data are complex, existing automatic segmentation techniques generalize poorly and their results are often flawed, so the segmentation result must be further edited with human-computer interaction techniques. Interactive editing of three-dimensional volume data is difficult, however, and the existing technology does not handle well the high degrees of freedom and poor interaction precision of three-dimensional interaction. Editing a volume data segmentation result efficiently therefore requires a well-designed interaction method. In the interactive volume data segmentation adopted by software such as Seg3D, the user marks the segmentation target on two-dimensional slices of the volume with a brush; completing a segmentation task requires marking a large number of slices, so interaction efficiency is low. Alternatively, the automatic segmentation provided by such software may yield results with substantial errors. Existing interactive volume data segmentation methods based on mesh editing express the segmentation result as a triangular mesh model, and either construct the mesh through user interaction to perform the segmentation or edit the mesh shape interactively to edit the segmentation result. These methods do not fully use the volume data information when computing the mesh shape, depend strongly on user interaction, and may need additional interactions to reach a good segmentation result. Existing contour-editing methods, adopted by software such as TurtleSeg, select planar slices of the volume on which the user marks contour lines of the segmentation target as constraints for mesh generation; surface reconstruction over several marked contours then yields the mesh model of the segmentation result. These methods fall short in both accuracy and efficiency, since many contours must be marked to capture the details of the segmentation result. A related method is described in Ijiri T, Yokota H. Contour-based interface for refining volume segmentation. Computer Graphics Forum, 2010, 29(7): 2153-2160.
Existing interactive volume data segmentation based on two-dimensional slice interaction requires the operator to mark a large number of slices to obtain a good result; interaction efficiency is low, and segmenting one dataset can take several hours. Mesh-based segmentation methods make little use of the volume data information and depend heavily on user interaction; lacking the volume data as a constraint, they consider only the mesh shape, so segmentation precision is low. Surface reconstruction from two-dimensional contour lines has difficulty segmenting fine structures accurately and requires more user interactions.
Disclosure of Invention
The invention aims to provide a method for carrying out human-computer interaction editing on a segmentation result of volume data, so as to overcome the inability of the prior art to combine the advantages of three-dimensional and two-dimensional interaction and to make full use of the volume data information, and thereby to improve interaction efficiency and segmentation precision.
The invention provides a method for carrying out human-computer interaction editing on a segmentation result of volume data, which comprises the following steps:
(1) reading the volume data segmentation result, stored as binary data, from the computer, and performing iso-surface extraction on it with the marching cubes method to obtain a triangular mesh model M_0 of the segmentation result; M_0 represents the shape and volume characteristics of the segmentation target, and the target iso-value of the marching cubes extraction is 0.5; storing M_0 as vertex coordinates and triangle patch indices; surface-rendering M_0 by GPU rasterization and recording the volume data coordinates of every patch position in the triangular mesh, to obtain a surface-rendered two-dimensional image;
(2) reading the volume data corresponding to the segmentation result of step (1) from the computer and volume-rendering it by ray casting: according to the observer position, casting a ray from the spatial position corresponding to each screen pixel into the volume data along the viewing direction; testing each ray against the volume data coordinates of the patch positions recorded in step (1), and if the ray hits a triangular mesh patch of step (1), terminating it; otherwise intersecting the ray with the volume data to obtain a line segment, taking sampling points at equal intervals along that segment, reading the volume data gray values at the sampling points, and accumulating them along the ray direction to obtain the brightness accumulated as the ray traverses the volume, from which the color value of each screen pixel is derived, yielding the volume-rendered two-dimensional image;
(3) in the same view, blending the surface-rendered two-dimensional image of step (1) and the volume-rendered two-dimensional image of step (2) by transparency (alpha) blending to obtain a unified display image;
(4) recording the stroke line drawn with the mouse by the user in the unified display image of step (3) and storing it as a sequence of screen-pixel coordinate points; connecting consecutive coordinate points of the stroke into a polyline; using the ray casting method of step (2), extending rays from the screen pixels of all polyline points along the observer's viewing direction into the screen, obtaining a set of parallel rays; intersecting these rays with the volume data of step (2) to obtain parallel line segments, each pair of adjacent segments forming a quadrilateral; connecting the quadrilaterals in the order of the stroke's coordinate points to obtain an extruded surface, stored as a triangular mesh and denoted the cross-section mesh M_S; intersecting M_S with the volume data and, from the M_S vertex coordinates, obtaining the corresponding volume data coordinates and hence the volume data gray values, and displaying the volume data gray scale on M_S;
(5) taking one triangle patch from the cross-section mesh M_S of step (4) and one from the triangular mesh model M_0 of step (1), and performing a spatial intersection test on the two corresponding triangles to obtain the three-dimensional coordinates and barycentric coordinates of the two endpoints of the segment where the patches intersect; traversing all patch pairs of the two triangular meshes to obtain all intersection segments, and connecting them into continuous polylines according to their spatial adjacency, denoted the intersection line C_0 of the cross-section mesh M_S and the triangular mesh model M_0;
(6) laying out the sequentially connected quadrilaterals of the cross-section mesh M_S of step (4) on a two-dimensional plane, in the order of the stroke coordinate points of step (4), to obtain the planar cross-section mesh P_S; surface-rendering P_S by GPU rasterization to obtain an image displaying the volume data gray scale on P_S; placing the intersection line C_0 of step (5) onto P_S with its barycentric coordinates on each triangle patch unchanged, to obtain the planar intersection line C_P; displaying the planar cross-section mesh P_S and the planar intersection line C_P together in the same view, denoted the two-dimensional view;
(7) comparing the visual boundary of the volume data gray scale on the planar cross-section mesh in the two-dimensional view of step (6) with the position of the planar intersection line C_P; where the intersection line deviates from the boundary of the segmentation target of step (1), moving its points with the mouse onto that boundary, obtaining the intersection line C_N and the barycentric coordinates of its points;
(8) obtaining the three-dimensional coordinates C of the points on C_N from the barycentric coordinates of the points on the intersection line C_N of step (7) and the vertex coordinates of the cross-section mesh M_S of step (4);
(9) establishing a triangular mesh optimization model based on the volume data gradient, with the objective function E:
E = ‖L(V′) − T(V′)δ_0‖² + ω‖MV′ − C‖² − κG(V′)
wherein V′ is the independent variable of the objective function, i.e., the optimized vertex coordinates of the triangular mesh model M_0, and V′ is obtained by minimizing the objective function E,
‖L(V′) − T(V′)δ_0‖² is the Laplacian shape constraint, where L is the Laplacian coordinate operator; for a vertex v_i with the d_i neighbors N(i), it takes the standard uniform form

L(v_i) = v_i − (1/d_i) Σ_{j∈N(i)} v_j

L(V′) is the Laplacian coordinate of the optimized triangular mesh model M_0, δ_0 is the Laplacian coordinate of the initial mesh M_0, and T(V′) is a local transformation of the vertices expressed as a linear combination of the vertex coordinates;
ω‖MV′ − C‖² is the user-interaction constraint, where ω is a weight parameter and C is the matrix of three-dimensional coordinates of the points on the intersection line C_N obtained in step (8); M is a parameter matrix: each point of C is expressed as a linear combination of the vertex coordinates of the triangular mesh model M_0, and writing these linear combinations in matrix form yields M;
−κG(V′) is the volume data gradient constraint, where κ is the weight parameter of this term, G is the volume data gradient function, and G(V′) is the volume data gradient magnitude at the vertices V′; the gradient function G is evaluated numerically by finite differences;
(10) rewriting the objective function E of step (9) as E′:
E′ = ‖L(V′) − T(V′)δ_0‖² + ω‖MV′ − C‖² − κG(P) + θ‖V′ − P‖²
wherein P is an auxiliary variable representing the vertex coordinates of the triangular mesh model M_0 during optimization, and θ is a relaxation variable, the weight coefficient of the ‖V′ − P‖² term, which is gradually increased during optimization;
(11) extracting the terms of E′ of step (10) that involve V′ yields optimization sub-problem S1: given the auxiliary variable P, solve for the vertex coordinates V′, with objective

min_{V′} ‖L(V′) − T(V′)δ_0‖² + ω‖MV′ − C‖² + θ‖V′ − P‖²
sub-problem S1 is a linear system, which is solved by linear least squares to obtain and update V′;
(12) extracting the terms of E′ of step (10) that involve P yields optimization sub-problem S2: given the vertex coordinates V′, solve for the auxiliary variable P, with objective

min_P −κG(P) + θ‖V′ − P‖²
sub-problem S2 is a gradient optimization problem whose objective is independent for each vertex; it is solved by a neighborhood search at each vertex of the triangular mesh M_0, traversing all vertices of M_0 to obtain and update P;
(13) setting the maximum value of the relaxation variable θ to θ_max and, at initialization, setting θ to θ_0 and taking V′ = V_0, where V_0 are the vertex coordinates of the triangular mesh M_0 of step (1); then the iteration starts:
(13-1) let P = V′; traverse the vertices of the triangular mesh M_0, performing a neighborhood search at each, and solve sub-problem S2 to obtain and update P;
(13-2) with the P of step (13-1), solve sub-problem S1 by linear least squares to obtain and update V′;
increase the value of θ and repeat steps (13-1) and (13-2), testing the relaxation variable θ each time: if θ ≥ θ_max, stop iterating and proceed to step (14); if θ < θ_max, return to (13-1);
(14) displaying the triangular mesh corresponding to the vertex coordinates V′ of step (13); this mesh is the result of one round of human-computer interactive editing of the volume data segmentation result.
The method provided by the invention for carrying out human-computer interaction editing on the segmentation result of volume data has the following advantages:
according to the method, the construction body drawing and the surface drawing are combined and are uniformly displayed in an interactive environment, so that the segmentation target is convenient to identify; the segmentation method based on stroke drawing is used for solving the intersecting lines of the segmentation results, so that the segmentation method can adapt to the segmentation targets in different shapes, and the method is favorable for more intuitively confirming the problems in the edited segmentation results; the three-dimensional slice is flattened to generate a two-dimensional slice, so that interaction can be performed more accurately, and the interactive editing precision is improved; the two-dimensional editing result is transmitted to the three-dimensional by the grid optimization algorithm based on the volume data gradient, so that the number of user interaction is saved, and meanwhile, the partition grid boundary is close to the volume data boundary, so that the partition precision is improved.
Detailed Description
The invention provides a method for carrying out human-computer interaction editing on a segmentation result of volume data, which comprises the following steps:
(1) Read the volume data segmentation result, stored as binary data, from the computer, and perform iso-surface extraction on it with the marching cubes method to obtain the triangular mesh model M_0 of the segmentation result; M_0 represents the shape and volume characteristics of the segmentation target. The input segmentation result is stored as binary data: each position holds 0 or 1 according to whether it belongs to the segmentation target, 0 meaning it does not and 1 meaning it does, so the three-dimensional object composed of all 1-positions is the segmentation result represented by the data, and the target iso-value of the marching cubes extraction is therefore set to 0.5. Store M_0 as vertex coordinates and triangle patch indices. Surface-render M_0 by rasterization on a graphics processing unit (GPU); during surface rendering, occlusion is resolved with the GPU depth buffer to obtain a correct rendering, and the volume data coordinates of every patch position in the triangular mesh are recorded. The result is a surface-rendered two-dimensional image that can be displayed on screen.
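A minimal sketch of the iso-surface extraction of step (1), assuming the binary segmentation result is already loaded as a 3D array of 0/1 values; scikit-image's marching cubes stands in for a hand-written implementation, and the returned (vertices, patch indices) pair is exactly the storage format described above:

```python
import numpy as np
from skimage import measure

def extract_segmentation_mesh(seg):
    """Iso-surface of a binary segmentation volume at iso-value 0.5 (step 1).

    seg: 3D numpy array of 0/1 labels (the binary segmentation result).
    Returns vertex coordinates V0 of shape (n, 3) and triangle patch
    indices F of shape (m, 3), i.e. the M_0 storage described in the text.
    """
    verts, faces, _normals, _values = measure.marching_cubes(
        seg.astype(np.float32), level=0.5)
    return verts, faces

# usage sketch: V0, F = extract_segmentation_mesh(np.load("seg.npy"))
```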
(2) Read the volume data corresponding to the segmentation result of step (1) from the computer and volume-render it by ray casting: according to the observer position, cast a ray from the spatial position corresponding to each screen pixel into the volume data along the viewing direction. Test each ray against the volume data coordinates of the patch positions recorded in step (1); if the ray hits a triangular mesh patch of step (1), terminate it. Otherwise, intersect the ray with the volume data to obtain a line segment, take sampling points at equal intervals along that segment, read the volume data gray values at the sampling points, and accumulate them along the ray direction to obtain the brightness accumulated as the ray traverses the volume; the color value of each screen pixel is derived from that brightness, yielding the volume-rendered two-dimensional image, which may be displayed on screen.
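A toy sketch of the gray-value accumulation of step (2), under simplifying assumptions not in the text: orthographic rays along the z axis, nearest-neighbour sampling, mean accumulation, and no early termination against the segmentation mesh patches; the function name and parameters are illustrative:

```python
import numpy as np

def volume_render_orthographic(vol, n_samples=256):
    """Cast one ray per (x, y) pixel straight through the volume, sample the
    gray values at equal intervals along the ray, accumulate them into a
    brightness value, and map brightness to a pixel color value."""
    nx, ny, nz = vol.shape
    zs = np.linspace(0, nz - 1, n_samples)            # equally spaced samples
    idx = np.clip(zs.astype(int), 0, nz - 1)          # nearest-neighbour depth
    brightness = vol[:, :, idx].mean(axis=2)          # accumulate along rays
    return brightness / (brightness.max() + 1e-12)    # brightness -> color
```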
(3) In the same view, blend the surface-rendered two-dimensional image of step (1) and the volume-rendered two-dimensional image of step (2) by transparency (alpha) blending to obtain a unified display image.
(4) Record the stroke line drawn with the mouse by the user in the unified display image of step (3) and store it as a sequence of screen-pixel coordinate points; connect consecutive coordinate points of the stroke into a polyline. By observing the unified display image, the polyline marks the part of the segmentation result that needs editing, and the cross-section cut through the three-dimensional volume data follows the shape of that part. Using the ray casting method of step (2), extend rays from the screen pixels of all polyline points along the observer's viewing direction into the screen, obtaining a set of parallel rays; intersect these rays with the volume data of step (2) to obtain parallel line segments, each pair of adjacent segments forming a quadrilateral. Connect the quadrilaterals in the order of the stroke's coordinate points to obtain an extruded surface, stored as a triangular mesh and denoted the cross-section mesh M_S. Intersect M_S with the volume data and, from the M_S vertex coordinates, obtain the corresponding volume data coordinates and hence the volume data gray values, and display the volume data gray scale on M_S. Observing the cross-section mesh reveals the gray-scale distribution of the data, and the segmentation target corresponding to the segmentation result can be distinguished by its gray scale.
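A sketch of the extrusion of step (4), assuming the entry and exit points of each stroke ray into and out of the volume were already found by the ray casting of step (2); splitting each quadrilateral into two triangle patches is an illustrative choice:

```python
import numpy as np

def extrude_stroke(entry_pts, exit_pts):
    """Build the cross-section mesh M_S from a stroke (step 4).

    entry_pts, exit_pts: (k, 3) arrays holding, for each stroke point, the
    3D points where its ray enters and leaves the volume (the parallel line
    segments). Adjacent segments form quadrilaterals, chained in stroke
    order and stored as a triangular mesh."""
    k = len(entry_pts)
    verts = np.vstack([entry_pts, exit_pts])  # vertex i enters, i + k exits
    faces = []
    for i in range(k - 1):
        a, b = i, i + 1                       # entry points of segments i, i+1
        c, d = i + k, i + 1 + k               # exit points of segments i, i+1
        faces.append((a, b, c))               # two triangle patches
        faces.append((b, d, c))               #   per quadrilateral
    return verts, np.array(faces)
```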
(5) Take one triangle patch from the cross-section mesh M_S of step (4) and one from the triangular mesh model M_0 of step (1), and perform a spatial intersection test on the two corresponding triangles to obtain the three-dimensional coordinates and barycentric coordinates of the two endpoints of the segment where the patches intersect. Traverse all patch pairs of the two triangular meshes to obtain all intersection segments, and connect them into continuous polylines according to their spatial adjacency, denoted the intersection line C_0 of the cross-section mesh M_S and the triangular mesh model M_0.
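A sketch of the traversal and chaining of step (5). The pairwise test is delegated to a helper tri_tri_segment, assumed here to return the two endpoints of the intersection segment of two triangles, or None (for example, an implementation of a standard triangle-triangle interval test); the sketch shows how the segments are connected into continuous polylines by spatial adjacency:

```python
import numpy as np

def intersection_polylines(VS, FS, V0, F0, tri_tri_segment, tol=1e-6):
    """Intersect every patch pair of the cross-section mesh (VS, FS) and the
    segmentation mesh (V0, F0), then chain segments whose endpoints coincide
    (within tol) into the continuous polylines C_0 (step 5)."""
    segments = []
    for fs in FS:
        for f0 in F0:
            seg = tri_tri_segment(VS[fs], V0[f0])  # two 3D endpoints or None
            if seg is not None:
                segments.append(seg)
    polylines = []
    while segments:
        poly = list(segments.pop())                # start a new polyline
        grew = True
        while grew:                                # greedily extend the tail
            grew = False
            for i, (p, q) in enumerate(segments):
                if np.linalg.norm(poly[-1] - p) < tol:
                    poly.append(q); segments.pop(i); grew = True; break
                if np.linalg.norm(poly[-1] - q) < tol:
                    poly.append(p); segments.pop(i); grew = True; break
        polylines.append(np.array(poly))
    return polylines
```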
(6) Lay out the sequentially connected quadrilaterals of the cross-section mesh M_S of step (4) on a two-dimensional plane, in the order of the stroke coordinate points of step (4), to obtain the planar cross-section mesh P_S. Surface-render P_S by GPU rasterization to obtain an image displaying the volume data gray scale on P_S. Place the intersection line C_0 of step (5) onto P_S with its barycentric coordinates on each triangle patch unchanged, obtaining the planar intersection line C_P. Display the planar cross-section mesh P_S and the planar intersection line C_P together in the same view, denoted the two-dimensional view.
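A sketch of the flattening of step (6), reusing the vertex layout of the extrusion sketch above: the x coordinate is accumulated stroke arc length and the y coordinate is depth along the ray. Because the triangulation is unchanged, barycentric coordinates on each patch carry over between M_S and P_S, which is what allows C_0 to be mapped to C_P unchanged:

```python
import numpy as np

def flatten_cross_section(entry_pts, exit_pts):
    """Lay the quadrilateral strip of M_S out on a 2D plane, in stroke
    order, giving the planar cross-section mesh P_S (step 6). The patch
    indices are the same as those returned by extrude_stroke."""
    seg_len = np.linalg.norm(np.diff(entry_pts, axis=0), axis=1)
    x = np.concatenate([[0.0], np.cumsum(seg_len)])   # stroke arc length
    depth = np.linalg.norm(exit_pts - entry_pts, axis=1)
    top = np.stack([x, np.zeros_like(x)], axis=1)     # flattened entry points
    bottom = np.stack([x, depth], axis=1)             # flattened exit points
    return np.vstack([top, bottom])                   # planar vertices of P_S
```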
(7) Compare the visual boundary of the volume data gray scale on the planar cross-section mesh in the two-dimensional view of step (6) with the position of the planar intersection line C_P. Where the intersection line deviates from the boundary of the segmentation target of step (1), move its points with the mouse onto that boundary, obtaining the intersection line C_N and the barycentric coordinates of its points.
(8) Convert the movement of the two-dimensional intersection line C_P of step (7) to the new intersection line C_N into position changes of the three-dimensional mesh vertices: from the barycentric coordinates of the points on C_N of step (7) and the vertex coordinates of the cross-section mesh M_S of step (4), obtain the three-dimensional coordinates C of the points on C_N.
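A sketch of the lift of step (8), assuming each edited point is stored as the index of the P_S triangle containing it plus its barycentric coordinates in that triangle:

```python
import numpy as np

def lift_to_3d(tri_ids, bary, VS, FS):
    """Map edited 2D intersection-line points back to 3D coordinates C
    (step 8). P_S and M_S share one triangulation, so the same barycentric
    combination of the corresponding M_S triangle's 3D corners gives C.

    tri_ids: (n,) triangle indices; bary: (n, 3) barycentric coordinates;
    VS, FS: vertices and patch indices of the cross-section mesh M_S."""
    corners = VS[FS[tri_ids]]                 # (n, 3, 3) triangle corners
    return np.einsum("ij,ijk->ik", np.asarray(bary), corners)
```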
(9) Establish a triangular mesh optimization model based on the volume data gradient, which propagates the two-dimensional interactive editing result of step (7) into three dimensions, so that the two-dimensional editing operation drives the deformation of the three-dimensional mesh M_0 and thereby completes the editing of the volume data segmentation result. The objective function E of the triangular mesh optimization model is:
E = ‖L(V′) − T(V′)δ_0‖² + ω‖MV′ − C‖² − κG(V′)
wherein V′ is the independent variable of the objective function, i.e., the optimized vertex coordinates of the triangular mesh model M_0, and V′ is obtained by minimizing the objective function E,
‖L(V′) − T(V′)δ_0‖² is the Laplacian shape constraint, where L is the Laplacian coordinate operator; for a vertex v_i with the d_i neighbors N(i), it takes the standard uniform form

L(v_i) = v_i − (1/d_i) Σ_{j∈N(i)} v_j

L(V′) is the Laplacian coordinate of the optimized triangular mesh model M_0, δ_0 is the Laplacian coordinate of the initial mesh M_0, and T(V′) is a local transformation of the vertices expressed as a linear combination of the vertex coordinates; T compensates for the sensitivity of Laplacian coordinates to rotation and scaling transformations.
ω‖MV′ − C‖² is the user-interaction constraint, where ω is a weight parameter whose value is adjusted empirically (0.5 in one embodiment of the invention) and C is the matrix of three-dimensional coordinates of the points on the intersection line C_N obtained in step (8); M is a parameter matrix: each point of C is expressed as a linear combination of the vertex coordinates of the triangular mesh model M_0, and writing these linear combinations in matrix form yields M;
−κG(V′) is the volume data gradient constraint, where κ is the weight parameter of this term, adjusted empirically (1.0 in one embodiment of the invention); G is the volume data gradient function and G(V′) is the volume data gradient magnitude at the vertices V′, evaluated numerically by finite differences;
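A sketch of the two ingredients of E defined above, under the uniform-Laplacian form given for the operator L; the sparse matrix realizes L, and np.gradient supplies the finite differences behind G (evaluating G at non-integer vertex positions would additionally require trilinear interpolation, omitted here):

```python
import numpy as np
import scipy.sparse as sp

def laplacian_matrix(n_verts, faces):
    """Uniform Laplacian coordinate operator: each vertex coordinate minus
    the average of its 1-ring neighbours, as a sparse matrix so that
    L @ V gives the Laplacian coordinates of a mesh V."""
    rows, cols = [], []
    for a, b, c in faces:
        rows += [a, a, b, b, c, c]
        cols += [b, c, a, c, a, b]
    A = sp.coo_matrix((np.ones(len(rows)), (rows, cols)),
                      shape=(n_verts, n_verts)).tocsr()
    A.data[:] = 1.0                          # collapse duplicate edge entries
    deg = np.asarray(A.sum(axis=1)).ravel()  # d_i, the neighbour counts
    return sp.eye(n_verts) - sp.diags(1.0 / deg) @ A

def gradient_magnitude(vol):
    """Finite-difference gradient magnitude of the volume, the G used in
    the -kappa * G term."""
    gx, gy, gz = np.gradient(vol.astype(np.float32))
    return np.sqrt(gx**2 + gy**2 + gz**2)
```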
(10) Rewrite the objective function E of step (9) as E′:
E′ = ‖L(V′) − T(V′)δ_0‖² + ω‖MV′ − C‖² − κG(P) + θ‖V′ − P‖²
wherein P is an auxiliary variable representing the vertex coordinates of the triangular mesh model M_0 during optimization, and θ is a relaxation variable, the weight coefficient of the ‖V′ − P‖² term, which is gradually increased during optimization;
the use of the auxiliary variable P in the above equation replaces V ' in the non-convex function G (V '), with the addition of | V ' -P |2The terms are constrained so that the auxiliary variable P is as close as possible to V' being replaced. When θ → + ∞, there is E' → E;
(11) Extracting the terms of E′ of step (10) that involve V′ yields optimization sub-problem S1: given the auxiliary variable P, solve for the vertex coordinates V′, with objective

min_{V′} ‖L(V′) − T(V′)δ_0‖² + ω‖MV′ − C‖² + θ‖V′ − P‖²
Sub-problem S1 is a linear system, which is solved by linear least squares to obtain and update V′.
(12) Extracting the terms of E′ of step (10) that involve P yields optimization sub-problem S2: given the vertex coordinates V′, solve for the auxiliary variable P, with objective

min_P −κG(P) + θ‖V′ − P‖²
Sub-problem S2 is a gradient optimization problem whose objective is independent for each vertex; it is solved by a neighborhood search at each vertex of the triangular mesh M_0: select candidate points within a region of a certain size around the vertex, evaluate the S2 objective at each candidate, and keep the vertex coordinates with the smallest objective value; traversing all vertices of M_0 obtains and updates P.
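A sketch of the per-vertex neighborhood search for S2, with illustrative choices not fixed by the text: random candidate offsets in a box around each vertex, and an assumed sampler grad_mag_at that evaluates the volume gradient magnitude G at arbitrary 3D points (for example by trilinear interpolation of the array computed above):

```python
import numpy as np

def solve_S2(V, grad_mag_at, kappa, theta, radius=1.0, n_cand=27):
    """For each vertex v' independently, minimize
    -kappa * G(p) + theta * ||v' - p||^2 over candidate points p
    in a neighbourhood of v' (step 12)."""
    offsets = radius * (np.random.rand(n_cand, 3) * 2.0 - 1.0)
    offsets[0] = 0.0                      # always keep the vertex itself
    P = V.copy()
    for i, v in enumerate(V):
        cand = v + offsets                # candidate points around vertex i
        cost = (-kappa * grad_mag_at(cand)
                + theta * np.sum((cand - v) ** 2, axis=1))
        P[i] = cand[np.argmin(cost)]      # keep the lowest-objective point
    return P
```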
(13) Iterate between the optimization sub-problem S1 of step (11) and the optimization sub-problem S2 of step (12). Set the maximum value of the relaxation variable θ to θ_max; at initialization, set θ to θ_0 and take V′ = V_0, where V_0 are the vertex coordinates of the triangular mesh M_0 of step (1); then the iteration starts:
(13-1) Let P = V′; traverse the vertices of the triangular mesh M_0, performing a neighborhood search at each, and solve sub-problem S2 to obtain and update P;
(13-2) with the P of step (13-1), solve sub-problem S1 by linear least squares to obtain and update V′;
increase the value of θ and repeat steps (13-1) and (13-2), testing the relaxation variable θ each time: if θ ≥ θ_max, stop iterating and proceed to step (14); if θ < θ_max, return to (13-1).
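A sketch of the outer loop of step (13), with the two sub-problem solvers passed in as functions: solve_S1(P, theta) is assumed to be the linear least-squares solve of step (11) and solve_S2(V, theta) the neighborhood search of step (12); the geometric growth of θ is an illustrative choice, since the text only requires that θ increase gradually up to θ_max:

```python
def edit_mesh(V0, solve_S1, solve_S2, theta0=1.0, theta_max=1e4, rate=2.0):
    """Alternate the two sub-problems while the relaxation weight theta
    grows, so that the relaxed objective E' approaches E (step 13)."""
    V, theta = V0.copy(), theta0
    while theta < theta_max:
        P = solve_S2(V, theta)     # (13-1): fix V', update the auxiliary P
        V = solve_S1(P, theta)     # (13-2): fix P, update the vertices V'
        theta *= rate              # gradually increase the relaxation weight
    return V                       # optimized vertex coordinates V'
```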
(14) Display the triangular mesh corresponding to the vertex coordinates V′ of step (13); this mesh is the result of one round of human-computer interactive editing of the volume data segmentation result.

Claims (1)

1. A method for carrying out human-computer interaction editing on a segmentation result of volume data is characterized by comprising the following steps:
(1) reading the volume data segmentation result, stored as binary data, from the computer, and performing iso-surface extraction on it with the marching cubes method to obtain a triangular mesh model M_0 of the segmentation result; M_0 represents the shape and volume characteristics of the segmentation target, and the target iso-value of the marching cubes extraction is 0.5; storing M_0 as vertex coordinates and triangle patch indices; surface-rendering M_0 by GPU rasterization and recording the volume data coordinates of every patch position in the triangular mesh, to obtain a surface-rendered two-dimensional image;
(2) reading the volume data corresponding to the segmentation result of step (1) from the computer and volume-rendering it by ray casting: according to the observer position, casting a ray from the spatial position corresponding to each screen pixel into the volume data along the viewing direction; testing each ray against the volume data coordinates of the patch positions recorded in step (1), and if the ray hits a triangular mesh patch of step (1), terminating it; otherwise intersecting the ray with the volume data to obtain a line segment, taking sampling points at equal intervals along that segment, reading the volume data gray values at the sampling points, and accumulating them along the ray direction to obtain the brightness accumulated as the ray traverses the volume, from which the color value of each screen pixel is derived, yielding the volume-rendered two-dimensional image;
(3) in the same view, blending the surface-rendered two-dimensional image of step (1) and the volume-rendered two-dimensional image of step (2) by transparency (alpha) blending to obtain a unified display image;
(4) recording the stroke line drawn with the mouse by the user in the unified display image of step (3) and storing it as a sequence of screen-pixel coordinate points; connecting consecutive coordinate points of the stroke into a polyline; using the ray casting method of step (2), extending rays from the screen pixels of all polyline points along the observer's viewing direction into the screen, obtaining a set of parallel rays; intersecting these rays with the volume data of step (2) to obtain parallel line segments, each pair of adjacent segments forming a quadrilateral; connecting the quadrilaterals in the order of the stroke's coordinate points to obtain an extruded surface, stored as a triangular mesh and denoted the cross-section mesh M_S; intersecting M_S with the volume data and, from the M_S vertex coordinates, obtaining the corresponding volume data coordinates and hence the volume data gray values, and displaying the volume data gray scale on M_S;
(5) taking one triangle patch from the cross-section mesh M_S of step (4) and one from the triangular mesh model M_0 of step (1), and performing a spatial intersection test on the two corresponding triangles to obtain the three-dimensional coordinates and barycentric coordinates of the two endpoints of the segment where the patches intersect; traversing all patch pairs of the two triangular meshes to obtain all intersection segments, and connecting them into continuous polylines according to their spatial adjacency, denoted the intersection line C_0 of the cross-section mesh M_S and the triangular mesh model M_0;
(6) laying out the sequentially connected quadrilaterals of the cross-section mesh M_S of step (4) on a two-dimensional plane, in the order of the stroke coordinate points of step (4), to obtain the planar cross-section mesh P_S; surface-rendering P_S by GPU rasterization to obtain an image displaying the volume data gray scale on P_S; placing the intersection line C_0 of step (5) onto P_S with its barycentric coordinates on each triangle patch unchanged, to obtain the planar intersection line C_P; displaying the planar cross-section mesh P_S and the planar intersection line C_P together in the same view, denoted the two-dimensional view;
(7) comparing the visual boundary of the volume data gray scale on the planar cross-section mesh in the two-dimensional view of step (6) with the position of the planar intersection line C_P; where the intersection line deviates from the boundary of the segmentation target of step (1), moving its points with the mouse onto that boundary, obtaining the intersection line C_N and the barycentric coordinates of its points;
(8) obtaining the three-dimensional coordinates C of the points on C_N from the barycentric coordinates of the points on the intersection line C_N of step (7) and the vertex coordinates of the cross-section mesh M_S of step (4);
(9) establishing a triangular mesh optimization model based on the volume data gradient, with the objective function E:
E = ‖L(V′) − T(V′)δ_0‖² + ω‖MV′ − C‖² − κG(V′)
wherein V′ is the independent variable of the objective function, i.e., the optimized vertex coordinates of the triangular mesh model M_0, and V′ is obtained by minimizing the objective function E,
‖L(V′) − T(V′)δ_0‖² is the Laplacian shape constraint, where L is the Laplacian coordinate operator; for a vertex v_i with the d_i neighbors N(i), it takes the standard uniform form

L(v_i) = v_i − (1/d_i) Σ_{j∈N(i)} v_j

L(V′) is the Laplacian coordinate of the optimized triangular mesh model M_0, δ_0 is the Laplacian coordinate of the initial mesh M_0, and T(V′) is a local transformation of the vertices expressed as a linear combination of the vertex coordinates;
ω‖MV′ − C‖² is the user-interaction constraint, where ω is a weight parameter and C is the matrix of three-dimensional coordinates of the points on the intersection line C_N obtained in step (8); M is a parameter matrix: each point of C is expressed as a linear combination of the vertex coordinates of the triangular mesh model M_0, and writing these linear combinations in matrix form yields M;
−κG(V′) is the volume data gradient constraint, where κ is the weight parameter of this term, G is the volume data gradient function, and G(V′) is the volume data gradient magnitude at the vertices V′; the gradient function G is evaluated numerically by finite differences;
(10) rewriting the objective function E of step (9) as E′:
E′ = ‖L(V′) − T(V′)δ_0‖² + ω‖MV′ − C‖² − κG(P) + θ‖V′ − P‖²
wherein P is an auxiliary variable representing the vertex coordinates of the triangular mesh model M_0 during optimization, and θ is a relaxation variable, the weight coefficient of the ‖V′ − P‖² term, which is gradually increased during optimization;
(11) extracting the terms of E′ of step (10) that involve V′ yields optimization sub-problem S1: given the auxiliary variable P, solve for the vertex coordinates V′, with objective

min_{V′} ‖L(V′) − T(V′)δ_0‖² + ω‖MV′ − C‖² + θ‖V′ − P‖²
sub-problem S1 is a linear system, which is solved by linear least squares to obtain and update V′;
(12) extracting the terms of E′ of step (10) that involve P yields optimization sub-problem S2: given the vertex coordinates V′, solve for the auxiliary variable P, with objective

min_P −κG(P) + θ‖V′ − P‖²
sub-problem S2 is a gradient optimization problem whose objective is independent for each vertex; it is solved by a neighborhood search at each vertex of the triangular mesh M_0, traversing all vertices of M_0 to obtain and update P;
(13) setting the maximum value of the relaxation variable θ to θ_max and, at initialization, setting θ to θ_0 and taking V′ = V_0, where V_0 are the vertex coordinates of the triangular mesh M_0 of step (1); then the iteration starts:
(13-1) let P = V′; traverse the vertices of the triangular mesh M_0, performing a neighborhood search at each, and solve sub-problem S2 to obtain and update P;
(13-2) with the P of step (13-1), solve sub-problem S1 by linear least squares to obtain and update V′;
increase the value of θ and repeat steps (13-1) and (13-2), testing the relaxation variable θ each time: if θ ≥ θ_max, stop iterating and proceed to step (14); if θ < θ_max, return to (13-1);
(14) displaying the triangular mesh corresponding to the vertex coordinates V′ of step (13); this mesh is the result of one round of human-computer interactive editing of the volume data segmentation result.
CN201810801542.7A, filed 2018-07-20 (priority 2018-07-20): Method for carrying out human-computer interaction editing on segmentation result of volume data. Granted as CN109147061B (en). Status: Active.

Priority Applications (1)

Application Number: CN201810801542.7A
Priority Date / Filing Date: 2018-07-20
Title: Method for carrying out human-computer interaction editing on segmentation result of volume data

Applications Claiming Priority (1)

Application Number: CN201810801542.7A
Priority Date / Filing Date: 2018-07-20
Title: Method for carrying out human-computer interaction editing on segmentation result of volume data

Publications (2)

CN109147061A (en), published 2019-01-04
CN109147061B (en), published 2022-04-01

Family

ID=64801203

Family Applications (1)

Application Number: CN201810801542.7A (Active), priority/filing date 2018-07-20
Title: Method for carrying out human-computer interaction editing on segmentation result of volume data

Country Status (1)

CN: CN109147061B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628095B (en) * 2021-08-04 2022-11-01 展讯通信(上海)有限公司 Portrait area grid point information storage method and related product

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7286127B2 (en) * 2005-06-22 2007-10-23 Microsoft Corporation Large mesh deformation using the volumetric graph Laplacian
US7868885B2 (en) * 2007-06-22 2011-01-11 Microsoft Corporation Direct manipulation of subdivision surfaces using a graphics processing unit
DE102009042326A1 (en) * 2009-09-21 2011-06-01 Siemens Aktiengesellschaft Interactively changing the appearance of an object represented by volume rendering
US10147185B2 (en) * 2014-09-11 2018-12-04 B.G. Negev Technologies And Applications Ltd., At Ben-Gurion University Interactive segmentation
US9713424B2 (en) * 2015-02-06 2017-07-25 Richard F. Spaide Volume analysis and display of information in optical coherence tomography angiography
CN104794758B (en) * 2015-04-17 2017-10-03 青岛海信医疗设备股份有限公司 A kind of method of cutting out of 3-D view
EP3314582B1 (en) * 2015-06-29 2020-08-05 Koninklijke Philips N.V. Interactive mesh editing
CN106373168A (en) * 2016-11-24 2017-02-01 北京三体高创科技有限公司 Medical image based segmentation and 3D reconstruction method and 3D printing system

Also Published As

Publication number Publication date
CN109147061A (en) 2019-01-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant