CN109147061B - Method for carrying out human-computer interaction editing on segmentation result of volume data - Google Patents


Info

Publication number
CN109147061B
CN109147061B (application CN201810801542.7A)
Authority
CN
China
Prior art keywords
volume data
triangular mesh
dimensional
triangular
line
Prior art date
Legal status
Active
Application number
CN201810801542.7A
Other languages
Chinese (zh)
Other versions
CN109147061A (en)
Inventor
陈莉
雍俊海
宋艺博
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN201810801542.7A
Publication of CN109147061A
Application granted
Publication of CN109147061B
Legal status: Active
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20: Finite element generation, e.g. wire-frame surface description, tesselation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to a method for human-computer interactive editing of volume data segmentation results, belonging to the technical field of digital medicine. The method comprises: generating a triangular mesh model from the segmentation result to be edited and surface-rendering it; volume-rendering the volume data; blending the volume rendering and surface rendering results into a unified view; recording the user's mouse interaction as a stroke line and, by ray casting, extending it into a surface whose intersection with the volume data yields a cross-section mesh displaying the volume data grayscale; intersecting the cross-section mesh with the segmentation-result mesh to obtain the intersection line; flattening the cross-section mesh and the intersection line and interactively editing the intersection line in the resulting two-dimensional view; and establishing a triangular mesh optimization model based on the volume data gradient, decomposing and iteratively optimizing the objective function according to the edited intersection line to obtain the optimized triangular mesh vertex coordinates, which constitute the result of the interactive edit. The method improves the segmentation accuracy and the interactive editing accuracy of volume data.

Description

Method for carrying out human-computer interaction editing on segmentation result of volume data
Technical Field
The invention relates to a method for carrying out human-computer interaction editing on a segmentation result of volume data, and belongs to the technical field of digital medical treatment.
Background
In digital medicine, a patient's volume data is acquired through diagnostic means such as CT and MRI, and the target region must be extracted by volume data segmentation before further analysis and diagnosis. Because medical volume data is complex, existing automatic segmentation techniques generalize poorly and their results usually contain errors, so the segmentation result needs further editing through human-computer interaction. However, interactive editing of three-dimensional volume data is difficult, and existing techniques cannot adequately address the high degree of freedom and poor interaction precision of three-dimensional interaction. Editing a volume data segmentation result efficiently therefore requires a well-designed interaction method. In the interactive segmentation approach adopted by software such as Seg3D, the user marks the segmentation target on two-dimensional slices of the volume data with a brush; completing a segmentation task requires marking a large number of slices, so interaction efficiency is low, while results obtained with the automatic segmentation tools such software provides may contain errors. Existing interactive segmentation methods based on mesh editing express the segmentation result as a triangular mesh model: the user either constructs the mesh interactively to perform the segmentation or edits the mesh shape to edit the result. These methods do not fully exploit the volume data when computing the mesh shape, depend heavily on user interaction, and may need additional interactions to reach a good segmentation result. Existing contour-editing segmentation methods adopted by software such as TurtleSeg select planar slices of the volume data on which the user marks contour lines of the segmentation target as constraints for mesh generation; surface reconstruction over several marked contours then yields the segmentation mesh. Such methods fall short in both accuracy and efficiency, since many contours must be marked to capture the details of the segmentation result accurately. A related method is described in Ijiri T, Yokota H. Contour-based interface for refining volume segmentation. Computer Graphics Forum, 2010, 29(7): 2153-.
Existing interactive volume segmentation methods based on two-dimensional slice interaction require the operator to mark a large number of slices to obtain a good result; interaction efficiency is low, and segmenting a single data set can take several hours. Mesh-based segmentation methods make little use of the volume data and depend heavily on user interaction; lacking the volume data as a constraint, they consider only the mesh shape, so segmentation precision is low. Surface reconstruction from two-dimensional contour lines has difficulty segmenting fine structures accurately and requires more user interactions.
Disclosure of Invention
The invention aims to provide a method for human-computer interactive editing of volume data segmentation results, overcoming the inability of the prior art to combine the advantages of three-dimensional and two-dimensional interaction and to make full use of the volume data, and thereby improving interaction efficiency and segmentation precision.
The invention provides a method for carrying out human-computer interaction editing on a segmentation result of volume data, which comprises the following steps:
(1) reading from a computer the segmentation result of the volume data, stored as binary data, and extracting an iso-surface from it using the marching cubes method to obtain a triangular mesh model M0 of the segmentation result; the triangular mesh model M0 represents the shape and volume features of the segmentation target, and the target iso-value extracted by marching cubes is 0.5; storing the triangular mesh model M0 as vertex coordinates and triangle patch indices; surface-rendering M0 by graphics processing unit rasterization and recording the volume data coordinates of the position of every patch in the triangular mesh, to obtain a surface-rendered two-dimensional image;
(2) reading from the computer the volume data corresponding to the segmentation result of step (1); according to the observer's position, volume-rendering the volume data by ray casting from the spatial position corresponding to each computer screen pixel, i.e. emitting a ray toward the volume data along the viewing direction to obtain a projected ray; testing each projected ray against the volume data coordinates of the patches recorded in step (1): if the ray hits a triangular mesh patch of step (1), the ray casting ends; if not, the projected ray is intersected with the volume data to obtain an intersection segment, equally spaced sample points are taken on the segment, the volume data gray values at the sample points are read and accumulated along the ray direction to obtain the brightness accumulated as the projected ray propagates through the volume data, and the color value of each screen pixel is derived from this brightness, yielding a volume-rendered two-dimensional image;
(3) in the same view, mixing the surface rendering two-dimensional image in the step (1) and the volume rendering two-dimensional image in the step (2) by adopting a transparency mixing method to obtain a unified display image;
(4) recording, in the unified display image of step (3), the stroke line drawn by the user with the mouse and storing it as a sequence of screen-pixel coordinate points; connecting consecutive coordinate points of the stroke line into a polyline; using the ray casting method of step (2), extending the screen pixels corresponding to all points on the polyline along the observer's viewing direction to obtain multiple parallel rays; intersecting these rays with the volume data of step (2) to obtain multiple parallel segments, each pair of adjacent segments forming a quadrilateral; connecting the quadrilaterals in the order of the stroke-line coordinate sequence to obtain an extended surface, stored as a triangular mesh and denoted the cross-section mesh MS; intersecting MS with the volume data, obtaining the corresponding volume data coordinates from the mesh vertex coordinates of MS and hence the gray values of the volume data, and displaying the gray values of the volume data on the cross-section mesh MS;
(5) taking one triangular patch each from the cross-section mesh MS obtained in step (4) and from the triangular mesh model M0 of step (1), and performing a spatial intersection test on the two corresponding triangles to obtain the three-dimensional coordinates and barycentric coordinates of the two endpoints of the segment along which the two triangles intersect in space; traversing all pairs of triangular patches from the two meshes to obtain all intersection segments of the triangular patches of the two meshes, and connecting all intersection segments into continuous polylines according to their spatial adjacency, denoted the intersection line C0 of the cross-section mesh MS and the triangular mesh model M0;
(6) arranging the sequentially connected quadrilaterals of the cross-section mesh MS of step (4) on a two-dimensional plane in the order of the stroke-line coordinate sequence of step (4), obtaining the planar cross-section mesh PS; surface-rendering the two-dimensional cross-section mesh PS by graphics processing unit rasterization to obtain the image of the volume data gray values displayed on PS; keeping the barycentric coordinates of the intersection line C0 of step (5) on the triangular patches of PS unchanged, obtaining the intersection line CP on the two-dimensional plane; displaying the two-dimensional cross-section mesh PS and the two-dimensional intersection line CP together in one view, denoted the two-dimensional view;
(7) comparing the visual boundary of the volume data gray values on the two-dimensional cross-section mesh of the two-dimensional view of step (6) with the position of the two-dimensional intersection line CP; when the intersection line deviates from the boundary of the segmentation target of step (1), moving the points on the intersection line with the mouse onto the boundary of the segmentation target, obtaining the intersection line CN and the barycentric coordinates of the points on CN;
(8) from the barycentric coordinates of the points on the intersection line CN of step (7) and the mesh vertex coordinates of the cross-section mesh MS of step (4), obtaining the three-dimensional coordinates C of the points on CN;
(9) establishing a triangular mesh optimization model based on volume data gradient, wherein an objective function E of the triangular mesh optimization model is as follows:
E = ||L(V′) - T(V′)δ0||² + ω||MV′ - C||² - κG(V′)
where V′ is the variable of the objective function, i.e. the vertex coordinates of the optimized triangular mesh model M0; V′ is obtained by minimizing the objective function E;
||L(V′) - T(V′)δ0||² is the Laplacian shape constraint, where L is the Laplacian coordinate operator, which for a vertex v_i with 1-ring neighborhood N(i) takes the standard uniform form
L(v_i) = v_i - (1/|N(i)|) Σ_{j∈N(i)} v_j;
L(V′) is the Laplacian coordinates of the optimized triangular mesh model M0, δ0 is the Laplacian coordinates of the initial mesh M0, and T(V′) is the local vertex transformation expressed as a linear combination of vertex coordinates;
ω||MV′ - C||² is the user-interaction constraint, with ω a weight parameter and C the three-dimensional coordinates of the points on the intersection line CN obtained in step (8); M is a parameter matrix: expressing the three-dimensional coordinates C of the points on CN as linear combinations of the vertex coordinates of the triangular mesh model M0 and writing this linear combination in matrix form yields the parameter matrix M;
-κG(V′) is the volume-data gradient constraint, with κ the weight parameter of the term and G the volume data gradient function; G(V′) denotes the magnitude of the volume data gradient at vertex V′, and G is evaluated numerically by finite differences;
(10) rewriting the optimization function E of the above step (9) to E':
E′ = ||L(V′) - T(V′)δ0||² + ω||MV′ - C||² - κG(P) + θ||V′ - P||²
where P is an auxiliary variable representing the vertex coordinates of the triangular mesh model M0 during optimization, and θ is a relaxation variable, the weight coefficient of ||V′ - P||², which is gradually increased during optimization;
(11) extracting the terms of E′ of step (10) that depend on V′ yields optimization sub-problem S1: given the auxiliary variable P, solve for the vertex coordinates V′ with optimization function
V′ = argmin over V′ of ||L(V′) - T(V′)δ0||² + ω||MV′ - C||² + θ||V′ - P||²;
sub-problem S1 is a linear system, solved by linear least squares to obtain and update V′;
(12) extracting the terms of E′ of step (10) that depend on P yields optimization sub-problem S2: given the vertex coordinates V′, solve for the auxiliary variable P with optimization function
P = argmin over P of -κG(P) + θ||V′ - P||²;
sub-problem S2 is a gradient optimization problem whose objective is independent for each vertex; for each vertex of the triangular mesh M0 it is solved by neighborhood search, traversing the vertices of M0 to obtain and update P;
(13) setting the maximum value of the relaxation variable θ to θmax; at initialization, setting the initial value of θ to θ0 and taking V′ = V0, where V0 is the vertex coordinates of the triangular mesh M0 of step (1); then iterating:
(13-1) setting P = V′, traversing the vertices of the triangular mesh M0 with a neighborhood search, and solving sub-problem S2 to obtain and update P;
(13-2) using the P of step (13-1), solving sub-problem S1 by linear least squares to obtain and update V′;
increasing θ and repeating steps (13-1) and (13-2); testing the relaxation variable θ: if θ ≥ θmax, stopping the iteration and proceeding to step (14); if θ < θmax, returning to (13-1);
(14) displaying the triangular mesh corresponding to the vertex coordinates V′ of step (13); this triangular mesh is the result of one human-computer interactive edit of the volume data segmentation result.
The method provided by the invention for human-computer interactive editing of volume data segmentation results has the following advantages:
The method combines volume rendering and surface rendering in a unified interactive display, making the segmentation target easy to identify. Solving for the intersection lines of the segmentation result from freehand strokes adapts to segmentation targets of different shapes and helps confirm problems in the edited segmentation result more intuitively. Flattening the three-dimensional slice into a two-dimensional slice allows more precise interaction and improves interactive editing accuracy. The mesh optimization algorithm based on the volume data gradient propagates the two-dimensional editing result to three dimensions, reducing the number of user interactions while drawing the segmentation mesh boundary toward the volume data boundary and thus improving segmentation accuracy.
Detailed Description
The invention provides a method for carrying out human-computer interaction editing on a segmentation result of volume data, which comprises the following steps:
(1) Reading from a computer the segmentation result of the volume data, stored as binary data, and extracting an iso-surface from it using the marching cubes method to obtain the triangular mesh model M0 of the segmentation result; the triangular mesh model M0 represents the shape and volume features of the segmentation target, and the target iso-value extracted by marching cubes is 0.5. In this process, the input segmentation result is stored as binary data: each position holds 0 or 1 indicating whether it belongs to the segmentation target, 0 meaning it does not and 1 meaning it does, so the three-dimensional object formed by all positions holding 1 is the segmentation result the data represents; the target iso-value for marching cubes is therefore set to 0.5. The triangular mesh model M0 is stored as vertex coordinates and triangle patch indices. M0 is surface-rendered by graphics processing unit (GPU) rasterization; during surface rendering, the GPU depth buffer performs occlusion culling, yielding a correct surface rendering, and the volume data coordinates of the position of every patch in the triangular mesh are recorded. This produces a surface-rendered two-dimensional image that can be displayed on screen.
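As an illustration of step (1), the following minimal sketch extracts the 0.5 iso-surface from a binary segmentation volume; it assumes scikit-image is available, and the patent's own marching-cubes implementation may of course differ in detail.

    import numpy as np
    from skimage import measure

    def mesh_from_binary_segmentation(seg):
        # seg holds 0 (background) or 1 (segmentation target) per voxel,
        # so the iso-value 0.5 separates target from background.
        verts, faces, normals, values = measure.marching_cubes(
            seg.astype(np.float32), level=0.5)
        # M0 is stored as vertex coordinates plus triangle patch indices.
        return verts, faces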
(2) Reading from the computer the volume data corresponding to the segmentation result of step (1); according to the observer's position, volume-rendering the volume data by ray casting from the spatial position corresponding to each computer screen pixel, i.e. emitting a ray toward the volume data along the viewing direction to obtain a projected ray; testing each projected ray against the volume data coordinates of the patches recorded in step (1): if the ray hits a triangular mesh patch of step (1), the ray casting ends; if not, the projected ray is intersected with the volume data to obtain an intersection segment, equally spaced sample points are taken on the segment, the volume data gray values at the sample points are read and accumulated along the ray direction to obtain the brightness accumulated as the projected ray propagates through the volume data, and the color value of each screen pixel is derived from this brightness, yielding a volume-rendered two-dimensional image. The two-dimensional image may be displayed on a screen.
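A simplified CPU sketch of the per-ray accumulation of step (2) follows; real implementations run per pixel on the GPU, and the ray-patch occlusion test and transfer function are omitted here. The ray origin, direction and the parameter range [t_near, t_far] of its intersection segment with the volume are assumed precomputed.

    import numpy as np

    def cast_ray(vol, origin, direction, t_near, t_far, n_samples=256):
        # Equally spaced sample points on the intersection segment.
        ts = np.linspace(t_near, t_far, n_samples)
        pts = origin[None, :] + ts[:, None] * direction[None, :]
        idx = np.clip(np.round(pts).astype(int), 0, np.array(vol.shape) - 1)
        grays = vol[idx[:, 0], idx[:, 1], idx[:, 2]]  # gray value per sample
        # Accumulate gray values along the ray to get the brightness.
        return grays.sum() * (ts[1] - ts[0])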
(3) In the same view, mixing the surface rendering two-dimensional image in the step (1) and the volume rendering two-dimensional image in the step (2) by adopting a transparency mixing method to obtain a unified display image;
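Step (3) is plain transparency (alpha) blending; a one-line sketch, assuming both rendered images are float arrays of identical shape with values in [0, 1] and alpha is the user-chosen surface opacity:

    def blend(surface_img, volume_img, alpha=0.5):
        # Unified display image = weighted mix of surface and volume rendering.
        return alpha * surface_img + (1.0 - alpha) * volume_img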
(4) Recording, in the unified display image of step (3), the stroke line drawn by the user with the mouse and storing it as a sequence of screen-pixel coordinate points; consecutive coordinate points of the stroke line are connected into a polyline, which marks, based on observation of the unified display image, the part of the segmentation result that needs editing, and the cross-section of the three-dimensional volume data is cut according to the shape of this part. Using the ray casting method of step (2), the screen pixels corresponding to all points on the polyline are extended into the screen along the observer's viewing direction to obtain multiple parallel rays; these rays are intersected with the volume data of step (2) to obtain multiple parallel segments, each pair of adjacent segments forming a quadrilateral; the quadrilaterals are connected in the order of the stroke-line coordinate sequence to obtain an extended surface, stored as a triangular mesh and denoted the cross-section mesh MS. MS is intersected with the volume data; the corresponding volume data coordinates are obtained from the mesh vertex coordinates of MS, hence the gray values of the volume data, which are displayed on the cross-section mesh MS. Observing the cross-section mesh reveals the gray-value distribution of the volume data, from which the segmentation target corresponding to the segmentation result can be distinguished.
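A sketch of assembling the cross-section mesh MS in step (4): near[i] and far[i] are assumed to be the 3-D entry and exit points of the ray cast through the i-th stroke pixel, so each pair of adjacent parallel segments bounds a quadrilateral that is split into two triangles.

    import numpy as np

    def section_mesh(near, far):
        n = len(near)
        verts = np.empty((2 * n, 3))
        verts[0::2] = near            # even vertices: ray entry points
        verts[1::2] = far             # odd vertices: ray exit points
        faces = []
        for i in range(n - 1):
            a, b, c, d = 2 * i, 2 * i + 1, 2 * i + 2, 2 * i + 3
            faces.append((a, b, c))   # first triangle of the quad
            faces.append((b, d, c))   # second triangle of the quad
        return verts, np.asarray(faces)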
(5) One triangular patch is taken from the cross-section mesh MS obtained in step (4) and one from the triangular mesh model M0 of step (1); a spatial intersection test on the two corresponding triangles gives the three-dimensional coordinates and barycentric coordinates of the two endpoints of the segment along which the two triangles intersect in space; all pairs of triangular patches from the two meshes are traversed to obtain all intersection segments, which are connected into continuous polylines according to their spatial adjacency and denoted the intersection line C0 of the cross-section mesh MS and the triangular mesh model M0.
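The traversal of step (5) can be organized as below. The pairwise test itself is delegated to a hypothetical helper tri_tri_segment (for example Möller's 1997 triangle-triangle test), assumed to return the two segment endpoints with their barycentric coordinates, or None; in practice the brute-force double loop is usually pruned with a bounding-volume hierarchy.

    def intersection_segments(verts_s, faces_s, verts_0, faces_0, tri_tri_segment):
        segments = []
        for fs in faces_s:                  # patches of the cross-section mesh MS
            t1 = verts_s[fs]
            for f0 in faces_0:              # patches of the segmentation mesh M0
                t2 = verts_0[f0]
                seg = tri_tri_segment(t1, t2)
                if seg is not None:
                    segments.append(seg)    # endpoint coords + barycentric coords
        # The segments are afterwards chained by spatial adjacency into C0.
        return segments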
(6) The sequentially connected quadrilaterals of the cross-section mesh MS of step (4) are arranged on a two-dimensional plane in the order of the stroke-line coordinate sequence of step (4), obtaining the planar cross-section mesh PS; the two-dimensional cross-section mesh PS is surface-rendered by graphics processing unit rasterization to obtain the image of the volume data gray values displayed on PS; keeping the barycentric coordinates of the intersection line C0 of step (5) on the triangular patches of PS unchanged yields the intersection line CP on the two-dimensional plane; the two-dimensional cross-section mesh PS and the two-dimensional intersection line CP are displayed together in one view, denoted the two-dimensional view.
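A sketch of the flattening of step (6), matching the vertex layout of the section_mesh sketch above: each stroke point becomes a column at x = cumulative stroke arc length, and each ray becomes a vertical edge of its 3-D segment length, so the planar mesh PS reuses the face indices of MS.

    import numpy as np

    def flatten_section(near, far, stroke_px):
        dx = np.linalg.norm(np.diff(stroke_px, axis=0), axis=1)
        xs = np.concatenate([[0.0], np.cumsum(dx)])   # arc length along the stroke
        depth = np.linalg.norm(far - near, axis=1)    # length of each ray segment
        verts2d = np.empty((2 * len(xs), 2))
        verts2d[0::2] = np.stack([xs, np.zeros_like(xs)], axis=1)  # near edge, y = 0
        verts2d[1::2] = np.stack([xs, depth], axis=1)              # far edge
        return verts2d    # PS: same triangle indices as MS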
(7) The visual boundary of the volume data gray values on the two-dimensional cross-section mesh of the two-dimensional view of step (6) is compared with the position of the two-dimensional intersection line CP; when the intersection line deviates from the boundary of the segmentation target of step (1), the points on the intersection line are moved with the mouse onto the boundary of the segmentation target, obtaining the intersection line CN and the barycentric coordinates of the points on CN.
(8) The movement from the two-dimensional intersection line CP of step (7) to the new intersection line CN is converted into a positional transformation of the three-dimensional mesh vertices: from the barycentric coordinates of the points on the intersection line CN of step (7) and the mesh vertex coordinates of the cross-section mesh MS of step (4), the three-dimensional coordinates C of the points on CN are obtained.
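Step (8) is the standard barycentric evaluation; a sketch, where b = (b0, b1, b2) are the stored barycentric coordinates and tri_verts the three vertices of the containing triangle of MS:

    def barycentric_to_3d(b, tri_verts):
        b0, b1, b2 = b
        v0, v1, v2 = tri_verts
        # Convex combination of the triangle vertices gives the 3-D point on C.
        return b0 * v0 + b1 * v1 + b2 * v2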
(9) A triangular mesh optimization model based on the volume data gradient is established to propagate the two-dimensional interactive edit of step (7) back to three dimensions, so that the two-dimensional editing operation drives the deformation of the three-dimensional mesh M0 and thereby completes the edit of the volume data segmentation result. The objective function E of the triangular mesh optimization model is:
E = ||L(V′) - T(V′)δ0||² + ω||MV′ - C||² - κG(V′)
where V′ is the variable of the objective function, i.e. the vertex coordinates of the optimized triangular mesh model M0; V′ is obtained by minimizing the objective function E;
||L(V′) - T(V′)δ0||² is the Laplacian shape constraint, where L is the Laplacian coordinate operator, which for a vertex v_i with 1-ring neighborhood N(i) takes the standard uniform form
L(v_i) = v_i - (1/|N(i)|) Σ_{j∈N(i)} v_j;
L(V′) is the Laplacian coordinates of the optimized triangular mesh model M0, δ0 is the Laplacian coordinates of the initial mesh M0, and T(V′) is the local vertex transformation expressed as a linear combination of vertex coordinates, introduced to address the sensitivity of Laplacian coordinates to rotation and scaling transformations.
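Assuming the uniform weighting written above, the Laplacian coordinates δ0 of the initial mesh M0 can be computed once as in this sketch:

    import numpy as np

    def laplacian_coordinates(verts, faces):
        n = len(verts)
        nbrs = [set() for _ in range(n)]
        for a, b, c in faces:        # collect 1-ring neighborhoods from the faces
            nbrs[a].update((b, c)); nbrs[b].update((a, c)); nbrs[c].update((a, b))
        delta = np.empty_like(verts)
        for i in range(n):
            ring = np.fromiter(nbrs[i], dtype=int)
            delta[i] = verts[i] - verts[ring].mean(axis=0)
        return delta                 # delta0 when evaluated on the initial mesh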
ω||MV′ - C||² is the user-interaction constraint, with ω a weight parameter whose value is tuned empirically (0.5 in one embodiment of the invention) and C the three-dimensional coordinates of the points on the intersection line CN obtained in step (8); M is a parameter matrix: expressing the three-dimensional coordinates C of the points on CN as linear combinations of the vertex coordinates of the triangular mesh model M0 and writing this linear combination in matrix form yields the parameter matrix M;
-κG(V′) is the volume-data gradient constraint, with κ the weight parameter of the term, tuned empirically (1.0 in one embodiment of the invention); G is the volume data gradient function, G(V′) denotes the magnitude of the volume data gradient at vertex V′, and G is evaluated numerically by finite differences;
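The patent only specifies difference-based numerical evaluation of G; a central-difference sketch at nearest-voxel resolution (trilinear sampling omitted for brevity, and vol assumed to be a float array so the differences do not overflow):

    import numpy as np

    def gradient_magnitude(vol, p):
        x, y, z = np.clip(np.round(p).astype(int), 1, np.array(vol.shape) - 2)
        gx = 0.5 * (vol[x + 1, y, z] - vol[x - 1, y, z])
        gy = 0.5 * (vol[x, y + 1, z] - vol[x, y - 1, z])
        gz = 0.5 * (vol[x, y, z + 1] - vol[x, y, z - 1])
        return float(np.sqrt(gx * gx + gy * gy + gz * gz))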
(10) rewriting the optimization function E of the above step (9) to E':
E′ = ||L(V′) - T(V′)δ0||² + ω||MV′ - C||² - κG(P) + θ||V′ - P||²
where P is an auxiliary variable representing the vertex coordinates of the triangular mesh model M0 during optimization, and θ is a relaxation variable, the weight coefficient of ||V′ - P||², which is gradually increased during optimization;
the use of the auxiliary variable P in the above equation replaces V ' in the non-convex function G (V '), with the addition of | V ' -P |2The terms are constrained so that the auxiliary variable P is as close as possible to V' being replaced. When θ → + ∞, there is E' → E;
(11) Extracting the terms of E′ of step (10) that depend on V′ yields optimization sub-problem S1: given the auxiliary variable P, solve for the vertex coordinates V′ with optimization function
V′ = argmin over V′ of ||L(V′) - T(V′)δ0||² + ω||MV′ - C||² + θ||V′ - P||²;
sub-problem S1 is a linear system, solved by linear least squares to obtain and update V′;
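Per coordinate axis, sub-problem S1 can be stacked into one sparse least-squares system, as in this sketch; A_L is assumed to be the assembled linear operator of the Laplacian shape term (L together with the linearized T(·)δ0 part), M_mat the interaction matrix M, and c and p the corresponding columns of C and P.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def solve_S1_axis(A_L, M_mat, c, p, omega, theta):
        n = A_L.shape[1]
        A = sp.vstack([A_L,                                # shape term rows
                       np.sqrt(omega) * M_mat,             # interaction constraint
                       np.sqrt(theta) * sp.identity(n)]).tocsr()
        b = np.concatenate([np.zeros(A_L.shape[0]),
                            np.sqrt(omega) * c,
                            np.sqrt(theta) * p])
        return spla.lsqr(A, b)[0]    # one coordinate column of the updated V'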
(12) Extracting the terms of E′ of step (10) that depend on P yields optimization sub-problem S2: given the vertex coordinates V′, solve for the auxiliary variable P with optimization function
P = argmin over P of -κG(P) + θ||V′ - P||²;
sub-problem S2 is a gradient optimization problem whose objective is independent for each vertex; for each vertex of the triangular mesh M0 it is solved by neighborhood search, traversing the vertices of M0 to obtain and update P: a number of candidate points are selected within a region of a certain size around each vertex, the objective of sub-problem S2 is evaluated at each candidate, and the candidate coordinates minimizing the objective are kept.
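For one vertex, the neighborhood search of S2 can look like the sketch below, reusing the gradient_magnitude sketch above; the integer search radius is an assumed tuning parameter, not fixed by the patent.

    import itertools
    import numpy as np

    def solve_S2_vertex(vol, v, kappa, theta, radius=2):
        best_p = v
        best_e = -kappa * gradient_magnitude(vol, v)   # candidate p = v itself
        for d in itertools.product(range(-radius, radius + 1), repeat=3):
            p = v + np.asarray(d, dtype=float)
            e = -kappa * gradient_magnitude(vol, p) + theta * np.sum((v - p) ** 2)
            if e < best_e:
                best_p, best_e = p, e
        return best_p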
(13) Sub-problem S1 of step (11) and sub-problem S2 of step (12) are solved alternately in an iteration.
Set the maximum value of the relaxation variable θ to θmax; at initialization, set the initial value of θ to θ0 and take V′ = V0, where V0 is the vertex coordinates of the triangular mesh M0 of step (1); then iterate:
(13-1) setting P = V′, traversing the vertices of the triangular mesh M0 with a neighborhood search, and solving sub-problem S2 to obtain and update P;
(13-2) using the P of step (13-1), solving sub-problem S1 by linear least squares to obtain and update V′;
increasing θ and repeating steps (13-1) and (13-2); testing the relaxation variable θ: if θ ≥ θmax, stopping the iteration and proceeding to step (14); if θ < θmax, returning to (13-1);
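The alternating loop of step (13), in outline; the initial value θ0, the cap θmax and the geometric growth factor are assumed tuning choices, since the patent fixes no particular schedule.

    def optimize(V0, solve_S2_all, solve_S1_all, theta0=1.0, theta_max=1e4, grow=2.0):
        V, theta = V0.copy(), theta0
        while theta < theta_max:
            P = solve_S2_all(V, theta)   # (13-1): per-vertex neighborhood search
            V = solve_S1_all(P, theta)   # (13-2): global linear least squares
            theta *= grow                # gradually increase the relaxation weight
        return V                         # optimized vertex coordinates V'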
(14) Display the triangular mesh corresponding to the vertex coordinates V′ of step (13); this triangular mesh is the result of one human-computer interactive edit of the volume data segmentation result.

Claims (1)

1. A method for human-computer interactive editing of a volume data segmentation result, characterized in that the method comprises the following steps:
(1) reading from a computer the segmentation result of the volume data, stored as binary data; extracting an iso-surface from the segmentation result using the marching cubes method to obtain the triangular mesh model M0 of the segmentation result, the triangular mesh model M0 representing the shape and volume features of the segmentation target, the target iso-value extracted by marching cubes being 0.5; storing the triangular mesh model M0 as vertex coordinates and triangle patch indices; surface-rendering M0 by graphics processing unit rasterization and recording the volume data coordinates of the position of every patch in the triangular mesh, to obtain a surface-rendered two-dimensional image;
(2) reading from the computer the volume data corresponding to the segmentation result of step (1); according to the observer's position, volume-rendering the volume data by ray casting from the spatial position corresponding to each computer screen pixel, i.e. emitting a ray toward the volume data along the viewing direction to obtain a projected ray; testing each projected ray against the volume data coordinates of the patches recorded in step (1): if the ray hits a triangular mesh patch of step (1), the ray casting ends; if not, the projected ray is intersected with the volume data to obtain an intersection segment, equally spaced sample points are taken on the segment, the volume data gray values at the sample points are read and accumulated along the ray direction to obtain the brightness accumulated as the projected ray propagates through the volume data, and the color value of each screen pixel is derived from this brightness, yielding a volume-rendered two-dimensional image;
(3) in the same view, blending the surface-rendered two-dimensional image of step (1) and the volume-rendered two-dimensional image of step (2) by transparency blending to obtain a unified display image;
(4) recording, in the unified display image of step (3), the stroke line drawn by the user with the mouse and storing it as a sequence of screen-pixel coordinate points; connecting consecutive coordinate points of the stroke line into a polyline; using the ray casting method of step (2), extending the screen pixels corresponding to all points on the polyline into the screen along the observer's viewing direction to obtain multiple parallel rays; intersecting these rays with the volume data of step (2) to obtain multiple parallel segments, each pair of adjacent segments forming a quadrilateral; connecting the quadrilaterals in the order of the stroke-line coordinate sequence to obtain an extended surface, stored as a triangular mesh and denoted the cross-section mesh MS; intersecting MS with the volume data, obtaining the corresponding volume data coordinates from the mesh vertex coordinates of MS and hence the gray values of the volume data, and displaying the gray values of the volume data on the cross-section mesh MS;
(5) taking one triangular patch each from the cross-section mesh MS obtained in step (4) and from the triangular mesh model M0 of step (1); performing a spatial intersection test on the two corresponding triangles to obtain the three-dimensional coordinates and barycentric coordinates of the two endpoints of the segment along which the two triangles intersect in space; traversing all pairs of triangular patches from the two meshes to obtain all intersection segments of the triangular patches of the two meshes, and connecting all intersection segments into continuous polylines according to their spatial adjacency, denoted the intersection line C0 of the cross-section mesh MS and the triangular mesh model M0;
(6) arranging the sequentially connected quadrilaterals of the cross-section mesh MS of step (4) on a two-dimensional plane in the order of the stroke-line coordinate sequence of step (4) to obtain the planar cross-section mesh PS; surface-rendering the two-dimensional cross-section mesh PS by graphics processing unit rasterization to obtain the image of the volume data gray values displayed on PS; keeping the barycentric coordinates of the intersection line C0 of step (5) on the triangular patches of PS unchanged, obtaining the intersection line CP on the two-dimensional plane; displaying the two-dimensional cross-section mesh PS and the two-dimensional intersection line CP together in one view, denoted the two-dimensional view;
(7) comparing the visual boundary of the volume data gray values on the two-dimensional cross-section mesh of the two-dimensional view of step (6) with the position of the two-dimensional intersection line CP; when the intersection line deviates from the boundary of the segmentation target of step (1), moving the points on the intersection line with the mouse onto the boundary of the segmentation target, obtaining the intersection line CN and the barycentric coordinates of the points on CN;
(8) from the barycentric coordinates of the points on the intersection line CN of step (7) and the mesh vertex coordinates of the cross-section mesh MS of step (4), obtaining the three-dimensional coordinates C of the points on CN;
(9) establishing a triangular mesh optimization model based on the volume data gradient, with objective function E:
E = ||L(V′) - T(V′)δ0||² + ω||MV′ - C||² - κG(V′)
where V′ is the variable of the objective function, i.e. the vertex coordinates of the optimized triangular mesh model M0, obtained by minimizing the objective function E;
||L(V′) - T(V′)δ0||² is the Laplacian shape constraint, where L is the Laplacian coordinate operator, L(v_i) = v_i - (1/|N(i)|) Σ_{j∈N(i)} v_j for a vertex v_i with 1-ring neighborhood N(i); L(V′) is the Laplacian coordinates of the optimized triangular mesh model M0, δ0 is the Laplacian coordinates of the initial mesh M0, and T(V′) is the local vertex transformation expressed as a linear combination of vertex coordinates;
ω||MV′ - C||² is the user-interaction constraint, with ω a weight parameter and C the three-dimensional coordinates of the points on the intersection line CN obtained in step (8); M is a parameter matrix: expressing the three-dimensional coordinates C of the points on CN as linear combinations of the vertex coordinates of the triangular mesh model M0 and writing this linear combination in matrix form yields the parameter matrix M;
-κG(V′) is the volume-data gradient constraint, with κ the weight parameter of the term and G the volume data gradient function; G(V′) denotes the magnitude of the volume data gradient at vertex V′, and G is evaluated numerically by finite differences;
(10) rewriting the optimization function E of step (9) as E′:
E′ = ||L(V′) - T(V′)δ0||² + ω||MV′ - C||² - κG(P) + θ||V′ - P||²
where P is an auxiliary variable representing the vertex coordinates of the triangular mesh model M0 during optimization, and θ is a relaxation variable, the weight coefficient of ||V′ - P||², gradually increased during optimization;
(11) extracting the terms of E′ of step (10) that depend on V′ to obtain optimization sub-problem S1: given the auxiliary variable P, solve for the vertex coordinates V′ with optimization function
V′ = argmin over V′ of ||L(V′) - T(V′)δ0||² + ω||MV′ - C||² + θ||V′ - P||²;
sub-problem S1 is a linear system, solved by linear least squares to obtain and update V′;
(12) extracting the terms of E′ of step (10) that depend on P to obtain optimization sub-problem S2: given the vertex coordinates V′, solve for the auxiliary variable P with optimization function
P = argmin over P of -κG(P) + θ||V′ - P||²;
sub-problem S2 is a gradient optimization problem whose objective is independent for each vertex; for each vertex of the triangular mesh M0 it is solved by neighborhood search, traversing the vertices of M0 to obtain and update P;
(13) setting the maximum value of the relaxation variable θ to θmax; at initialization, setting the initial value of θ to θ0 and taking V′ = V0, where V0 is the vertex coordinates of the triangular mesh M0 of step (1); then iterating:
(13-1) setting P = V′, traversing the vertices of the triangular mesh M0 with a neighborhood search, and solving sub-problem S2 to obtain and update P;
(13-2) using the P of step (13-1), solving sub-problem S1 by linear least squares to obtain and update V′;
increasing θ and repeating steps (13-1) and (13-2); testing the relaxation variable θ: if θ ≥ θmax, stopping the iteration and proceeding to step (14); if θ < θmax, returning to (13-1);
(14) displaying the triangular mesh corresponding to the vertex coordinates V′ of step (13), this triangular mesh being the result of one human-computer interactive edit of the volume data segmentation result.
CN201810801542.7A 2018-07-20 2018-07-20 Method for carrying out human-computer interaction editing on segmentation result of volume data Active CN109147061B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810801542.7A CN109147061B (en) 2018-07-20 2018-07-20 Method for carrying out human-computer interaction editing on segmentation result of volume data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810801542.7A CN109147061B (en) 2018-07-20 2018-07-20 Method for carrying out human-computer interaction editing on segmentation result of volume data

Publications (2)

Publication Number Publication Date
CN109147061A CN109147061A (en) 2019-01-04
CN109147061B (en) 2022-04-01

Family

Family ID: 64801203

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810801542.7A Active CN109147061B (en) 2018-07-20 2018-07-20 Method for carrying out human-computer interaction editing on segmentation result of volume data

Country Status (1)

Country Link
CN (1) CN109147061B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628095B (en) * 2021-08-04 2022-11-01 展讯通信(上海)有限公司 Portrait area grid point information storage method and related product
CN114882163A (en) * 2022-06-10 2022-08-09 上海联影医疗科技股份有限公司 Volume rendering method, system, apparatus and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7286127B2 (en) * 2005-06-22 2007-10-23 Microsoft Corporation Large mesh deformation using the volumetric graph Laplacian
US7868885B2 (en) * 2007-06-22 2011-01-11 Microsoft Corporation Direct manipulation of subdivision surfaces using a graphics processing unit
DE102009042326A1 (en) * 2009-09-21 2011-06-01 Siemens Aktiengesellschaft Interactively changing the appearance of an object represented by volume rendering
WO2016038604A1 (en) * 2014-09-11 2016-03-17 B. G. Negev Technologies And Applications Ltd. (Ben-Gurion University) Interactive segmentation
US9713424B2 (en) * 2015-02-06 2017-07-25 Richard F. Spaide Volume analysis and display of information in optical coherence tomography angiography
CN104794758B (en) * 2015-04-17 2017-10-03 青岛海信医疗设备股份有限公司 A kind of method of cutting out of 3-D view
US10282917B2 (en) * 2015-06-29 2019-05-07 Koninklijke Philips N.V. Interactive mesh editing
CN106373168A (en) * 2016-11-24 2017-02-01 北京三体高创科技有限公司 Medical image based segmentation and 3D reconstruction method and 3D printing system

Also Published As

Publication number Publication date
CN109147061A (en) 2019-01-04


Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant