CN117635634A - Image processing method, device, electronic equipment, chip and storage medium

Publication number: CN117635634A
Authority: CN (China)
Prior art keywords: cut, curved surface, mesh, area, mouse
Legal status: Pending
Application number: CN202311567456.1A
Other languages: Chinese (zh)
Inventor: 周小凤
Current Assignee: China Mobile Communications Group Co Ltd; China Mobile Suzhou Software Technology Co Ltd
Original Assignee: China Mobile Communications Group Co Ltd; China Mobile Suzhou Software Technology Co Ltd
Application filed by: China Mobile Communications Group Co Ltd; China Mobile Suzhou Software Technology Co Ltd
Priority to: CN202311567456.1A


Abstract

The present disclosure provides an image processing method, an apparatus, an electronic device, a chip, and a storage medium, the method including: acquiring a boundary contour of a region to be cut from a three-dimensional image by using a mouse pickup method to obtain a target cutting line of the region to be cut; the three-dimensional image comprises a plurality of Mesh grids; based on the target cutting line, performing collision test and gray fusion on the to-be-cut area to generate a first curved surface; and generating a curved surface model of the region to be cut based on the first curved surface. According to the scheme provided by the disclosure, the specified area can be cut in any shape.

Description

Image processing method, device, electronic equipment, chip and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, an electronic device, a chip, and a storage medium.
Background
Volume cutting is an interactive tool for exposing the internal structure of three-dimensional model volume data. In a volume rendering application scene based on three-dimensional texture mapping, a shearing plane can be used to specify the boundary of the volume data, and the opacity at each sampling point is modulated directly according to the distance between the sampling point and the observation point, thereby achieving the effect of volume cutting; for a three-dimensional model with a complex structure, extraction and segmentation of the model can further be realized with additional manual work and deeper processing.
However, how to cut a specified area into an arbitrary shape is a problem that still needs to be solved.
Disclosure of Invention
The present disclosure provides an image processing method, an image processing apparatus, an electronic device, a chip, and a storage medium, which can realize cutting of an arbitrary shape for a specified area.
An embodiment of a first aspect of the present disclosure proposes an image processing method, the method including:
acquiring a boundary contour of a region to be cut from a three-dimensional image by using a mouse pickup method to obtain a target cutting line of the region to be cut; the three-dimensional image comprises a plurality of Mesh grids;
based on the target cutting line, performing collision test and gray fusion on the to-be-cut area to generate a first curved surface;
and generating a curved surface model of the region to be cut based on the first curved surface.
In the above scheme, the performing collision test and gray level fusion on the to-be-cut area based on the target cutting line to generate a first curved surface includes:
performing collision test on the target cutting line and the Mesh grids, and determining a plurality of first Mesh grids intersecting the target cutting line from the Mesh grids;
determining intersection points of the first Mesh grids and the target cutting lines to obtain first intersection points;
performing gray fusion on the Mesh grids to obtain a gray fusion result;
sequencing the first intersection points based on the gray fusion result to obtain a sequencing result;
and generating a first curved surface based on the sorting result.
In the above solution, the generating a first curved surface based on the sorting result includes:
generating intersecting lines of the target cutting lines and the Mesh grids based on the sorting result to obtain first intersecting lines;
and determining the Mesh grid of the area to be cut based on the first intersecting line, and generating a first curved surface.
In the above solution, the determining the Mesh grid of the to-be-cut area based on the first intersecting line, and generating the first curved surface include:
determining a grid area positioned in the area to be cut in each first Mesh grid based on the first intersecting line to obtain a first area of each first Mesh grid; the first area is of an N-sided shape, and N is an integer greater than 3;
triangularizing the first area of each first Mesh grid to obtain a plurality of corresponding second Mesh grids;
determining a plurality of internal grids of the region to be cut;
and generating a first curved surface based on the plurality of second Mesh grids and the plurality of internal grids.
In the above solution, the determining a plurality of internal grids of the area to be cut includes:
determining whether a third Mesh grid adjacent to the current internal grid meets a first preset condition; the first preset condition includes that the vertexes of the third Mesh grid do not belong to the vertexes of the internal grid, and the third Mesh grid does not intersect with the target cutting line;
and under the condition that the third Mesh grid meets the first preset condition, determining the third Mesh grid as an internal grid.
In the above scheme, the acquiring the boundary contour of the to-be-cut area in the three-dimensional image by using the mouse pickup method to obtain the target cutting line of the to-be-cut area includes:
performing mouse marking on the boundary position of the region to be cut in the three-dimensional image to obtain coordinate values of a plurality of marking points and obtain coordinate information;
acquiring coordinate tracks of the mouse among different coordinate points to obtain track information;
and obtaining the target cutting line of the region to be cut based on the coordinate information and the track information.
In the above solution, the generating, based on the first curved surface, a curved surface model of the to-be-cut area includes:
performing point normal vector movement on the first curved surface to generate a model matrix;
and generating a curved surface model of the region to be cut based on the model matrix.
In the above solution, the generating, based on the model matrix, a curved surface model of the region to be cut includes:
and carrying out point cloud reconstruction on the inner and outer layer point cloud data of the model matrix to obtain a curved surface model of the region to be cut.
An embodiment of a second aspect of the present disclosure proposes an image processing apparatus including:
the computing unit is used for acquiring the boundary outline of the area to be cut from the three-dimensional image by utilizing a mouse pickup method to obtain a target cutting line of the area to be cut; the three-dimensional image comprises a plurality of Mesh grids;
the determining unit is used for carrying out collision test and gray fusion on the to-be-cut area based on the target cutting line to generate a first curved surface;
and the processing unit is used for generating a curved surface model of the area to be cut based on the first curved surface.
An embodiment of a third aspect of the present disclosure proposes an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described in the embodiments of the first aspect of the present disclosure.
An embodiment of a fourth aspect of the present disclosure proposes a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method described in the embodiment of the first aspect of the present disclosure.
A fifth aspect embodiment of the present disclosure proposes a chip comprising one or more interfaces and one or more processors; the interface is for receiving a signal from a memory of the electronic device and sending the signal to the processor, the signal comprising computer instructions stored in the memory, which when executed by the processor, cause the electronic device to perform the method described in the embodiments of the first aspect of the disclosure.
In summary, the image processing method, the device, the electronic equipment, the chip and the storage medium provided by the disclosure acquire the boundary contour of the region to be cut in the three-dimensional image by using a mouse pickup method to obtain the target cutting line of the region to be cut; the three-dimensional image comprises a plurality of Mesh grids; based on the target cutting line, performing collision test and gray fusion on the to-be-cut area to generate a first curved surface; and generating a curved surface model of the region to be cut based on the first curved surface. According to the technical scheme provided by the embodiment of the disclosure, the target cutting line of the region to be cut is obtained by using the mouse pickup method, so that track extraction can be simply and efficiently realized, and a personalized curved surface model can be obtained by carrying out collision test and gray level fusion on the region to be cut based on the cutting line, so that the specified region can be cut in any shape.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a result of recognizing a closed curve by mouse interaction in an image processing method according to an embodiment of the disclosure;
FIG. 3a is a schematic view illustrating a curved surface of a region to be cut in an image processing method according to an embodiment of the disclosure;
FIG. 3b is a schematic diagram of the image processing method according to the embodiment of the disclosure after re-triangulating the polygon mesh of the region to be cut;
FIG. 4a is a schematic diagram of initial mesh vertices in an image processing method according to an embodiment of the disclosure;
FIG. 4b is a schematic diagram of a first vertex search result in an image processing method according to an embodiment of the disclosure;
FIG. 4c is a schematic diagram of all internal vertices after searching in the image processing method according to the embodiment of the disclosure;
fig. 5a is a schematic diagram of a three-dimensional display result of a Mesh grid of a three-dimensional image in an image processing method according to an embodiment of the disclosure;
Fig. 5b is a schematic diagram illustrating a normal vector representation of Mesh vertices in an image processing method according to an embodiment of the disclosure;
FIG. 5c is a schematic view of a curved surface model of a region to be cut in an image processing method according to an embodiment of the disclosure;
FIG. 5d is a schematic diagram showing a normal vector representation of a region to be cut in an image processing method according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure;
fig. 8 is a schematic diagram of a chip structure according to an embodiment of the disclosure.
Detailed Description
Embodiments of the present disclosure are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present disclosure and are not to be construed as limiting the present disclosure.
The 3D printing technology is a rapid prototyping technology; with it, the geometric structure and data characteristics of a three-dimensional image can be acquired so that an object of any shape can be printed. However, designing a three-dimensional model is difficult and demands a strong professional background, so creating and refining an ideal three-dimensional model consumes a great deal of time and effort, and a personalized model is therefore difficult to obtain quickly.
In the related art, volume data of a three-dimensional model may be extracted through a volume cutting technique, and a personalized model may then be generated through three-dimensional scanning and 3D printing. Volume cutting is an interactive tool that can be used to reveal the internal structure of the volume data by cutting with a controllable shearing plane. However, when cutting with a shearing plane, the user cannot cut in an arbitrary shape, which makes accurate and effective cutting difficult. Although a cutting track can be identified with tools such as deep learning models, the identification speed is limited and the identified edges contain defects such as burrs, which degrades the model identification effect and lowers the cutting accuracy.
Based on the above, in each embodiment of the present disclosure, the target cutting line of the area to be cut is obtained by using the mouse pickup method, so that the track extraction can be simply and efficiently implemented, and the individualized curved surface model can be obtained by performing the collision test and the gray fusion on the area to be cut based on the cutting line, so as to implement the cutting of any shape on the designated area.
Fig. 1 provides a schematic flow chart of an image processing method, which can be applied to electronic equipment, and particularly can be applied to electronic equipment such as Personal Computers (PCs), servers and the like. As shown in fig. 1, the method may include:
Step 101: acquiring a boundary contour of a region to be cut from a three-dimensional image by using a mouse pickup method to obtain a target cutting line of the region to be cut; the three-dimensional image includes a plurality of Mesh grids.
In practical application, the contour curve information of the closed area determined by the mark points can be picked up by using the Visualization Toolkit (VTK) from Kitware. The acquired initial image data can first be converted into VTK three-dimensional data by using an image processing tool such as ITK (Insight Segmentation and Registration Toolkit). Specifically, initial image data, such as Dicom data exported from a scan, may be acquired first; an image filter in ITK (for example, the binary threshold image filter, BinaryThresholdImageFilter) can then be used to segment the acquired initial image data based on a binary segmentation algorithm to obtain ITK image data, which is converted into VTK three-dimensional data. An erosion image filter (for example, the binary erosion image filter, BinaryErodeImageFilter) can be used to erode the VTK three-dimensional data based on the VTK erosion algorithm to obtain an eroded image. Finally, the eroded image may be subtracted from the original image by a subtraction image filter (SubtractImageFilter) to obtain cut surface model data, and the surface model data result may be output in STL format. By way of example, a standard thin-layer head CT scan image of a patient may be acquired, with a slice thickness of 2 mm and a 512 x 512 scan matrix, to obtain the initial image data, and the head skin data are obtained by subtracting the eroded image from the original image using the subtraction image filter.
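As an illustration only, the preprocessing described above can be sketched with the ITK filters named in this paragraph; the pixel type, file name, threshold values and erosion radius below are assumptions, not values taken from this disclosure.

```cpp
#include "itkBinaryBallStructuringElement.h"
#include "itkBinaryErodeImageFilter.h"
#include "itkBinaryThresholdImageFilter.h"
#include "itkImage.h"
#include "itkImageFileReader.h"
#include "itkSubtractImageFilter.h"

using ImageType = itk::Image<short, 3>;

int main()
{
  // Read the scanned volume (e.g. a DICOM series already converted to one file).
  auto reader = itk::ImageFileReader<ImageType>::New();
  reader->SetFileName("head_ct.mha");  // hypothetical input path

  // Binary segmentation of the tissue range (threshold values are assumed).
  auto threshold = itk::BinaryThresholdImageFilter<ImageType, ImageType>::New();
  threshold->SetInput(reader->GetOutput());
  threshold->SetLowerThreshold(-200);
  threshold->SetUpperThreshold(3000);
  threshold->SetInsideValue(1);
  threshold->SetOutsideValue(0);

  // Erode the binary mask by one voxel.
  using KernelType = itk::BinaryBallStructuringElement<short, 3>;
  KernelType kernel;
  kernel.SetRadius(1);
  kernel.CreateStructuringElement();
  auto erode = itk::BinaryErodeImageFilter<ImageType, ImageType, KernelType>::New();
  erode->SetInput(threshold->GetOutput());
  erode->SetKernel(kernel);
  erode->SetErodeValue(1);

  // Subtract the eroded mask from the segmented mask to keep only the surface shell.
  auto subtract = itk::SubtractImageFilter<ImageType, ImageType, ImageType>::New();
  subtract->SetInput1(threshold->GetOutput());
  subtract->SetInput2(erode->GetOutput());
  subtract->Update();

  // The shell mask could then be handed to VTK (e.g. via itk::ImageToVTKImageFilter),
  // turned into a surface, and exported as STL, as described above.
  return 0;
}
```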
In practical applications, the three-dimensional image may also be referred to as a 3D image, which is not limited by the embodiments of the present disclosure.
In practical application, a closed edge curve can be interactively drawn in a three-dimensional image by using a mouse pickup method, so that a target cutting line of a region to be cut is obtained.
In actual application, in the process of picking up the contour by using the mouse, marking points can be firstly carried out, and then the cutting track of the mouse is obtained in real time by using an interpolation method, so that the mouse interaction function of extracting the boundary contour information of the region to be cut is realized; wherein the area to be cut may also be referred to as a target area, which is not limited by the embodiments of the present disclosure.
Based on this, in an embodiment, the obtaining the boundary contour of the to-be-cut area in the three-dimensional image to obtain the target cutting line of the to-be-cut area may include:
performing mouse marking on the boundary position of the region to be cut in the three-dimensional image to obtain coordinate values of a plurality of marking points and obtain coordinate information;
acquiring coordinate tracks of the mouse among different coordinate points to obtain track information;
and obtaining the target cutting line of the region to be cut based on the coordinate information and the track information.
Specifically, mouse mark points can be confirmed at the boundary position of the region to be cut in the three-dimensional image to obtain the curve information of the region to be cut; interpolation is then performed on the curve information to obtain the mouse track P, and a closed boundary contour is generated, thereby obtaining the track information.
Here, a function can be called to confirm the mouse mark points of the region to be cut on the surface of the three-dimensional image, so that the mouse track is obtained; for example, the OnLeftButtonDown() function may be called for mouse mark-point confirmation, and the mouse track is simulated by calling the OnMouseMove() function.
In practical application, in the process of obtaining the target cutting line of the area to be cut based on the coordinate information and the track information, a mouse pickup method can be utilized to calculate the intersection point of the interpolation point and the three-dimensional image surface, and the practical cutting mark point is obtained.
In practical application, after the actual cutting mark points are obtained, the actual cutting mark points can be stored in the tracks variable.
Here, in the process of picking up the contour by the mouse, the screen coordinate value can be obtained by the mouse, and then the world coordinate value cutting track, that is, the actual cutting track is obtained.
In practical application, the practical cutting marking point can be determined according to the vertex of the Mesh grid intersected with the mouse marking point in the three-dimensional image and the intersecting point of the mouse and the Mesh grid, namely the mouse marking point.
Specifically, whether the vertex coordinates of the Mesh grid intersected with the mouse mark point and the coordinates of the mouse mark point in the three-dimensional image meet the preset condition can be judged, and under the condition that the preset condition is met, the actual cutting mark point is calculated according to the vertex coordinates of the Mesh grid intersected with the mouse mark point and the coordinates of the mouse mark point in the three-dimensional image.
Illustratively, let the vertex coordinates of a Mesh grid of the three-dimensional surface (polydata structure) in space be a_1, a_2, a_3, and let the intersection point of the mouse ray and the Mesh be C(x, y, z); the vertices and the intersection point then satisfy the following formulas:

C = t(a_1 - a_3) + U × (a_2 - a_3) - V × (a_3 - a_2);  (1)

C = E_d + λF_d;  (2)

where U and V represent the isoparametric lines in the primary and secondary directions generated when the curved surface is parameterized, with 0 ≤ U ≤ 1 and 0 ≤ V ≤ 1; E_d represents the starting point of the connecting line of the mouse mark points, F_d represents the direction between the mouse mark points, and λ and t represent parameter coefficients with λ > 0 and t > 0.

E_d and F_d satisfy the following formula:

E_d + λF_d = t(a_1 - a_3) + U × (a_2 - a_3) - V × (a_3 - a_2);  (3)

The preset condition is:

|(a_1 - a_3), -F_d, (a_2 - a_3), -(a_3 - a_2)| ≠ 0;  (5)

Under the condition that the preset condition of formula (5) is met, the actual cutting contour mark point picked up by the mouse, i.e., the actual cutting mark point, can be obtained by calculating the values of t, λ, U and V.
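For illustration, the per-triangle solve implied by formulas (1) to (5) resembles a standard ray-triangle intersection test (Möller–Trumbore style); the sketch below is an assumption-based example rather than the exact solver of this disclosure.

```cpp
#include <array>
#include <cmath>
#include <optional>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
  return {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
}
static double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0]+a[1]*b[1]+a[2]*b[2]; }

// Intersect the pick ray (origin Ed, direction Fd) with the triangle (a1, a2, a3).
// Returns the intersection point C when the determinant condition holds and the
// barycentric parameters stay inside the triangle.
std::optional<Vec3> pickTriangle(const Vec3& Ed, const Vec3& Fd,
                                 const Vec3& a1, const Vec3& a2, const Vec3& a3)
{
  const Vec3 e1 = sub(a2, a3);       // one triangle edge
  const Vec3 e2 = sub(a1, a3);       // another triangle edge
  const Vec3 p  = cross(Fd, e1);
  const double det = dot(e2, p);     // vanishing determinant: ray parallel to the triangle
  if (std::fabs(det) < 1e-12) return std::nullopt;

  const Vec3 s = sub(Ed, a3);
  const double U = dot(s, p) / det;
  if (U < 0.0 || U > 1.0) return std::nullopt;

  const Vec3 q = cross(s, e2);
  const double V = dot(Fd, q) / det;
  if (V < 0.0 || U + V > 1.0) return std::nullopt;

  const double lambda = dot(e1, q) / det;   // distance along the pick ray
  if (lambda <= 0.0) return std::nullopt;

  return Vec3{Ed[0] + lambda * Fd[0], Ed[1] + lambda * Fd[1], Ed[2] + lambda * Fd[2]};
}
```

The determinant test plays the role of the preset condition of formula (5): when it vanishes, the pick ray is parallel to the Mesh grid and no actual cutting mark point is produced.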
The mouse picking process at least comprises two parts of mouse interaction design and mouse picking curve result indication; these two parts are described in detail below.
The first part, the mouse interaction design.
In actual application, the observer mode can be used for mouse interaction operation, and when mouse related information is received, a corresponding event processing function is called; illustratively, after receiving a mouse left-click event message, an onLeftButtonDown () function is invoked to handle the specific event function triggered by the left-click action.
In actual application, a custom interaction style class can be derived from the class through which VTK provides its basic mouse-operation application programming interface (for example, vtkInteractorStyleTrackballCamera), overriding the OnKeyPress(), OnLeftButtonDown(), OnLeftButtonUp() and OnMouseMove() functions of that class and adding the pick-up curve function drawCurve(); the interaction style and the curve function can be user-defined, and how to define them can be determined according to the actual application scene, which is not limited by the embodiments of the disclosure.
in the rewritten virtual function onLeftButtonDown (), the coordinate value in the screen coordinate picked up by the mouse can be obtained through the call of the GetEventposition () function, and the coordinate value of the mouse pick-up position (x, y, z) in the world coordinate system is returned through the GetWorldPoint () function; pick-up point rendering data may also be added to the Actor rendering pipeline.
In the process of moving the mouse, the mouse-move event can be handled by calling the OnMouseMove() function. Specifically, at a preset refresh interval, the screen coordinate value currently picked up by the mouse is captured by calling the GetEventPosition() function, and the screen coordinate value picked up at the previous moment is obtained by calling the GetLastEventPosition() function. The captured window coordinate values are then passed to the Pick(double x, double y, double z, vtkRenderer*) function; this function has four parameters, where the first three (x, y, z) represent the current window coordinate values, z may be 0, and the renderer represents the interaction object. The world coordinate value corresponding to the screen coordinate can then be obtained by calling the GetPickPosition() function and stored in a vector container. In practical application, the geometric structure of the Point Data of the data set polydata can be defined and stored; the mouse pick-up point rendering data can then be added into the Actor rendering pipeline; finally, the coordinate track picked up during the mouse movement can be drawn and displayed in three dimensions by calling the custom drawCurve() function.
In the drawCurve() function, vtkLine can be used to draw, as the system time is continuously refreshed during movement, the line segment between the coordinates of the previous and current picked mark points. Specifically, when the mouse starts to move, the left mouse button is clicked to record the coordinate value of the initial movement position; during the movement the left button is held down, the position coordinates picked up by the mouse in the screen window are continuously captured, and the connecting line between these position coordinates and the three-dimensional display coordinate points, i.e., the movement track of the mouse, is drawn. When the left mouse button is released, the track information P_1 of the current segment is recorded, and a track line P(g, p, vecG, vecEg, vecGp) is defined, recording respectively the start and end coordinate values of each track, the grid index values, the grids passed, the grid edge information, and the intersection coordinates of the track with the edges. When the mouse moves again, the left button is pressed and the track coordinates swept over the data object are captured; when the left button is released again, a track P_2 is generated. When the mouse returns to the initial position, the right mouse button is clicked to finish the curve drawing. All the track curve data P of the mouse on the curved surface are added to the vector container tracks for storage. If the distance between the end coordinate and the start coordinate of the last track is smaller than a preset threshold, the track is connected with the first track curve to form a closed curve area.
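The interaction flow of this first part can be sketched as a VTK interactor-style subclass; apart from the VTK calls named above (OnLeftButtonDown(), OnMouseMove(), GetEventPosition(), Pick(), GetPickPosition()), the class and member names are illustrative assumptions.

```cpp
#include <vtkCellPicker.h>
#include <vtkInteractorStyleTrackballCamera.h>
#include <vtkObjectFactory.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkSmartPointer.h>
#include <array>
#include <vector>

// Hypothetical interaction style: records world-coordinate pick points while the
// left button is held, roughly following the behaviour described above.
class MouseContourStyle : public vtkInteractorStyleTrackballCamera
{
public:
  static MouseContourStyle* New();
  vtkTypeMacro(MouseContourStyle, vtkInteractorStyleTrackballCamera);

  void OnLeftButtonDown() override
  {
    drawing = true;
    PickCurrentPosition();                       // record the start point of this track
    vtkInteractorStyleTrackballCamera::OnLeftButtonDown();
  }

  void OnLeftButtonUp() override
  {
    drawing = false;                             // close the current track segment (P_1, P_2, ...)
    vtkInteractorStyleTrackballCamera::OnLeftButtonUp();
  }

  void OnMouseMove() override
  {
    if (drawing)
      PickCurrentPosition();                     // keep appending points along the drag
    vtkInteractorStyleTrackballCamera::OnMouseMove();
  }

private:
  void PickCurrentPosition()
  {
    int* pos = this->GetInteractor()->GetEventPosition();   // screen coordinates
    auto picker = vtkSmartPointer<vtkCellPicker>::New();
    // Assumes SetDefaultRenderer() was called on this style beforehand.
    picker->Pick(pos[0], pos[1], 0, this->GetDefaultRenderer());
    double world[3];
    picker->GetPickPosition(world);                          // world coordinates on the surface
    tracks.push_back({world[0], world[1], world[2]});
  }

  bool drawing = false;
  std::vector<std::array<double, 3>> tracks;     // picked contour points (the "tracks" variable)
};
vtkStandardNewMacro(MouseContourStyle);
```

In a full implementation the captured points would be handed to the drawCurve() rendering step described next, and the style would be attached to the interactor with SetInteractorStyle().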
The second part, the mouse pickup curve result description.
In actual application, the process of drawing a curve with the mouse can be displayed in real time by calling the drawCurve() function, and the interaction information can be obtained. Specifically, the boundary contour information picked up through mouse interaction is first used to establish the geometry and topology structures of the data set polydata, and the defined point data and cell data are then added into polydata by calling the SetPoints() function and the SetLines() function. A color value and a curve width attribute value are set through the mapper, and the mouse track is added into the current visualization pipeline for real-time rendering. Fig. 2 is a schematic diagram of a closed curve recognized through mouse interaction; as shown by the black line in Fig. 2, the initial position of the mouse is A, a closed curve is drawn on the loaded rabbit model through mouse movement, and the track information of the curve on the interaction object is recorded.
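A minimal sketch of the drawCurve()-style rendering step, assuming the picked world coordinates have already been collected; the function and variable names are illustrative.

```cpp
#include <vtkActor.h>
#include <vtkCellArray.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkPolyDataMapper.h>
#include <vtkPolyLine.h>
#include <vtkProperty.h>
#include <vtkRenderer.h>
#include <vtkSmartPointer.h>
#include <array>
#include <vector>

// Build a polyline from the picked contour points and add it to the renderer.
void drawCurve(vtkRenderer* renderer, const std::vector<std::array<double, 3>>& tracks)
{
  auto points = vtkSmartPointer<vtkPoints>::New();
  auto line = vtkSmartPointer<vtkPolyLine>::New();
  line->GetPointIds()->SetNumberOfIds(static_cast<vtkIdType>(tracks.size()));
  for (vtkIdType i = 0; i < static_cast<vtkIdType>(tracks.size()); ++i)
  {
    points->InsertNextPoint(tracks[i][0], tracks[i][1], tracks[i][2]);
    line->GetPointIds()->SetId(i, i);
  }

  auto cells = vtkSmartPointer<vtkCellArray>::New();
  cells->InsertNextCell(line);

  auto polyData = vtkSmartPointer<vtkPolyData>::New();
  polyData->SetPoints(points);     // point geometry
  polyData->SetLines(cells);       // cell topology

  auto mapper = vtkSmartPointer<vtkPolyDataMapper>::New();
  mapper->SetInputData(polyData);

  auto actor = vtkSmartPointer<vtkActor>::New();
  actor->SetMapper(mapper);
  actor->GetProperty()->SetColor(0.0, 0.0, 0.0);   // curve color
  actor->GetProperty()->SetLineWidth(2.0);         // curve width

  renderer->AddActor(actor);
}
```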
Step 102: and performing collision test and gray level fusion on the region to be cut based on the target cutting line to generate a first curved surface.
In practical application, the three-dimensional image may be an irregular triangular Mesh with a side and point topological structure, namely a Mesh, the data to be cut may be surface layer curved surface data of the Mesh, continuous curve data drawn by a user on the Mesh form a boundary contour of the area to be cut, and a curve of the area to be cut, namely a target cutting line, is formed; the line segment intersecting each Mesh grid in the target cutting line can be called as a grid cutting line of the Mesh grid, the grid cutting line is a basic cutting unit for cutting an irregular triangular grid curved surface, and the target cutting line of the area to be cut is composed of one or more grid cutting lines connected end to end.
In an embodiment, the three-dimensional image includes a plurality of Mesh grids, and the performing collision test and gray fusion on the to-be-cut area based on the target cutting line to generate a first curved surface may include:
step 1021: and performing collision test on the target cutting line and the Mesh grids, and determining a plurality of first Mesh grids intersecting the target cutting line from the Mesh grids.
In practical application, before cutting an area to be cut, the position relationship between the target cutting line and the Mesh grid needs to be judged first, in the process, a first Mesh grid intersected with the target cutting line can be determined, and intersection lines and intersection points of the first Mesh grid and the target cutting line can be determined.
In practical application, a structure tree of the directional bounding box can be established, and then collision detection is carried out on the target cutting line and the cut curved surface (namely, the three-dimensional image) so as to determine a first Mesh grid intersected with the target cutting line.
Specifically, a directional bounding box structure tree can be built by utilizing the VTK OBB, i.e., an OBB tree is built. During construction of the OBB tree, a bounding box of the whole object is first established, the bounding box is then divided into two parts, and the two decomposed bounding-box structure tree nodes are used as child nodes of the bounding-box node. Collision detection is then carried out between the target cutting line and the cut surface (i.e., the plurality of Mesh grids), and the intersecting positions are determined to obtain the intersection information td(r1, r2), where r1 belongs to the Mesh grid curved surface on the cut surface and r2 is the cutting-line track information picked up by mouse interaction.
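As a sketch only, vtkOBBTree in VTK can perform this cutting-line versus surface collision query; the per-segment wrapper around it is an assumption.

```cpp
#include <vtkIdList.h>
#include <vtkOBBTree.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkSmartPointer.h>
#include <set>

// Collect the ids of Mesh cells (triangles) hit by one grid cutting line segment.
std::set<vtkIdType> collideSegment(vtkPolyData* surface,
                                   const double segStart[3], const double segEnd[3])
{
  auto obbTree = vtkSmartPointer<vtkOBBTree>::New();
  obbTree->SetDataSet(surface);      // the cut surface (the Mesh grids)
  obbTree->BuildLocator();           // builds the oriented-bounding-box structure tree

  auto hitPoints = vtkSmartPointer<vtkPoints>::New();   // intersection points on the surface
  auto hitCells = vtkSmartPointer<vtkIdList>::New();    // intersected cells (first Mesh grids)
  obbTree->IntersectWithLine(segStart, segEnd, hitPoints, hitCells);

  std::set<vtkIdType> cells;
  for (vtkIdType i = 0; i < hitCells->GetNumberOfIds(); ++i)
    cells.insert(hitCells->GetId(i));
  return cells;
}
```

In practice the locator would be built once and queried for every grid cutting line segment; the hit points collected per segment also provide the first intersection points used in step 1022.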
Step 1022: and determining intersection points of the plurality of first Mesh grids and the target cutting lines to obtain a plurality of first intersection points.
Cutting is performed on the Mesh grids, where the basic cutting unit is the grid cutting line, i.e., the segment of the target cutting line corresponding to each first Mesh grid. During cutting, the information (td1, td2, …, tdn) of the intersected Mesh grids (i.e., the first Mesh grids) can be determined by constructing the OBB tree, and the intersection points of the target cutting line with the first Mesh grid surfaces are obtained; in this process, it is first determined whether the target cutting line intersects the collided Mesh grid, and the intersection points are then determined according to the intersected edges. After all Mesh grids are cut, the intersection points and intersected edges of the target cutting line and the Mesh grids are obtained, and the edge intersection points (i.e., the first intersection points) are then processed in gray scale and sorted through the gray fusion of the Mesh grids. Finally, the intersection line between the target cutting line and the Mesh grids, which marks out the first Mesh grids of the area to be cut, is calculated from the sorted intersection points; the intersection line may be one continuous curve or a plurality of continuous curves.
In practical application, the Mesh grid may also be referred to as a triangular grid, which is not limited in the embodiment of the disclosure, so long as the function thereof can be realized; the first Mesh grid may also be referred to as a boundary triangle grid, which is not limited in this embodiment of the present disclosure, as long as the function thereof can be implemented.
Step 1023: and carrying out gray fusion on the Mesh grids to obtain a fusion result.
In practical application, pixels with larger differences in the to-be-cut area can be determined according to a preset pixel threshold, and gray fusion is performed on Mesh grids of the to-be-cut area according to the pixels with larger differences.
Illustratively, gray scale processing is first carried out on the plurality of Mesh grids of the region to be cut, where the image pixels D of the different gray areas can be expressed by the following formula:

Then, according to the preset pixel thresholds G_1 and G_2, the pixels of the gray-processed region to be cut that are larger than G_1 and smaller than G_2 are denoted I(G_1) and I(G_2), respectively. Gray fusion of the image of the region to be cut is performed by using the following formula:

where σ_k represents the product of the local mean curvature and the unit normal vector, and ω represents the weight relation among the Mesh grid vertices; u and v represent the frequency-domain coefficient coordinates of the image, μ is the imaginary unit, A is the interference vector, and s represents the area of the triangular mesh.

The constraint condition of the image gray level fusion is:

where δ_a(t) represents the three-dimensional image weighting function, and d represents the time-domain coordinate translation.
In practical application, gray fusion is carried out on a plurality of Mesh grids of the region to be cut, so that correlation of pixels in a local region can be introduced, accuracy of fusion results is improved, cutting information can be better extracted, and cutting accuracy is improved.
Step 1024: and sequencing the first intersection points based on the gray fusion result to obtain a sequencing result.
In actual application, fusing the images preserves the overall effect of the cut image. In the related art, spatial-domain image fusion methods directly process the pixel values of the cut image with fusion strategies such as the maximum-value method or the weighted-average method; however, with these two methods, detail information such as edges and contours is seriously lost and the fusion effect is poor. Compared with the image fusion methods in the related art, the embodiments of the present disclosure perform gray fusion on the plurality of Mesh grids of the area to be cut and sort the intersection points of the target cutting line and the Mesh grids according to the fusion result; because the correlation of pixels within the local area is introduced, the accuracy of the fusion result can be improved, the cutting information can be better extracted, and the cutting accuracy can be improved.
Step 1025: and generating a first curved surface based on the sorting result.
In practical application, the intersection line between the target cutting line and the first Mesh grid can be determined according to the ordered intersection points, so that the first curved surface can be determined according to the intersection line.
Based on this, in an embodiment, the generating a first curved surface based on the sorting result may include:
generating intersecting lines of the target cutting lines and the Mesh grids based on the sorting result to obtain first intersecting lines;
and determining the Mesh grid of the area to be cut based on the first intersecting line, and generating a first curved surface.
In actual application, after Mesh grid cutting is carried out, a new curved surface formed by a boundary triangular grid determined by intersecting lines can be generated according to the original topological position relation on the initial Mesh grid curved surface; because a plurality of intersection points are added on the generated new curved surface edge, the Mesh grid to which the intersection points belong is cut into polygons, and the polygons can be re-triangulated in order to ensure the integrity of the Mesh grid curved surface topological structure.
Based on this, in an embodiment, the determining, based on the first intersection, the Mesh grid of the area to be cut, and generating a first curved surface may include:
determining a grid area positioned in the area to be cut in each first Mesh grid based on the first intersecting line to obtain a first area of each first Mesh grid; the first area is of an N-sided shape, and N is an integer greater than 3;
Triangularizing the first area of each first Mesh grid to obtain a plurality of corresponding second Mesh grids;
determining a plurality of internal grids of the region to be cut;
and generating a first curved surface based on the plurality of second Mesh grids and the plurality of internal grids.
In practical application, a surface reconstruction filter in VTK (vtkSurfaceReconstructionFilter) can be used to re-triangulate the newly cut curved surface. Specifically, an implicit surface can be reconstructed from the three-dimensional point cloud with the reconstruction filter, the grid is sampled, the signed distance measure from each point to the surface is calculated, and the surface profile of the zero isosurface of the grid surface is extracted with the contour filter in VTK (vtkContourFilter). Through triangular reconstruction, the curved surface topological structure can be better maintained, the precision of the cutting result is ensured, and the method is suitable for real-time processing with strong interactivity. Illustratively, Fig. 3a shows a schematic view of the cut curved surface of a region to be cut, and Fig. 3b shows a schematic view of the curved surface of the region to be cut after the polygons in Fig. 3a have been re-triangulated.
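For illustration, the re-triangulation step might be wired up as below; the neighborhood size and sample spacing are assumed parameters.

```cpp
#include <vtkContourFilter.h>
#include <vtkPolyData.h>
#include <vtkSmartPointer.h>
#include <vtkSurfaceReconstructionFilter.h>

// Rebuild a triangulated surface from the point cloud of the cut region.
vtkSmartPointer<vtkPolyData> retriangulate(vtkPolyData* cutRegionPoints)
{
  auto reconstruct = vtkSmartPointer<vtkSurfaceReconstructionFilter>::New();
  reconstruct->SetInputData(cutRegionPoints);   // points of the cut surface
  reconstruct->SetNeighborhoodSize(20);         // assumed parameter
  reconstruct->SetSampleSpacing(0.5);           // assumed parameter

  // Extract the zero isosurface of the signed-distance volume produced above.
  auto contour = vtkSmartPointer<vtkContourFilter>::New();
  contour->SetInputConnection(reconstruct->GetOutputPort());
  contour->SetValue(0, 0.0);
  contour->Update();

  auto result = vtkSmartPointer<vtkPolyData>::New();
  result->DeepCopy(contour->GetOutput());
  return result;
}
```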
In practical application, after the mouse contour pickup is completed on the volume data of the three-dimensional image, the three-dimensional texture used for drawing is updated through triangular reconstruction, but the related texture representing the transfer function is maintained, drawing is carried out again, and the consistency of drawing effects before and after cutting is ensured; meanwhile, the drawing algorithm is completed in one drawing channel, so that the drawing has higher real-time performance.
In the process of determining the internal grid, the rest internal grid vertexes can be searched according to the determined initial internal grid vertexes during actual application; since the region to be cut is a curved surface formed by triangular meshes of a complete topological structure, the rest meshes adjacent to the mesh edges and the vertexes thereof can be obtained according to the index values of the meshes to which the mesh vertexes belong.
Based on this, in an embodiment, the determining the plurality of internal grids of the region to be cut may include:
determining whether a third Mesh grid adjacent to the current internal grid meets a first preset condition; the first preset condition includes that the vertexes of the third Mesh grid do not belong to the vertexes of the internal grid, and the third Mesh grid does not intersect with the target cutting line;
and under the condition that the third Mesh grid meets the first preset condition, determining the third Mesh grid as an internal grid.
Illustratively, as shown in Fig. 4a, the black box shows a region to be cut composed of four cutting lines, point M is the initial internal Mesh vertex, and the remaining internal Mesh vertices are searched for starting from the vertices of the initial Mesh. Fig. 4b shows the first vertex search result, where the black vertices (M, p1, p2, p3) represent grid points inside the target region and the white vertex (p4) represents a vertex outside the region. During the search, point M is pushed onto the stack S; while the stack is not empty, the top element is popped, and the popped vertex is set as an internal grid vertex (shown as a black vertex). All grid edges connected to that vertex are examined one by one: if the other vertex P of an examined grid edge is not yet an internal grid vertex and no non-traversed cutting line crosses that edge, vertex P is marked as an internal grid vertex and pushed onto the stack. As shown in Fig. 4b, since none of the three edges connecting vertices p1, p2, p3 with vertex M is crossed by a cutting line, p1, p2 and p3 are set as internal mesh vertices and pushed onto the stack. If an odd number of non-traversed cutting lines intersect a triangle edge, the vertex P is marked as an external triangle mesh vertex; as shown in Fig. 4b, since a cutting line crosses the edge connecting vertex M and vertex p4, p4 is set as an external mesh vertex (shown as a white point). While the stack is not empty, the grid edges connected to the popped top element are judged in the same way; when the stack is empty the search ends, and all internal grid vertices obtained after the search are shown in Fig. 4c.
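The vertex search can be sketched as a stack-based flood fill; the adjacency and cutting-line queries below are hypothetical placeholders, since the disclosure does not spell out its own data structures.

```cpp
#include <functional>
#include <set>
#include <stack>
#include <vector>

// edgeCrossedByCut(u, v) should return true when a not-yet-traversed cutting line
// crosses the mesh edge (u, v); neighbors(u) returns the vertices sharing an edge with u.
std::set<int> findInternalVertices(
    int seedVertex,
    const std::function<std::vector<int>(int)>& neighbors,
    const std::function<bool(int, int)>& edgeCrossedByCut)
{
  std::set<int> internal;
  std::stack<int> pending;
  internal.insert(seedVertex);     // the initial internal Mesh vertex M
  pending.push(seedVertex);

  while (!pending.empty())
  {
    const int current = pending.top();
    pending.pop();
    for (int p : neighbors(current))
    {
      if (internal.count(p))
        continue;                  // already classified as internal
      if (edgeCrossedByCut(current, p))
        continue;                  // a cutting line separates p: external vertex
      internal.insert(p);          // edge not crossed: p lies inside the region
      pending.push(p);
    }
  }
  return internal;
}
```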
Step 103: and generating a curved surface model of the region to be cut based on the first curved surface.
In practical application, the first curved surface may also be referred to as an initial curved surface, and may also be referred to as an inner surface of the substrate, which is not limited in the embodiments of the present disclosure, so long as the function thereof can be achieved.
In practical application, after the first curved surface is obtained, the outer surface of the substrate can be obtained in a manner that the first curved surface moves along the direction of the point normal vector, so that a curved surface model is generated.
Based on this, in an embodiment, the generating, based on the first curved surface, a curved surface model of the area to be cut may include:
performing point normal vector movement on the first curved surface to generate a model matrix;
and generating a curved surface model of the region to be cut based on the model matrix.
In practical application, the point normal vector and the unit normal vector of the Mesh grid can be determined first, then a model matrix is generated in a vector offset mode according to a determination result, and specifically, a matrix structure of the curved surface model is generated.
In practical application, the point normal vectors and cell normal vectors of the Mesh grids can be determined with vtkPolyDataNormals in VTK. Illustratively, Fig. 5a shows the display result of the Mesh grids of a three-dimensional image, Fig. 5b shows the normal vector representation of each Mesh vertex, Fig. 5c shows the curved surface model of the area to be cut, and Fig. 5d shows the normal vector representation of the area to be cut.
In practical application, after the correct normal vectors of the curved surface model are obtained, the model matrix can be obtained by vector offset. Specifically, the vertex normal vectors can first be extracted and stored in the variable value, and the mesh vertex coordinates are stored in the three-dimensional array point; a normal-vector offset product factor is then calculated from the set offset distance dis and the normal components value[0], value[1] and value[2]; the grid vertices are then offset by a distance according to the normal-vector product factor and the point coordinate values; finally, a point geometry and a cell topology are established for the offset data point set.
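A sketch of this offset step using vtkPolyDataNormals; since the exact offset-factor formula is not reproduced above, the code simply moves each vertex by the distance dis along its unit point normal, which is an assumption.

```cpp
#include <vtkDataArray.h>
#include <vtkPointData.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkPolyDataNormals.h>
#include <vtkSmartPointer.h>

// Move every vertex of the inner surface along its point normal by distance dis,
// producing the outer surface of the model substrate.
vtkSmartPointer<vtkPolyData> offsetAlongNormals(vtkPolyData* innerSurface, double dis)
{
  auto normals = vtkSmartPointer<vtkPolyDataNormals>::New();
  normals->SetInputData(innerSurface);
  normals->ComputePointNormalsOn();     // point normal vectors
  normals->ComputeCellNormalsOn();      // cell normal vectors
  normals->Update();

  vtkPolyData* withNormals = normals->GetOutput();
  vtkDataArray* value = withNormals->GetPointData()->GetNormals();

  auto offsetPoints = vtkSmartPointer<vtkPoints>::New();
  for (vtkIdType i = 0; i < withNormals->GetNumberOfPoints(); ++i)
  {
    double point[3], n[3];
    withNormals->GetPoint(i, point);
    value->GetTuple(i, n);
    offsetPoints->InsertNextPoint(point[0] + dis * n[0],
                                  point[1] + dis * n[1],
                                  point[2] + dis * n[2]);
  }

  // Keep the original cell topology and replace only the point geometry.
  auto outerSurface = vtkSmartPointer<vtkPolyData>::New();
  outerSurface->DeepCopy(withNormals);
  outerSurface->SetPoints(offsetPoints);
  return outerSurface;
}
```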
In practical application, the point cloud data of the inner layer and the outer layer of the matrix can be used for generating a complete curved surface matrix model, namely a curved surface model, in a point cloud reconstruction mode.
Based on this, in an embodiment, the generating, based on the model matrix, a curved surface model of the region to be cut may include:
and carrying out point cloud reconstruction on the inner and outer layer point cloud data of the model matrix to obtain a curved surface model of the region to be cut.
In practical application, in the process of reconstructing the point cloud, the parameter values of the data points of the inner layer and the outer layer of the matrix can be determined, the data points are corrected according to the parameter values of the data points, and then the topological structure of the curved surface model is constructed.
In actual application, the data point parameter value can be calculated according to the shortest distance mapping relation; specifically, a point with the shortest distance from the data point on the first curved surface can be used as a corresponding point of the data point on the first curved surface, and the normal vector of the corresponding point is the parameter value of the corresponding data point on the first curved surface.
Illustratively, let p denote a data point and q the point on the initial curved surface with the shortest distance to p; the vector pq lies along the normal direction of the initial surface at q, so q satisfies the following condition:

(q - p) × (q_u × q_v) = 0;  (11)

Let l denote the distance of the data point p from the initial-surface point q; equation (11) can then be expressed as:

q(u, v) - p - l(q_u × q_v) = 0;  (12)

where q_u and q_v denote the tangent vectors of the initial curved surface (i.e., the first curved surface) in the u and v parameter directions;

the u and v components are expressed by formula (13).

The parameter values of the data points are calculated according to the conditions shown in formulas (11) to (13), and the corresponding data points are corrected.
After the parameter values of the data points are obtained through calculation, the data points can be corrected according to the parameter values, so that the curved-surface control points are established, and a complete topological structure of the substrate is then constructed. In this process, constructing the topological structure involves local data extraction with several classes. A subset can be extracted from a vtkDataSet using vtkExtractSelection; specifically, vtkExtractSelection extracts a subset of cells and points from the input data, where the first input port receives the vtkDataObject and the second input port receives the content description, a vtkSelection. Each vtkSelectionNode identifies the selected elements according to its contents, and the vtkSelectionNode object is used together with the vtkSelection object. The vtkSelectionNode determines the type of data to be extracted: the CELL field type is set through SetFieldType (vtkSelectionNode::CELL) and the index content type through SetContentType (vtkSelectionNode::INDICES); the vtkSelection is a container that stores vtkSelectionNode objects. To extract the data, the triangle cell information can be supplied in the selection as a vtkIdTypeArray, and the cell data to be extracted are then filtered by calling SetInputConnection() to establish the pipeline connection; SetInputData() is used to process the selection-independent data set; the selected region can then be inverted using INVERSE(); finally, to keep the topology stable, the BuildCells() and BuildLinks() functions need to be called to restore the topology of the object.
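A hedged sketch of the extraction step described above; which cell ids are selected, and the surrounding data flow, are assumptions.

```cpp
#include <vtkExtractSelection.h>
#include <vtkIdTypeArray.h>
#include <vtkInformation.h>
#include <vtkPolyData.h>
#include <vtkSelection.h>
#include <vtkSelectionNode.h>
#include <vtkSmartPointer.h>
#include <vtkUnstructuredGrid.h>
#include <vector>

// Extract (or, with INVERSE, drop) the given triangle cells from a surface mesh.
vtkSmartPointer<vtkUnstructuredGrid> extractCells(vtkPolyData* surface,
                                                  const std::vector<vtkIdType>& cellIds,
                                                  bool invert)
{
  auto ids = vtkSmartPointer<vtkIdTypeArray>::New();
  for (vtkIdType id : cellIds)
    ids->InsertNextValue(id);

  auto node = vtkSmartPointer<vtkSelectionNode>::New();
  node->SetFieldType(vtkSelectionNode::CELL);       // select cells
  node->SetContentType(vtkSelectionNode::INDICES);  // by index number
  node->SetSelectionList(ids);
  if (invert)
    node->GetProperties()->Set(vtkSelectionNode::INVERSE(), 1);

  auto selection = vtkSmartPointer<vtkSelection>::New();
  selection->AddNode(node);

  auto extract = vtkSmartPointer<vtkExtractSelection>::New();
  extract->SetInputData(0, surface);     // data object on port 0
  extract->SetInputData(1, selection);   // selection description on port 1
  extract->Update();

  // The output of vtkExtractSelection is typically a vtkUnstructuredGrid.
  auto result = vtkSmartPointer<vtkUnstructuredGrid>::New();
  result->ShallowCopy(extract->GetOutput());
  return result;
}
```

After such an extraction, calling BuildCells() and BuildLinks() on the resulting surface data, as noted above, restores a consistent topology.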
In practical application, the generated curved surface model can be converted into STL format and output by using the data conversion output interface of the VTK, so that the entity guide template can be manufactured by using a 3D printing mode.
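For example, the STL export could use vtkSTLWriter; the output path is illustrative.

```cpp
#include <vtkPolyData.h>
#include <vtkSTLWriter.h>
#include <vtkSmartPointer.h>

// Write the finished curved surface model to an STL file for 3D printing.
void exportModel(vtkPolyData* surfaceModel)
{
  auto writer = vtkSmartPointer<vtkSTLWriter>::New();
  writer->SetInputData(surfaceModel);
  writer->SetFileName("cut_region_model.stl");  // hypothetical output path
  writer->Write();
}
```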
In summary, according to the technical scheme provided by the embodiment of the disclosure, the target cutting line of the region to be cut is obtained by using the mouse pickup method, so that the track extraction can be simply and efficiently realized, and the personalized curved surface model can be obtained by performing collision test and gray fusion on the region to be cut based on the cutting line, so that the specified region can be cut in any shape.
The technical scheme of the present disclosure is described in further detail below in connection with specific application examples.
The embodiment of the application of the disclosure provides a method for cutting a three-dimensional stereoscopic image. The data source is a standard thin-layer head CT scan image of a patient with a slice thickness of 2 mm and a 512 x 512 scan matrix, exported in Dicom format. In this application embodiment, the Dicom data are segmented using the image binary segmentation algorithm BinaryThresholdImageFilter in the ITK image processing tool, the ITK image is then converted into VTK three-dimensional data, the data are eroded using the VTK erosion algorithm BinaryErodeImageFilter, and finally the eroded image is subtracted from the original image using SubtractImageFilter to obtain the head skin data; the data result is output in STL format.
Specifically, the method comprises the following steps:
step 1: the mouse interactively picks up the boundary contour.
The VTK has some basic mouse interaction functions such as rotation and scaling, but does not provide a cutting function of an irregular curved surface; the method is characterized in that a mode of picking up the contour by a mouse is adopted, the contour curve information of the closed area determined by the mark points is picked up by taking the VTK as a tool, the mouse cutting track is obtained in real time by using an interpolation method, and the mouse interaction function of extracting the boundary contour information of the target area is realized. The specific description is as follows:
(a) Confirming a mouse mark point (onLeftButtonDown () function) of a region to be cut on the surface of the three-dimensional model, wherein the mark point simulates a mouse track (OnMouseMove () function);
(b) Interpolating curve information picked up by a mouse to obtain a mouse track P, and finally generating a closed cutting area;
(c) The intersection point of the interpolation point and the surface of the three-dimensional model is obtained by utilizing a mouse pick-up method, and a real track cutting mark point (stored in a tracks variable) is obtained;
(d) Cutting the closed curved surface to be cut, which is selected by the marking points determined in the steps (a) and (c).
The mouse picking method proceeds as follows: screen coordinate values are acquired through the mouse and then converted into the world-coordinate cutting track. Assuming that the vertex coordinates of a triangular patch of the three-dimensional curved surface (polydata structure) in space are a_1, a_2, a_3, and that the intersection point of the mouse ray with the triangular patch is C(x, y, z), C satisfies equations (1) and (2); when the equation set of formulas (1) and (2) meets the condition of formula (5), the actual cutting contour mark point picked up by the mouse, i.e., the actual cutting mark point, can be obtained by calculating the values of t, λ, U and V.
The specific operation method for interactively picking up the boundary outline by the mouse comprises the following steps:
step 1.1: and (5) mouse interaction design.
Mouse interaction is implemented in observer mode: after mouse information is received, the corresponding event-handling function is called; for example, OnLeftButtonDown() is called to handle the specific event triggered after a mouse left-button-down event message is received. A custom MouseInteractorStyle class is derived from vtkInteractorStyleTrackballCamera (the class through which VTK provides the application programming interface for basic mouse operations) and overrides the OnKeyPress(), OnLeftButtonDown(), OnLeftButtonUp() and OnMouseMove() functions of this class, together with the custom pick-up curve function drawCurve().
In the rewritten virtual function OnLeftButtonDown(), the coordinate values in the screen coordinates picked up by the mouse are acquired through the GetEventPosition() function, and the coordinate values of the mouse pick-up position (x, y, z) in the world coordinate system are returned through the GetWorldPoint() function. The pick-up point rendering data are added to the Actor rendering pipeline.
During mouse movement, the mouse-move event is handled by the OnMouseMove() function. According to the system's user-defined refresh time interval, the screen coordinate values picked up by the mouse at the current moment and at the previous moment are captured within a given time interval through the GetEventPosition() and GetLastEventPosition() functions, respectively. The captured window coordinate values are then passed to the Pick(double x, double y, double z, vtkRenderer*) function, which has four parameters; the first three represent the current window coordinate values, where z is typically 0, and the renderer represents the interaction object. The world coordinate values corresponding to the screen coordinates are then acquired through the GetPickPosition() function and stored in a vector container. A geometry store of the data set polydata three-dimensional Point Data is defined and stored, and the mouse pick-up point rendering data are added into the Actor rendering pipeline. Finally, the coordinate track picked up during mouse movement is drawn and displayed in three dimensions by calling the custom drawCurve() function.
In the drawCurve() function, vtkLine is used to draw the line segment between the coordinates of the previous and current mark points picked up by the mouse as the system time is continuously refreshed during movement. When the mouse starts to move, the left mouse button is clicked to record the coordinate value of the initial movement position; the left button is then held down during the movement, the position coordinates picked up by the mouse in the screen window are continuously captured, and the connecting line between these position coordinates and the three-dimensional display coordinate points, i.e., the movement track of the mouse, is drawn. When the left mouse button is released, the track information P_1 of the current segment is recorded, and a track line P(g, p, vecG, vecEg, vecGp) is defined, recording respectively the start and end coordinate values of each track, the index value of the grid, the grids passed, the edge information of each grid, and the coordinates of the intersection points of the track with the edges. When the mouse moves again, the left button is pressed and the track coordinates swept over the data object are captured; when the left button is released again, a track P_2 is generated. When the mouse returns to the initial position, the right mouse button is clicked and the curve drawing is finished. All the track curve data P of the mouse on the curved surface are added to the vector container tracks for storage. If the distance between the end coordinate and the start coordinate of the last track is smaller than a preset threshold, the track is connected with the first track curve to form a closed curve area.
Step 1.2: mouse pickup curve result illustration
The drawCurve() function displays the mouse curve-drawing process in real time and acquires the interaction information. First, the boundary contour information picked up through mouse interaction is used to establish the cell geometry and cell topology of the dataset polydata, and the defined point data and cell data are added to the polydata through the SetPoints() and SetLines() functions. A color value and a curve-width attribute are then set through the mapper, and the mouse track is added to the current visualization pipeline for real-time rendering. As shown in fig. 2, a closed curve is drawn on the loaded rabbit model (the well-known Stanford University bunny model) by mouse movement, and the track information of the curve on the interactive object is recorded.
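The polydata construction inside drawCurve() might look like the following sketch; the function signature, color and width values are assumptions, while the SetPoints()/SetLines() and mapper/actor pipeline mirrors the description above.

```cpp
#include <vtkActor.h>
#include <vtkCellArray.h>
#include <vtkNew.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkPolyDataMapper.h>
#include <vtkPolyLine.h>
#include <vtkProperty.h>
#include <vtkRenderer.h>
#include <array>
#include <vector>

// Renders the picked trajectory as a polyline (assumed signature).
void drawCurve(const std::vector<std::array<double, 3>>& picked, vtkRenderer* renderer)
{
  vtkNew<vtkPoints> points;                        // cell geometry of the polydata
  vtkNew<vtkPolyLine> polyLine;
  polyLine->GetPointIds()->SetNumberOfIds(static_cast<vtkIdType>(picked.size()));
  for (vtkIdType i = 0; i < static_cast<vtkIdType>(picked.size()); ++i)
  {
    points->InsertNextPoint(picked[i].data());
    polyLine->GetPointIds()->SetId(i, i);
  }

  vtkNew<vtkCellArray> lines;                      // cell topology of the polydata
  lines->InsertNextCell(polyLine);

  vtkNew<vtkPolyData> polyData;
  polyData->SetPoints(points);
  polyData->SetLines(lines);

  vtkNew<vtkPolyDataMapper> mapper;
  mapper->SetInputData(polyData);

  vtkNew<vtkActor> actor;
  actor->SetMapper(mapper);
  actor->GetProperty()->SetColor(1.0, 0.0, 0.0);   // assumed color value
  actor->GetProperty()->SetLineWidth(2.0);         // assumed curve width
  renderer->AddActor(actor);                       // add the track to the visualization pipeline
}
```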
Step 2: cutting the curved substrate.
The data to be cut is the surface-layer curved-surface data of an irregular triangular mesh with an edge and point topological structure. The continuous curve data drawn by the user on the curved-surface mesh forms the boundary contour of the target cutting area. The curves forming the area to be cut are called cutting lines (in the track information record, P1 and P2 are cutting lines), which are the basic cutting units for cutting the irregular triangular-mesh curved surface, and the contour of the target cutting area is a closed area composed of one or more cutting lines connected end to end. In order to better extract the cutting information, a gray fusion method is proposed that takes the correlation of pixels in the local area into account.
The cutting process mainly includes the following steps: first, an oriented-bounding-box structure tree is established with vtkOBBTree; then collision detection is performed between the cutting line and the cut curved surface to determine the intersected triangular meshes, the intersection points are calculated by intersecting the triangular meshes, and gray-level fusion processing is performed; finally, the polygons produced on the new curved surface generated by cutting are triangulated. Schematic cutting diagrams of the triangular-mesh curved surface are shown in fig. 3a and 3b.
The specific process for cutting the curved substrate comprises the following steps:
Step 2.1: Collision detection between the cutting lines and the triangular mesh.
To cut an area to be cut, the positional relationship between the cutting line and the triangular mesh must first be determined; to make this step efficient, it is implemented with a collision detection algorithm.
In the construction of the OBB tree, a bounding box of the whole object is first established, the box is then split into two parts, and the two resulting bounding-box tree nodes become child nodes of the original bounding-box node. Collision detection is performed between the cutting line and the cut surface to determine the intersecting positions. In the intersection information td(r1, r2), r1 is a triangular mesh belonging to the triangular-mesh curved surface being cut, and r2 is a cutting line, i.e. track information picked up by mouse interaction. Collision detection thus determines, by building the oriented-bounding-box structure tree, which triangular meshes of the cut surface collide with the cutting line.
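A hedged sketch of this collision-detection step with vtkOBBTree follows; SetDataSet(), BuildLocator() and IntersectWithLine() are the standard vtkOBBTree calls, while the segment endpoints p0/p1 and the helper function name are assumptions for illustration.

```cpp
#include <vtkIdList.h>
#include <vtkNew.h>
#include <vtkOBBTree.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>

// Finds the triangles of the cut surface hit by one cutting-line segment.
void collideSegment(vtkPolyData* cutSurface, const double p0[3], const double p1[3])
{
  vtkNew<vtkOBBTree> obbTree;
  obbTree->SetDataSet(cutSurface);  // triangular-mesh surface to be cut
  obbTree->BuildLocator();          // recursively splits the bounding box into child nodes

  vtkNew<vtkPoints> hitPoints;      // intersection points along the segment
  vtkNew<vtkIdList> hitCells;       // ids of the intersected triangles
  obbTree->IntersectWithLine(p0, p1, hitPoints, hitCells);

  for (vtkIdType i = 0; i < hitCells->GetNumberOfIds(); ++i)
  {
    vtkIdType r1 = hitCells->GetId(i);
    (void)r1;  // td(r1, r2): r1 is this triangle, r2 the current cutting-line segment
  }
}
```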
Step 2.2: Cutting the triangular-mesh curved surface.
In the triangular-mesh cutting algorithm, the basic cutting unit is the cutting line. The intersecting triangular-mesh information (td1, td2, ..., tdn) is determined through the construction of the OBB tree, and the intersection points of the cutting lines with the mesh curved surface are obtained: it is first determined whether a cutting line intersects the colliding triangular mesh, and the intersection points are then determined on the intersected edges. After all triangular meshes have been cut, the intersection points and intersected edges of the cutting lines with the triangular meshes are obtained. The edge intersection points are then processed and sorted through gray-level fusion of the triangular meshes. Finally, the intersection line between the cutting line and the curved-surface mesh, i.e. the boundary triangular meshes of the area to be cut, is calculated from the ordered intersection points. The intersection line may be a single continuous curve or several continuous curves.
The idea of gray-level fusion is as follows: when the three-dimensional image is cut, the triangular patches of the three-dimensional image need gray-level processing so that the contour information can be extracted better, and the image pixels of the different gray areas can be represented by formula (6).
Denoting the three-dimensional image pixels larger than G1 and those smaller than G2 as I(G1) and I(G2) respectively, three-dimensional image gray-level fusion is performed using formula (7); the constraint condition of the gray-level fusion of the three-dimensional image is given in formula (10).
Step 2.3: Determining the internal grids of the cut boundary.
Specifically, the remaining internal mesh vertices are searched for starting from the determined initial internal mesh vertex. The cut surface is a curved surface formed by triangular meshes with a complete topological structure, so the other meshes adjacent to a mesh edge and its vertices can be obtained from the index values of the meshes to which the mesh vertices belong. Since the mesh information intersecting the cutting lines has already been determined in step 2.2, it can be determined which meshes have boundary cutting lines passing through them.
FIG. 4a shows a region of a surface to be cut bounded by four cutting lines, with point M the initial internal triangular-mesh vertex. FIG. 4b shows the result of the first vertex search, where black triangular-mesh vertices represent mesh points inside the target area and white mesh vertices represent vertices outside the area. First, point M is pushed onto a stack S. While the stack is not empty, the top element is popped and the popped vertex is marked as an internal triangular-mesh vertex (shown as a black vertex). All mesh edges connected to that vertex are then examined one by one: if the other vertex P of an examined mesh edge is not yet an internal mesh vertex and the edge is not crossed by any untraversed cutting line, P is marked as an internal mesh vertex and pushed onto the stack. Because the three edges formed by vertices P1, P2 and P3 connected to vertex M are not crossed by a cutting line, P1, P2 and P3 are set as internal triangular-mesh vertices and pushed onto the stack. If an odd number of untraversed cutting lines intersect the examined edge, vertex P is marked as an external triangular-mesh vertex; since the edge connecting vertex M and vertex P4 carries a cutting line, P4 is set as an external triangular-mesh vertex (shown as a white point). While the stack is not empty, whether the mesh edges connected to the popped top element are crossed by a cutting line is judged in the same way; when the stack becomes empty the search ends, and all internal triangular-mesh vertices obtained after the search (shown as black vertices) are given in fig. 4c.
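The stack-based search of step 2.3 could be sketched as follows. The neighbors container and edgeBlockedByCutLine helper are assumptions (the latter should encode the odd-crossing rule described above), so this is an illustration of the traversal rather than the exact implementation.

```cpp
#include <functional>
#include <stack>
#include <unordered_set>
#include <vector>

// Assumed helpers supplied by the surrounding mesh structure:
//   neighbors[v]               -> vertices sharing an edge with vertex v
//   edgeBlockedByCutLine(a, b) -> true if edge (a, b) is crossed by an odd number
//                                 of untraversed cutting lines
std::unordered_set<int> findInteriorVertices(
    int seedM,
    const std::vector<std::vector<int>>& neighbors,
    const std::function<bool(int, int)>& edgeBlockedByCutLine)
{
  std::unordered_set<int> interior;  // "black" internal vertices
  std::stack<int> s;
  s.push(seedM);
  interior.insert(seedM);

  while (!s.empty())
  {
    int v = s.top();
    s.pop();
    for (int p : neighbors[v])
    {
      if (interior.count(p))
        continue;                    // already marked internal
      if (edgeBlockedByCutLine(v, p))
        continue;                    // edge crosses the cut boundary: p stays external
      interior.insert(p);            // mark p internal (black) and keep expanding
      s.push(p);
    }
  }
  return interior;
}
```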
Step 2.4: Re-triangulation after cutting.
After the triangle cutting is completed, a new curved surface formed by the boundary triangular meshes determined by the intersection lines is generated according to the original topological relationship on the initial triangular-mesh curved surface. Because several intersection points are added along the edge of the new surface, polygons are produced there, and these polygons must be re-triangulated to keep the triangular-mesh surface topology complete.
The application embodiments of the present disclosure re-triangulate the new surface after the cut using vtkSurfaceReconstructionFilter. This surface-reconstruction algorithm maintains the curved-surface topology well, is suitable for real-time processing, and is highly interactive. vtkSurfaceReconstructionFilter implements implicit surface reconstruction from a three-dimensional point cloud: it samples on a grid and computes a signed distance measure from the points to the surface, and the surface profile of the zero isosurface of that grid is then extracted with vtkContourFilter.
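A sketch of this re-triangulation pipeline is shown below; the function name, neighborhood size and sample spacing are assumed values, while the filter classes and calls are the standard VTK API.

```cpp
#include <vtkContourFilter.h>
#include <vtkNew.h>
#include <vtkPolyData.h>
#include <vtkSmartPointer.h>
#include <vtkSurfaceReconstructionFilter.h>

// Rebuilds a triangulated surface from the points of the cut surface.
vtkSmartPointer<vtkPolyData> retriangulate(vtkPolyData* cutSurfacePoints)
{
  vtkNew<vtkSurfaceReconstructionFilter> reconstruct;  // implicit (signed-distance) reconstruction
  reconstruct->SetInputData(cutSurfacePoints);
  reconstruct->SetNeighborhoodSize(20);   // assumed parameter value
  reconstruct->SetSampleSpacing(0.005);   // assumed parameter value

  vtkNew<vtkContourFilter> contour;       // extracts the zero isosurface of the distance grid
  contour->SetInputConnection(reconstruct->GetOutputPort());
  contour->SetValue(0, 0.0);
  contour->Update();

  return contour->GetOutput();
}
```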
After the mouse contour pick-up on the volume data is completed, the three-dimensional texture used for rendering is updated while the texture representing the transfer function is kept, and rendering is performed again, which ensures consistent rendering effects before and after cutting. The rendering algorithm is completed in a single rendering pass, so rendering remains highly real-time. An arbitrarily shaped volume cutting area can be determined by interactively controlling the related orthogonal lines on the three views, achieving fast and accurate cutting.
Step 3: Substrate surface extraction: generating a curved surface model.
Specifically, this step may include:
Step 3.1: Generating an initial curved surface.
In step 2, the target-region curved-surface model, that is, the inner surface of the model matrix, is obtained by interactive mouse cutting on the facial-skin mesh-model data object, and further processing is required to generate the curved-surface model. In the application embodiment of the present disclosure, a curved-surface matrix model is obtained by moving the inner curved surface along the direction of the point normal vectors, and a complete curved-surface matrix model is then generated by reconstructing the point-cloud data of the inner and outer layers of the matrix.
In VTK, the filter vtkPolyDataNormals() can compute point normal vectors and cell normal vectors for the cell patches. Fig. 5a shows a visual display of the three-dimensional mesh data of the facial skin; fig. 5b shows the normal-vector direction of each mesh vertex of the curved-surface data, represented with three-dimensional arrow glyphs; fig. 5c shows the cut skin curved-surface model; and fig. 5d shows the normal-vector directions of the cut skin curved-surface model, represented with cone arrows.
After the correct normal vectors of the curved surface are obtained by calculation, the matrix structure is obtained by vector offsetting. First, the normal vector of each mesh vertex is extracted and stored in the variable value, and the mesh-vertex coordinates are stored in the three-dimensional array point; the normal-vector offset factor is then calculated from the set offset distance dis as factor = invsqrt((dis) / ((value[0]*value[0]) + (value[1]*value[1]) + (value[2]*value[2]))). Each mesh vertex is then offset according to the normal-vector factor and the coordinates of the point. Finally, a point geometric structure and a cell topological structure are established for the offset data-point set.
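A sketch of this offset step follows. Because the translated expression above is ambiguous, the sketch assumes factor = dis / |n|, i.e. each vertex is moved exactly the distance dis along its normal; the function name is also an assumption, while vtkPolyDataNormals and the point/normal accessors are the standard VTK API.

```cpp
#include <vtkDataArray.h>
#include <vtkNew.h>
#include <vtkPointData.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkPolyDataNormals.h>
#include <vtkSmartPointer.h>
#include <cmath>

// Offsets every vertex of the inner surface along its point normal by dis.
vtkSmartPointer<vtkPolyData> offsetAlongNormals(vtkPolyData* innerSurface, double dis)
{
  vtkNew<vtkPolyDataNormals> normals;
  normals->SetInputData(innerSurface);
  normals->ComputePointNormalsOn();   // point normal vectors
  normals->ComputeCellNormalsOn();    // cell normal vectors
  normals->Update();

  vtkPolyData* surf = normals->GetOutput();
  vtkDataArray* normalArray = surf->GetPointData()->GetNormals();

  vtkNew<vtkPoints> offsetPoints;
  for (vtkIdType i = 0; i < surf->GetNumberOfPoints(); ++i)
  {
    double point[3], value[3];
    surf->GetPoint(i, point);         // mesh-vertex coordinates
    normalArray->GetTuple(i, value);  // vertex normal vector
    // Assumed: factor scales the normal so the offset length is exactly dis.
    double factor = dis / std::sqrt(value[0] * value[0] + value[1] * value[1] + value[2] * value[2]);
    offsetPoints->InsertNextPoint(point[0] + factor * value[0],
                                  point[1] + factor * value[1],
                                  point[2] + factor * value[2]);
  }

  auto outerSurface = vtkSmartPointer<vtkPolyData>::New();  // offset geometry, same topology
  outerSurface->SetPoints(offsetPoints);
  outerSurface->SetPolys(surf->GetPolys());
  return outerSurface;
}
```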
Step 3.2: parameter values for the geometry data points are determined. The invention adopts the shortest distance mapping relation when the parameter value of the data point is obtained, namely, the point with the shortest distance from the geometric structure data point on the initial curved surface is used as the corresponding point of the data point on the initial curved surface, and the normal vector of the point is the parameter value of the data point on the initial curved surface.
Let p denote the data point, q denote the point on the initial surface that is the shortest distance to p, and the vector pq is in the normal direction of q along the initial surface, so q satisfies equation (11).
Let l denote the distance from the data point p to the corresponding point q on the initial surface; equation (11) can then be written as equation (12). The parameter values of the geometric data points are solved, and the data points are corrected, according to formulas (11) and (12).
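The shortest-distance mapping could be realised with vtkCellLocator, as in the following sketch (function name assumed): for each data point p it returns the closest point q on the initial surface and the distance l used in formulas (11) and (12).

```cpp
#include <vtkCellLocator.h>
#include <vtkNew.h>
#include <vtkPolyData.h>
#include <cmath>

// Finds, for data point p, the closest point q on the initial surface and the distance l.
void closestPointOnSurface(vtkPolyData* initialSurface, const double p[3], double q[3], double& l)
{
  vtkNew<vtkCellLocator> locator;
  locator->SetDataSet(initialSurface);
  locator->BuildLocator();

  vtkIdType cellId;
  int subId;
  double dist2;
  locator->FindClosestPoint(p, q, cellId, subId, dist2);  // q is the foot point on the surface
  l = std::sqrt(dist2);                                   // distance l of p from the initial surface
}
```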
Step 3.3: topology construction.
After the parameter values of the data points are obtained, the curved-surface control points can be established, and the complete topological structure is then constructed. Implementing this topology construction involves local data extraction with several classes. vtkExtractSelection extracts a subset of cells and points from a vtkDataSet: its first input port is given the data object, and its second input port is the selection description, a vtkSelection. Based on the contents of the vtkSelection, a vtkSelector is created for each node to identify the selected elements. A vtkSelectionNode object is typically used together with the vtkSelection object; the vtkSelectionNode determines the type of data to be extracted, with the element data type set through SetFieldType (vtkSelectionNode::CELL) and the index-number data type set through SetContentType (vtkSelectionNode::INDICES). A vtkSelection is essentially an array that stores vtkSelectionNode entries. To perform the extraction, the triangle-cell information is supplied in the selection as a vtkIdTypeArray, and the cell data to be extracted is then filtered by calling SetInputConnection() to establish the pipeline connection; the independent dataset for the selection is processed with SetInputData(). The selected region is then inverted using INVERSE(). Finally, to maintain a stable topological structure, the topology of the object is restored by calling the BuildCells() and BuildLinks() functions.
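The cell-extraction pipeline described above might be wired as in this sketch; the function signature is an assumption, the contents of the id array are assumed to come from the earlier cutting steps, and the selection classes and keys are the standard VTK API.

```cpp
#include <vtkExtractSelection.h>
#include <vtkIdTypeArray.h>
#include <vtkInformation.h>
#include <vtkNew.h>
#include <vtkPolyData.h>
#include <vtkSelection.h>
#include <vtkSelectionNode.h>

// Extracts (the inverse of) the triangle cells listed in cellIds from the surface.
void extractCells(vtkPolyData* surface, vtkIdTypeArray* cellIds)
{
  vtkNew<vtkSelectionNode> node;
  node->SetFieldType(vtkSelectionNode::CELL);                  // element data type
  node->SetContentType(vtkSelectionNode::INDICES);             // index-number data type
  node->SetSelectionList(cellIds);                             // triangle ids to extract
  node->GetProperties()->Set(vtkSelectionNode::INVERSE(), 1);  // invert the selected region

  vtkNew<vtkSelection> selection;
  selection->AddNode(node);

  vtkNew<vtkExtractSelection> extract;
  extract->SetInputData(0, surface);     // first port: the data object
  extract->SetInputData(1, selection);   // second port: the selection description
  extract->Update();

  // BuildCells()/BuildLinks() would then be called on the rebuilt vtkPolyData
  // to restore a stable topological structure.
}
```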
Finally, the produced curved-surface model is exported in STL format using the data-conversion output interface provided by VTK, and a guide template is manufactured by 3D printing.
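The STL export itself reduces to a few lines with vtkSTLWriter; the function name and output file name below are assumptions.

```cpp
#include <vtkNew.h>
#include <vtkPolyData.h>
#include <vtkSTLWriter.h>

// Writes the finished curved-surface model to an STL file for 3D printing.
void writeStl(vtkPolyData* surfaceModel)
{
  vtkNew<vtkSTLWriter> writer;
  writer->SetFileName("guide_template.stl");  // assumed output path
  writer->SetInputData(surfaceModel);
  writer->Write();
}
```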
In summary, the application embodiments of the present disclosure have the following advantages:
(1) By adopting the mouse pick-up approach, track information can be obtained quickly and the target area to be cut can be drawn in real time; by applying collision detection, gray-level fusion and re-triangulation to the three-dimensional image of the cutting area, the model can be extracted rapidly and a personalized curved-surface model can be obtained accurately and quickly.
(2) Analysis and comparison of the model cutting and extraction results show that the mouse pick-up and cutting method provided by the invention can help a user obtain a three-dimensional model quickly and accurately, and experiments demonstrate the effectiveness and applicability of the method.
It should be noted that, in medical application scenarios, using a three-dimensionally modeled guide plate to assist surgery can improve accuracy and safety, reduce intraoperative risk and achieve good clinical results, and is a technique worth promoting and applying widely. However, the lesion site can only be observed by printing a physical object, so a printed model cannot be restored after a single cutting rehearsal and must be printed again, wasting printing material and printing time; in the related art, surgical planning and pre-operative rehearsal therefore often suffer from large material consumption and long printing times. The method of interactively cutting and extracting a three-dimensional image has clear advantages in saving processing resources and improving efficiency and safety, so the technique is very likely to be adopted. Meanwhile, for applications such as transforming facial features and applying facial makeup, the mouse pick-up method provided by the application embodiments of the invention can cut and extract the feature points of the human face, acquire image data of the region of interest and apply makeup, and therefore has considerable application prospects in the fields of face reshaping and makeup.
In order to achieve the above image processing method, an embodiment of the present disclosure further provides an image processing apparatus. As shown in fig. 6, the apparatus 600 may include:
the computing unit 601 is configured to obtain a boundary contour of a region to be cut in the three-dimensional image, so as to obtain a target cutting line of the region to be cut; the three-dimensional image comprises a plurality of Mesh grids;
a determining unit 602, configured to perform collision test and gray level fusion on the to-be-cut area based on the target cutting line, and generate a first curved surface;
and the processing unit 603 is configured to generate a curved surface model of the region to be cut based on the first curved surface.
In an embodiment, the determining unit 602 may specifically be configured to:
performing collision test on the target cutting line and the Mesh grids, and determining a plurality of first Mesh grids intersecting the target cutting line from the Mesh grids;
determining intersection points of the first Mesh grids and the target cutting lines to obtain first intersection points;
gray fusion is carried out on the Mesh grids to obtain a fusion result;
sequencing the first intersection points based on the gray fusion result to obtain a sequencing result;
And generating a first curved surface based on the sorting result.
In an embodiment, the determining unit 602 may specifically be configured to:
generating intersecting lines of the target cutting lines and the Mesh grids based on the sorting result to obtain first intersecting lines;
and determining the Mesh grid of the area to be cut based on the first intersecting line, and generating a first curved surface.
In an embodiment, the determining unit 602 may specifically be configured to:
determining a grid area positioned in the area to be cut in each first Mesh grid based on the first intersecting line to obtain a first area of each first Mesh grid; the first area is of an N-sided shape, and N is an integer greater than 3;
triangularizing the first area of each first Mesh grid to obtain a plurality of corresponding second Mesh grids;
determining a plurality of internal grids of the region to be cut;
and generating a first curved surface based on the plurality of second Mesh grids and the plurality of internal grids.
In an embodiment, the determining unit 602 may specifically be configured to:
determining whether a third Mesh grid adjacent to the current internal grid meets a first preset condition; the first preset condition includes that the vertexes of the third Mesh grid do not belong to the vertexes of the internal grid, and the third Mesh grid does not intersect with the target cutting line;
And under the condition that the third Mesh grid meets the first preset condition, determining the third Mesh grid as an internal grid.
In an embodiment, the computing unit 601 may specifically be configured to:
performing mouse marking on the boundary position of the region to be cut in the three-dimensional image to obtain coordinate values of a plurality of marking points and obtain coordinate information;
acquiring coordinate tracks of the mouse among different coordinate points to obtain track information;
and obtaining the target cutting line of the region to be cut based on the coordinate information and the track information.
In an embodiment, the processing unit 603 may specifically be configured to:
performing point normal vector movement on the first curved surface to generate a model matrix;
and generating a curved surface model of the region to be cut based on the model matrix.
In an embodiment, the processing unit 603 may specifically be configured to:
and carrying out point cloud reconstruction on the inner and outer layer point cloud data of the model matrix to obtain a curved surface model of the region to be cut.
It should be noted that: in the image processing apparatus provided in the above embodiment, only the division of each program module is used for illustration, and in practical application, the above processing allocation may be performed by different program modules according to needs, that is, the internal structure of the apparatus is divided into different program modules, so as to complete all or part of the above processing. In addition, the image processing apparatus and the image processing method provided in the foregoing embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments and are not described herein again.
Based on the hardware implementation of the program modules, and in order to implement the method of the embodiments of the present disclosure, the embodiments of the present disclosure further provide an electronic device, as shown in fig. 7, the electronic device 700 includes:
a communication interface 701 capable of information interaction with other devices;
a processor 702, connected to the communication interface 701, for implementing information interaction with other devices, and configured to execute the methods provided by one or more of the above technical solutions when executing a computer program;
a memory 703, said computer program being stored on said memory 703.
Specifically, the processor 702 may be configured to:
obtaining the boundary contour of the region to be cut in the three-dimensional image to obtain a target cutting line of the region to be cut; the three-dimensional image comprises a plurality of Mesh grids;
based on the target cutting line, performing collision test and gray fusion on the to-be-cut area to generate a first curved surface;
and generating a curved surface model of the region to be cut based on the first curved surface.
In one embodiment, the processor 702 may be specifically configured to:
performing collision test on the target cutting line and the Mesh grids, and determining a plurality of first Mesh grids intersecting the target cutting line from the Mesh grids;
Determining intersection points of the first Mesh grids and the target cutting lines to obtain first intersection points;
gray fusion is carried out on the Mesh grids to obtain a fusion result;
sequencing the first intersection points based on the gray fusion result to obtain a sequencing result;
and generating a first curved surface based on the sorting result.
In one embodiment, the processor 702 may be specifically configured to:
generating intersecting lines of the target cutting lines and the Mesh grids based on the sorting result to obtain first intersecting lines;
and determining the Mesh grid of the area to be cut based on the first intersecting line, and generating a first curved surface.
In one embodiment, the processor 702 may be specifically configured to:
determining a grid area positioned in the area to be cut in each first Mesh grid based on the first intersecting line to obtain a first area of each first Mesh grid; the first area is of an N-sided shape, and N is an integer greater than 3;
triangularizing the first area of each first Mesh grid to obtain a plurality of corresponding second Mesh grids;
determining a plurality of internal grids of the region to be cut;
and generating a first curved surface based on the plurality of second Mesh grids and the plurality of internal grids.
In one embodiment, the processor 702 may be specifically configured to:
determining whether a third Mesh grid adjacent to the current internal grid meets a first preset condition; the first preset condition includes that the vertexes of the third Mesh grid do not belong to the vertexes of the internal grid, and the third Mesh grid does not intersect with the target cutting line;
and under the condition that the third Mesh grid meets the first preset condition, determining the third Mesh grid as an internal grid.
In one embodiment, the processor 702 may be specifically configured to:
performing mouse marking on the boundary position of the region to be cut in the three-dimensional image to obtain coordinate values of a plurality of marking points and obtain coordinate information;
acquiring coordinate tracks of the mouse among different coordinate points to obtain track information;
and obtaining the target cutting line of the region to be cut based on the coordinate information and the track information.
In one embodiment, the processor 702 may be specifically configured to:
performing point normal vector movement on the first curved surface to generate a model matrix;
and generating a curved surface model of the region to be cut based on the model matrix.
In one embodiment, the processor 702 may be specifically configured to:
And carrying out point cloud reconstruction on the inner and outer layer point cloud data of the model matrix to obtain a curved surface model of the region to be cut.
It should be noted that: the specific processing of the processor 702 may be understood with reference to the methods described above.
Of course, in actual practice, the various components in electronic device 700 would be coupled together via bus system 704. It is appreciated that bus system 704 is used to enable connected communications between these components. The bus system 704 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration, the various buses are labeled as bus system 704 in fig. 7.
The memory 703 in the present embodiment is used to store various types of data to support the operation of the electronic device 700. Examples of such data include: any computer program for operating on the electronic device 700.
The method disclosed in the embodiments of the present application may be applied to the processor 702, or implemented by the processor 702. The processor 702 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the method described above may be performed by integrated logic circuitry in hardware or instructions in software in the processor 702. The processor 702 described above may be a general purpose processor, a digital signal processor (DSP, digital Signal Processor), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like. The processor 702 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present application. The general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed in the embodiments of the present application may be directly embodied in a hardware decoding processor or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium in a memory 703, and the processor 702 reads information in the memory 703 and performs the steps of the method described above in connection with its hardware.
In an exemplary embodiment, the electronic device 700 may be implemented by one or more application specific integrated circuits (ASIC, application Specific Integrated Circuit), DSPs, programmable logic devices (PLD, programmable Logic Device), complex programmable logic devices (CPLD, complex Programmable Logic Device), field-programmable gate arrays (FPGA, field-Programmable Gate Array), general purpose processors, controllers, microcontrollers (MCU, micro Controller Unit), microprocessors (Microprocessor), or other electronic components for performing the aforementioned methods.
It is to be understood that the memory (memory 703) of embodiments of the present application may be either volatile memory or nonvolatile memory, and may include both volatile and nonvolatile memory. The nonvolatile memory may be Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Ferroelectric Random Access Memory (FRAM), Flash Memory, magnetic surface memory, an optical disk, or Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk memory or tape memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory described in the embodiments of the present application is intended to comprise, without being limited to, these and any other suitable types of memory.
Embodiments of the present disclosure also propose a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the image processing method described in the above embodiments of the present disclosure.
Embodiments of the present disclosure also provide a computer program product comprising a computer program which, when executed by a processor, performs the image processing method described in the above embodiments of the present disclosure.
The embodiments of the present disclosure also provide a chip, whose schematic structural diagram is shown in fig. 8. The chip shown in fig. 8 includes a processor 801 and an interface circuit 802; there may be one or more processors 801 and one or more interface circuits 802.
Optionally, the chip further includes a memory for storing the necessary computer programs and data. The interface circuit 802 is configured to receive signals from the memory and send signals to the processor 801, the signals including computer instructions stored in the memory which, when executed by the processor 801, cause the electronic device to perform the image processing method described in the above embodiments of the present disclosure.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
In the description of the present specification, reference is made to the terms "one embodiment," "some embodiments," "illustrative embodiments," "examples," "specific examples," or "some examples," etc., meaning that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., a ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, system that includes a processing module, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: electrical connections (control methods) with one or more wires, portable computer cartridges (magnetic devices), RAM, ROM, EPROM or flash memory, optical fiber devices, and portable Compact Disc Read Only Memory (CDROM). Additionally, the computer-readable medium may even be paper or other suitable medium upon which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of embodiments of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, and the program may be stored in a computer readable storage medium, where the program when executed includes one or a combination of the steps of the method embodiments.
Furthermore, functional units in various embodiments of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented as software functional modules and sold or used as a stand-alone product. The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that variations, modifications, alternatives, and variations of the above embodiments may be made by those of ordinary skill in the art within the scope of the invention.

Claims (12)

1. An image processing method, the method comprising:
acquiring a boundary contour of a region to be cut from a three-dimensional image by using a mouse pickup method to obtain a target cutting line of the region to be cut; the three-dimensional image comprises a plurality of Mesh grids;
based on the target cutting line, performing collision test and gray fusion on the to-be-cut area to generate a first curved surface;
and generating a curved surface model of the region to be cut based on the first curved surface.
2. The method of claim 1, wherein the performing collision test and gray scale fusion on the region to be cut based on the target cut line to generate a first curved surface comprises:
performing collision test on the target cutting line and the Mesh grids, and determining a plurality of first Mesh grids intersecting the target cutting line from the Mesh grids;
Determining intersection points of the first Mesh grids and the target cutting lines to obtain first intersection points;
gray fusion is carried out on the Mesh grids to obtain a fusion result;
sequencing the first intersection points based on the gray fusion result to obtain a sequencing result;
and generating a first curved surface based on the sorting result.
3. The method of claim 2, wherein generating a first curved surface based on the sorting result comprises:
generating intersecting lines of the target cutting lines and the Mesh grids based on the sorting result to obtain first intersecting lines;
and determining the Mesh grid of the area to be cut based on the first intersecting line, and generating a first curved surface.
4. A method according to claim 3, wherein the determining the Mesh grid of the area to be cut based on the first intersection line, generating a first curved surface, comprises:
determining a grid area positioned in the area to be cut in each first Mesh grid based on the first intersecting line to obtain a first area of each first Mesh grid; the first area is of an N-sided shape, and N is an integer greater than 3;
triangularizing the first area of each first Mesh grid to obtain a plurality of corresponding second Mesh grids;
Determining a plurality of internal grids of the region to be cut;
and generating a first curved surface based on the plurality of second Mesh grids and the plurality of internal grids.
5. The method of claim 4, wherein the determining a plurality of internal grids of the area to be cut comprises:
determining whether a third Mesh grid adjacent to the current internal grid meets a first preset condition; the first preset condition includes that the vertexes of the third Mesh grid do not belong to the vertexes of the internal grid, and the third Mesh grid does not intersect with the target cutting line;
and under the condition that the third Mesh grid meets the first preset condition, determining the third Mesh grid as an internal grid.
6. The method according to any one of claims 1 to 5, wherein the acquiring the boundary contour of the area to be cut in the three-dimensional image by using the mouse pickup method to obtain the target cutting line of the area to be cut includes:
performing mouse marking on the boundary position of the region to be cut in the three-dimensional image to obtain coordinate values of a plurality of marking points and obtain coordinate information;
acquiring coordinate tracks of the mouse among different coordinate points to obtain track information;
And obtaining the target cutting line of the region to be cut based on the coordinate information and the track information.
7. The method of any one of claims 1 to 5, wherein the generating a curved surface model of the region to be cut based on the first curved surface comprises:
performing point normal vector movement on the first curved surface to generate a model matrix;
and generating a curved surface model of the region to be cut based on the model matrix.
8. The method of claim 7, wherein the generating a curved surface model of the region to be cut based on the model matrix comprises:
and carrying out point cloud reconstruction on the inner and outer layer point cloud data of the model matrix to obtain a curved surface model of the region to be cut.
9. An image processing apparatus, characterized in that the apparatus comprises:
the computing unit is used for acquiring the boundary outline of the area to be cut from the three-dimensional image by utilizing a mouse pickup method to obtain a target cutting line of the area to be cut; the three-dimensional image comprises a plurality of Mesh grids;
the determining unit is used for carrying out collision test and gray fusion on the to-be-cut area based on the target cutting line to generate a first curved surface;
And the processing unit is used for generating a curved surface model of the area to be cut based on the first curved surface.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 8.
11. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1 to 8.
12. A chip comprising one or more interface circuits and one or more processors; the interface circuit is for receiving a signal from a memory of an electronic device and sending the signal to the processor, the signal comprising computer instructions stored in the memory, which when executed by the processor, cause the electronic device to perform the method of any one of claims 1 to 8.
CN202311567456.1A 2023-11-22 2023-11-22 Image processing method, device, electronic equipment, chip and storage medium Pending CN117635634A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311567456.1A CN117635634A (en) 2023-11-22 2023-11-22 Image processing method, device, electronic equipment, chip and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311567456.1A CN117635634A (en) 2023-11-22 2023-11-22 Image processing method, device, electronic equipment, chip and storage medium

Publications (1)

Publication Number Publication Date
CN117635634A true CN117635634A (en) 2024-03-01

Family

ID=90029686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311567456.1A Pending CN117635634A (en) 2023-11-22 2023-11-22 Image processing method, device, electronic equipment, chip and storage medium

Country Status (1)

Country Link
CN (1) CN117635634A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination