CN114418992A - Interactive 2D and 3D medical image registration parameter automatic generation method

Info

Publication number: CN114418992A
Authority: CN (China)
Prior art keywords: dimensional, model, image, window, data
Legal status: Pending
Application number: CN202210057988.XA
Other languages: Chinese (zh)
Inventors: 屈磊, 丁鹏, 吴军, 苗永春, 邹恒东, 尚宏伟
Current Assignee: Anhui University
Original Assignee: Anhui University
Application filed by Anhui University
Priority to CN202210057988.XA
Publication of CN114418992A


Classifications

    • G06T7/0012: Image analysis; inspection of images, e.g. flaw detection; biomedical image inspection
    • G06T15/005: 3D [three-dimensional] image rendering; general purpose rendering architectures
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/337: Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods involving reference images or patches
    • G06T2207/10012: Image acquisition modality; stereo images


Abstract

The invention relates to a method for automatically generating registration parameters for interactive 2D and 3D medical images, which addresses the poor search precision and low search efficiency of initial registration parameters in the prior art. The invention comprises the following steps: loading three-dimensional image data; rendering and reconstructing the three-dimensional model in a window, mapping the three-dimensional model to two dimensions, and displaying the real-time mapped image in alignment with the X-ray image to be registered; loading the two-dimensional image to be registered and drawing it in the window as a two-dimensional texture; and dragging the mouse to align the three-dimensional model with the two-dimensional image so as to obtain suitable registration parameters. The invention takes into account the difference between the quality of the image drawn in the window after maximum intensity projection of the three-dimensional volume data and the quality of the X-ray image to be registered; it can align the two images with high quality, obtain a suitable set of initial registration parameters, and print those parameters to the console.

Description

Interactive 2D and 3D medical image registration parameter automatic generation method
Technical Field
The invention relates to the technical field of medical image processing, and in particular to a method for automatically generating interactive 2D and 3D medical image registration parameters.
Background
With the advancement of medical imaging equipment, images containing accurate anatomical information, such as CT and MRI, can be acquired; at the same time, images containing functional information, such as SPECT, can also be acquired. However, diagnosis by observing different images separately requires spatial imagination and subjective experience on the part of the doctor. With a correct image registration method, complementary information can be accurately fused into a single image, making it easier and more accurate for doctors to observe lesions and structures from multiple angles. Registration of dynamic images acquired at different times also allows quantitative analysis of changes in lesions and organs, making medical diagnosis, surgical planning, and radiotherapy planning more accurate and reliable. Registration of medical images across modalities and dimensions has therefore become a topical and pioneering subject of current medical image informatics research.
In medical image registration, finding a good set of initial registration parameters greatly simplifies the subsequent registration process. Initial registration, i.e. coarse registration, roughly aligns the images in preparation for the subsequent fine registration. Finding a suitable set of initial registration parameters, however, is a difficult problem, so an efficient, high-quality method for optimizing the initial registration parameters is of practical importance.
In previous research on 2D and 3D medical image registration, the main methods fall roughly into two classes: feature-based registration methods and gray-scale-based registration methods.
Feature-based registration methods. A feature-based registration method first preprocesses the images to be registered, i.e. performs feature extraction, and then matches the two images using the extracted features. Since many features are available in an image, a number of feature-based methods have been developed: (1) Registration based on point features: point features are among the most common image features in image registration and are divided into external and internal feature points. (2) Registration based on straight-line features: line segments are another feature that is easy to extract from an image. The Hough transform is an effective method for extracting straight lines from an image. It transforms a curve or straight line of a given shape in the original image into a point in a transform space: all points on the curve or line of the given shape converge to one position in the transform domain and form a peak. The problem of detecting a line or curve in the original image thus becomes the problem of finding a peak in the transform space. Correctly establishing the correspondence between the line segments extracted from the two images remains the key point, and the difficulty, of this method. By jointly considering the slopes of the straight segments and the positions of their endpoints, a histogram of these indices can be generated, and matching straight segments can be found from the convergent peaks of the histogram. (3) Registration based on contour and curve features. (4) Registration based on surface features. The most typical algorithm is the "head-and-hat" algorithm: a surface model extracted from one image is called the "head", and a set of points extracted from the contours of the other image is called the "hat". The hat point set is transformed onto the head by a rigid-body transformation or, optionally, an affine transformation, and an optimization algorithm then minimizes the mean square of the distances from each hat point to the head surface.
Gray-scale-based registration methods. These methods perform registration directly from the gray-level information of the images, avoiding errors introduced by segmentation; they offer high precision, strong robustness, and fully automatic registration without preprocessing. Gray-scale-based registration methods fall mainly into two classes: one computes representative quantities such as proportion and orientation directly from the image gray levels; the other uses the full gray-level information throughout the registration process. The first class is represented by the moment and principal-axis method; the second is generally referred to as voxel-similarity methods. (1) The moment and principal-axis method computes the centroid and principal axes of the two images using the mass-distribution principle of classical mechanics, and then registers the images by translation, rotation, and other transformations. With this approach, an image can be modeled as a point distribution over an elliptical region, described by the first and second moments of the point positions. The method is sensitive to missing data and requires that the entire object appear in both images. Overall its registration accuracy is poor, so it is currently used mostly for coarse registration, initially aligning the two images to reduce the search steps of the subsequent main registration method. (2) Voxel-similarity methods are currently the more heavily researched class. Because they use all the gray-level information in the image, they are generally more stable and can achieve quite accurate results. A further advantage is that they are fully automatic and require no special preprocessing. However, they require a large amount of complex computation, and have therefore only recently come into practical use.
Therefore, in the field of 2D and 3D medical image registration, gray-scale-based registration methods have become the focus of current research, and their accuracy has surpassed that of feature-based methods. Within gray-scale-based registration, how to find a suitable set of initial registration parameters in a short time, so as to facilitate the subsequent fine registration, has become a technical problem in urgent need of a solution.
Disclosure of Invention
The invention aims to overcome the poor search precision and low search efficiency of initial registration parameters in the prior art, and provides an interactive method for automatically generating 2D and 3D medical image registration parameters to solve these problems.
To achieve this purpose, the technical scheme of the invention is as follows:
An interactive method for automatically generating 2D and 3D medical image registration parameters comprises the following steps:
11) loading three-dimensional image data: acquiring the three-dimensional image data of the interactive medical image to be registered;
12) rendering and reconstructing the three-dimensional model in a window, mapping the three-dimensional model to two dimensions, and displaying the real-time mapped image in alignment with the X-ray image to be registered;
13) loading the two-dimensional image to be registered and drawing it in the window as a two-dimensional texture: reading the corresponding two-dimensional image, converting it into two-dimensional texture data by texture mapping, mapping texels in texture space to pixels in window space, and drawing the image in the rendering window;
14) dragging the mouse to align the three-dimensional model with the two-dimensional image so as to obtain suitable registration parameters: using mouse interaction to drag the three-dimensional model through rotation and translation operations until the three-dimensional model and the two-dimensional image are aligned under visual observation, at which point a set of registration parameters is automatically generated.
Rendering and reconstructing the three-dimensional model in the window, mapping it to two dimensions, and displaying the real-time mapped image in alignment with the X-ray image to be registered comprises the following steps:
21) processing the three-dimensional image data into data suitable for the OpenGL rendering pipeline: for the input three-dimensional slice image sequences, i.e. DICOM data, writing a script with Python's pydicom library to convert the DICOM data into binary files suitable for the system, and storing the remaining DICOM data from the binary files in a C++ container;
22) loading the data into the real-time rendering pipeline: passing the binary data into the rendering pipeline through the corresponding OpenGL functions and the fixed rendering flow;
23) drawing the two-dimensional window: projecting the corresponding three-dimensional CT image with a maximum intensity projection (MIP) algorithm so as to display it in the window as a volume rendering;
24) designing the rotation interaction of the three-dimensional model with the arcball algorithm: following the arcball idea, storing the information of each mouse-driven rotation of the model in a quaternion, and controlling the rotation of the model by converting the quaternion into Euler angles and a rotation matrix;
25) setting the mouse-translation interaction of the three-dimensional model: setting the translation interaction of the model through window-coordinate calculation and QT mouse interaction;
26) using the interaction results to control the model and render the display in real time.
Processing the three-dimensional image data into data suitable for the OpenGL rendering pipeline comprises the following steps:
31) reading the DICOM data in the release folder with the Python script; opening a cmd console and entering a command to convert the DICOM data into binary bin files of a specific format for the subsequent rendering process, and storing them under the corresponding folder;
32) traversing all slice data with a for loop, formatting each data path into a character string with the C++ sprintf_s function, reading the series of data under that path, and opening each binary file with the C++ fopen_s function;
33) reading the information in each binary bin file with the C++ fread_s function, including the position of the slice in the image sequence, the pixel spacing of the slice image along the X axis, the pixel spacing along the Y axis, and the width and height of the slice; storing the remaining DICOM data in a container, and resizing the container to prevent data overflow;
34) after the data are read, sorting all slice data by slice position;
35) loading the DICOM-form data of the three-dimensional volume through the binary file information read above.
Loading the data into the real-time rendering pipeline comprises the following steps:
41) the rendering pipeline calls a function to generate a three-dimensional texture object, and the three-dimensional texture is bound in the rendering pipeline;
42) performing three-dimensional texture mapping, where the parameters of the mapping function are the width and height of each two-dimensional slice image and the depth of the data, and applying texture filtering to the mapped three-dimensional texture;
43) generating and setting the vertex data attributes, and binding the vertex data in the rendering pipeline;
44) writing the OpenGL shaders: the vertex shader, executed once for each vertex sent to the GPU, converts the three-dimensional coordinates of each vertex in virtual space into the two-dimensional coordinates displayed in the window, together with depth information for the z-buffer; the fragment shader computes the color and other attributes of each pixel; the written shaders are compiled and linked, and deleted after compilation is finished;
45) passing the loaded three-dimensional texture into the shader and drawing the three-dimensional model through texture sampling, completing the loading.
Drawing the two-dimensional window comprises the following steps:
51) modifying the ray casting algorithm in the shader according to the principle of maximum intensity projection, so as to implement the MIP function;
52) obtaining, with OpenGL's glGetUniformLocation function, the location of the maximum intensity value, the location of the texture, the location of the camera position parameter, and the location of the model-view-projection (MVP) transformation matrix in the MIP shader;
53) loading the position parameters of the volume data in space one by one through the corresponding locations, and then loading the three-dimensional texture map;
54) calling the MIP shader to project the loaded three-dimensional volume data, so that the maximum-intensity-projected model is drawn in the two-dimensional window.
Designing the rotation interaction of the three-dimensional model with the arcball algorithm comprises the following steps:
61) mapping the two-dimensional window coordinates after mouse interaction: imagining a unit hemisphere centered on the window, and adjusting the two-dimensional window coordinates to the range [-1, 1]
by the formulas pt.x = (pt.x * AdjustWidth) - 1.0f and pt.y = 1.0f - (pt.y * AdjustHeight), where pt is the defined three-dimensional coordinate, AdjustWidth is the scaling factor for the width, and AdjustHeight is the scaling factor for the height; AdjustWidth = 1.0f / ((NewWidth - 1.0f) * 0.5f) and AdjustHeight = 1.0f / ((NewHeight - 1.0f) * 0.5f), where NewWidth and NewHeight are the width and height of the two-dimensional window;
62) mapping the coordinates of the two window points onto the unit hemisphere by the mapping formula: if a two-dimensional window coordinate does not lie on the unit hemisphere centered on the window, it is scaled onto the hemisphere with the scale factor norm = 1.0 / FuncSqrt(length), where length = (pt.x * pt.x) + (pt.y * pt.y); the two-dimensional coordinates are thereby mapped to two points in hemisphere space; if a two-dimensional coordinate lies on the unit hemisphere, the Z coordinate pt.z is computed from the X coordinate pt.x and the Y coordinate pt.y by pt.z = FuncSqrt(1.0f - length), with length = (pt.x * pt.x) + (pt.y * pt.y);
63) generating a start point when the left mouse button is pressed: the coordinate values at the press are mapped by the above formulas into a three-dimensional coordinate and stored as a vector; when the mouse button is released, an end point is generated: the coordinate values at release are likewise mapped into a three-dimensional coordinate and stored as a vector, yielding the direction vectors of the start and end points of the rotation interaction;
64) defining a composite quaternion q = [v, w] = [x, y, z, w] consisting of two parts: the scalar w, equal to cos(θ/2), where θ is the rotation angle, and the vector v, equal to sin(θ/2) times the unit vector along the rotation axis; the product of two quaternions is the composition of their rotations, so rotation composition is expressed by quaternion multiplication; the cross product of the two rotation vectors records the direction of the rotation axis, and their dot product records the rotation angle;
65) calling the corresponding function to store the cross product and dot product computed above as a quaternion: with OP1 and OP2 the direction vectors of the two rotations, first compute the inner product of the two vectors, s = OP1 · OP2, then the outer product of the two vectors, v = OP1 × OP2; q = [s, v] is the rotation quaternion;
66) converting the quaternion into a rotation matrix by TM = QuatToMatrix(q), where TM is the corresponding rotation matrix; this rotation matrix is applied to the model transformation matrix rendered by OpenGL, thereby controlling the rotation of the model;
67) converting the quaternion into Euler angles by Rotate = QuatToEulerAngle(q), where Rotate holds the corresponding Euler angles, so that the rotation angles about the three coordinate axes can be displayed during real-time interaction with the model;
68) performing rotation interactions on the model with the mouse to obtain a quaternion for each rotation interaction, and continuously accumulating and displaying the rotation information through quaternion multiplication.
Setting the mouse-translation interaction of the three-dimensional model comprises the following steps:
71) when the right mouse button is pressed, recording the current two-dimensional window coordinate as the start point; when the mouse stops sliding, recording the current two-dimensional window coordinate as the end point;
72) calling QT's QPointF to compute the difference between the two window coordinates in the X and Y directions, thereby determining the translation of the model along the X axis and along the Y axis;
73) calling QT's mouse-wheel mechanism to control the translation of the model along the Z axis: when the wheel scrolls up, the model moves in the positive Z direction, and when the wheel scrolls down, the model moves in the negative Z direction;
74) substituting the translation parameters of the model along the X, Y, and Z axes into a translation matrix, and substituting the computed translation matrix into the model transformation matrix, thereby controlling the translation of the model in real time.
Using the interaction results to control the model and render the display in real time comprises the following steps:
81) converting the quaternion storing the rotation information into a rotation matrix, and converting the vector storing the translation information into a translation matrix;
82) setting the model transformation matrix as the product of the translation matrix, the rotation matrix, and the scaling matrix; the model transformation matrix acts on the model, controls its transformation in the world coordinate system, and transforms the object from the model coordinate system into the world coordinate system;
83) setting the view matrix, transforming the object from the world coordinate system into the viewpoint coordinate system;
84) setting the projection matrix, applying the projection transformation to the object model and transforming the object from the viewpoint coordinate system into the clip coordinate system;
85) setting the viewport transformation, transforming the object from the clip coordinate system into the window coordinate system, thereby rendering the drawing result in the window in real time.
Advantageous effects
Compared with the prior art, the interactive method for automatically generating 2D and 3D medical image registration parameters takes into account the difference between the quality of the image drawn in the window after maximum intensity projection of the three-dimensional volume data and the quality of the X-ray image to be registered; it can align the two images with high quality, obtain a suitable set of initial registration parameters, and print those parameters to the console.
The three-dimensional CT data are drawn in a two-dimensional window through maximum intensity projection, the X-ray image to be registered is drawn in the same window as a two-dimensional texture, and the three-dimensional model is manipulated with the mouse so that it aligns with the X-ray image under direct visual observation; this improves the search precision of the initial registration parameters on the one hand, and effectively improves their search efficiency on the other.
Drawings
FIG. 1 is a sequence diagram of the method of the present invention;
FIG. 2 is a prior-art two-dimensional X-ray image of a hip joint;
FIG. 3 is the corresponding prior-art two-dimensional mask image;
FIG. 4 is the rendered display in the two-dimensional window after the three-dimensional CT data are loaded;
FIG. 5 is the display of the three-dimensional model after a translation-rotation interaction;
FIG. 6a shows the two-dimensional X-ray image and the three-dimensional model rendered simultaneously in the window;
FIG. 6b shows the two-dimensional X-ray image and the three-dimensional model aligned in the window after mouse interaction.
Detailed Description
So that the above-recited features of the present invention can be clearly understood, a more particular description of the invention, briefly summarized above, is given below with reference to embodiments, some of which are illustrated in the appended drawings:
As shown in FIG. 1, the method for automatically generating interactive 2D and 3D medical image registration parameters according to the present invention comprises the following steps:
First, the three-dimensional image data are loaded: the three-dimensional image data of the interactive medical image to be registered.
Second, the three-dimensional model is rendered and reconstructed in the window, mapped to two dimensions, and the real-time mapped image is displayed in alignment with the X-ray image to be registered.
Maximum intensity projection is used to render and display the model in the window in a short time, enabling interactive operation on the three-dimensional model; the rotation and translation parameters of the model are acquired in real time through manual operation and printed out, which facilitates the subsequent registration and alignment steps. Through interactive operation the rotation of the three-dimensional model is controlled directly, the operation is convenient and fast, and the pose of the model is steered until it aligns with the X-ray image, so that a suitable set of initial registration parameters can be obtained quickly afterwards. The difficulty of obtaining registration parameters through interactive registration lies in the rendering of, and interaction with, the three-dimensional model: displaying a three-dimensional model and a two-dimensional image in the same two-dimensional window requires calling OpenGL to render the three-dimensional model. The specific steps are as follows:
(1) Processing the three-dimensional image data into data suitable for the OpenGL rendering pipeline: for the input three-dimensional slice image sequences, i.e. DICOM data, a script is written with Python's pydicom library to convert the DICOM data into binary files suitable for the system, and the remaining DICOM data from the binary files are stored in a C++ container.
A1) Read the DICOM data in the release folder with the Python script; open a cmd console and enter a command to convert the DICOM data into binary bin files of a specific format for the subsequent rendering process, and store them under the corresponding folder;
A2) traverse all slice data with a for loop, format each data path into a character string with the C++ sprintf_s function, read the series of data under that path, and open each binary file with the C++ fopen_s function;
A3) read the information in each binary bin file with the C++ fread_s function, including the position of the slice in the image sequence, the pixel spacing of the slice image along the X axis, the pixel spacing along the Y axis, and the width and height of the slice; store the remaining DICOM data in a container, resizing the container to prevent data overflow;
A4) after the data are read, sort all slice data by slice position;
A5) load the DICOM-form data of the three-dimensional volume through the binary file information read above (a sketch of steps A2 to A5 follows this list).
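As a concrete illustration of steps A2) to A5), the following is a minimal C++ sketch of the binary reading and sorting. The per-slice header layout (position, pixel spacings, width, height), the file naming scheme, and the 16-bit pixel type are assumptions for illustration; the actual layout is whatever the Python conversion script writes.

```cpp
// Sketch of steps A2)-A5): read each slice's header and pixels from the
// converted .bin files, then sort the slices by position. Header layout,
// file names, and int16_t pixels are illustrative assumptions.
#include <cstdio>
#include <cstdint>
#include <vector>
#include <algorithm>

struct Slice {
    float position;   // position of the slice in the image sequence
    float spacingX;   // pixel spacing along the X axis
    float spacingY;   // pixel spacing along the Y axis
    int   width;
    int   height;
    std::vector<int16_t> pixels;  // remaining DICOM pixel data
};

static bool ReadSlice(int index, Slice& s) {
    char path[256];
    // format the data path into a character string (step A2)
    sprintf_s(path, sizeof(path), "release\\slice_%04d.bin", index);
    FILE* fp = nullptr;
    if (fopen_s(&fp, path, "rb") != 0 || fp == nullptr) return false;
    // read the header fields one by one (step A3)
    fread_s(&s.position, sizeof(s.position), sizeof(float), 1, fp);
    fread_s(&s.spacingX, sizeof(s.spacingX), sizeof(float), 1, fp);
    fread_s(&s.spacingY, sizeof(s.spacingY), sizeof(float), 1, fp);
    fread_s(&s.width,    sizeof(s.width),    sizeof(int),   1, fp);
    fread_s(&s.height,   sizeof(s.height),   sizeof(int),   1, fp);
    // resize the container first to prevent data overflow
    s.pixels.resize(static_cast<size_t>(s.width) * s.height);
    fread_s(s.pixels.data(), s.pixels.size() * sizeof(int16_t),
            sizeof(int16_t), s.pixels.size(), fp);
    fclose(fp);
    return true;
}

std::vector<Slice> LoadVolume(int sliceCount) {
    std::vector<Slice> volume;
    for (int i = 0; i < sliceCount; ++i) {   // for loop over all slice data
        Slice s;
        if (ReadSlice(i, s)) volume.push_back(s);
    }
    // step A4): sort all slice data by slice position after reading
    std::sort(volume.begin(), volume.end(),
              [](const Slice& a, const Slice& b) { return a.position < b.position; });
    return volume;
}
```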
(2) Loading the data into the real-time rendering pipeline: the binary data are passed into the rendering pipeline through the corresponding OpenGL functions and the fixed rendering flow.
B1) The rendering pipeline calls a function to generate a three-dimensional texture object, and the three-dimensional texture is bound in the rendering pipeline;
B2) three-dimensional texture mapping is performed, where the parameters of the mapping function are the width and height of each two-dimensional slice image and the depth of the data, and texture filtering is applied to the mapped three-dimensional texture;
B3) the vertex data attributes are generated and set, and the vertex data are bound in the rendering pipeline (a sketch of steps B1 and B2 follows this list);
B4) the OpenGL shaders are written: the vertex shader, executed once for each vertex sent to the GPU, converts the three-dimensional coordinates of each vertex in virtual space into the two-dimensional coordinates displayed in the window, together with depth information for the z-buffer; the fragment shader computes the color and other attributes of each pixel; the written shaders are compiled and linked, and deleted after compilation is finished;
B5) the loaded three-dimensional texture is passed into the shader, the three-dimensional model is drawn through texture sampling, and the loading is complete.
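A minimal sketch of steps B1) and B2) follows, using standard OpenGL calls. The GL_R16 internal format and GL_SHORT source type match the 16-bit pixel assumption above, and a loader such as GLEW is assumed to have initialized the context; these are illustrative choices, not requirements of the method.

```cpp
// Sketch of steps B1)-B2): generate and bind a 3D texture object, set texture
// filtering, and upload the volume; the upload parameters are the width and
// height of each slice and the depth (slice count) of the data.
#include <GL/glew.h>
#include <cstdint>
#include <vector>

GLuint CreateVolumeTexture(const std::vector<int16_t>& voxels,
                           int width, int height, int depth) {
    GLuint tex = 0;
    glGenTextures(1, &tex);                 // B1): generate the 3D texture object
    glBindTexture(GL_TEXTURE_3D, tex);      // bind it in the rendering pipeline

    // texture filtering for the mapped 3D texture
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    // B2): upload width x height x depth voxels as the 3D texture map
    glTexImage3D(GL_TEXTURE_3D, 0, GL_R16, width, height, depth, 0,
                 GL_RED, GL_SHORT, voxels.data());
    return tex;
}
```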
(3) Drawing the two-dimensional window: the corresponding three-dimensional CT image is projected with a maximum intensity projection algorithm so as to be displayed in the window as a volume rendering.
Maximum intensity projection visualizes the high-gray-value structures in the volume data. First the position of the light source and the position of the volume data in space are determined; a virtual ray is then cast outward from the source, and the intersection of each ray with the plane determines the position of each pixel in the MIP image. A series of virtual rays is cast through the volume data from back to front along the slice direction and projected onto a two-dimensional plane; as each ray passes through the volume, it is sampled at equal intervals, the maximum attribute value among the sampling points is taken as the ray's output, and the scalar value at each sampling position is computed by interpolation. Color-mapping the output value gives the color of the window pixel corresponding to the ray, and the final result is drawn on the projected two-dimensional plane.
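The ray loop just described can be stated compactly. The following is a CPU-side C++ sketch of the principle only; in the actual system this loop runs in the fragment shader, and the trilinear-interpolating SampleVolume helper is assumed to exist elsewhere.

```cpp
// Sketch of the MIP principle: march one ray at equal steps through the
// volume, keep the maximum sampled value, and return it for color mapping.
#include <algorithm>

struct Vec3 { float x, y, z; };

float SampleVolume(const Vec3& p);  // trilinear interpolation, assumed elsewhere

// entry/exit are the ray's intersections with the volume's bounding box
float MaxIntensityAlongRay(Vec3 entry, Vec3 exit, int steps) {
    float maxVal = 0.0f;
    for (int i = 0; i <= steps; ++i) {
        float t = static_cast<float>(i) / steps;      // equidistant sampling
        Vec3 p { entry.x + t * (exit.x - entry.x),
                 entry.y + t * (exit.y - entry.y),
                 entry.z + t * (exit.z - entry.z) };
        maxVal = std::max(maxVal, SampleVolume(p));   // keep the maximum
    }
    return maxVal;   // color-mapped to the window pixel when drawing
}
```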
C1) The ray casting algorithm is modified in the shader according to the principle of maximum intensity projection, so as to implement the MIP function;
C2) the location of the maximum intensity value, the location of the texture, the location of the camera position parameter, and the location of the model-view-projection (MVP) transformation matrix in the MIP shader are obtained with OpenGL's glGetUniformLocation function;
C3) the position parameters of the volume data in space are loaded one by one through the corresponding locations, and the three-dimensional texture map is then loaded (a sketch of steps C2 and C3 follows this list);
C4) the MIP shader is called to project the loaded three-dimensional volume data, so that the maximum-intensity-projected model is drawn in the two-dimensional window.
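A minimal sketch of steps C2) and C3) follows. glGetUniformLocation and the glUniform* calls are standard OpenGL; the uniform names ("MVP", "CameraPos", "VolumeTex") are illustrative assumptions, as the patent does not name them.

```cpp
// Sketch of steps C2)-C3): look up the uniform locations used by the MIP
// shader, then load the parameters one by one and bind the 3D texture map.
#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

void LoadMipUniforms(GLuint program, const glm::mat4& mvp,
                     const glm::vec3& cameraPos, GLuint volumeTex) {
    GLint mvpLoc = glGetUniformLocation(program, "MVP");        // MVP matrix location
    GLint camLoc = glGetUniformLocation(program, "CameraPos");  // camera position location
    GLint texLoc = glGetUniformLocation(program, "VolumeTex");  // texture location
    glUseProgram(program);
    glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, glm::value_ptr(mvp));
    glUniform3fv(camLoc, 1, glm::value_ptr(cameraPos));
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_3D, volumeTex);   // then load the 3D texture map
    glUniform1i(texLoc, 0);                    // sampler reads texture unit 0
}
```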
Maximum intensity projection simulates the original volume data relatively accurately in a short time, and the three-dimensional model is reconstructed through volume rendering; drawing the three-dimensional model in the two-dimensional window through maximum intensity projection greatly reduces the time spent on registration projection and makes interactive registration more practical. Because three-dimensional volume data generated through the maximum intensity projection facilities of the medical image processing toolkits ITK and VTK is difficult to interact with, the invention first converts the data type and performs maximum intensity projection on the volume data through the corresponding OpenGL functions so that it can be displayed in the window. This requires complex render-pipeline binding, preprocessing, and shader writing, but the rendered model is generated quickly when called.
(4) Designing the rotation interaction of the three-dimensional model with the arcball algorithm: following the arcball idea, the information of each mouse-driven rotation of the model is stored in a quaternion, and the rotation of the model is controlled by converting the quaternion into Euler angles and a rotation matrix.
D1) Map the two-dimensional window coordinates after mouse interaction: imagine a unit hemisphere centered on the window and adjust the two-dimensional window coordinates to the range [-1, 1]
by the formulas pt.x = (pt.x * AdjustWidth) - 1.0f and pt.y = 1.0f - (pt.y * AdjustHeight), where pt is the defined three-dimensional coordinate, AdjustWidth is the scaling factor for the width, and AdjustHeight is the scaling factor for the height; AdjustWidth = 1.0f / ((NewWidth - 1.0f) * 0.5f) and AdjustHeight = 1.0f / ((NewHeight - 1.0f) * 0.5f), where NewWidth and NewHeight are the width and height of the two-dimensional window;
D2) map the coordinates of the two window points onto the unit hemisphere by the mapping formula: if a two-dimensional window coordinate does not lie on the unit hemisphere centered on the window, scale it onto the hemisphere with the scale factor norm = 1.0 / FuncSqrt(length), where length = (pt.x * pt.x) + (pt.y * pt.y); the two-dimensional coordinates are thereby mapped to two points in hemisphere space; if a two-dimensional coordinate lies on the unit hemisphere, compute the Z coordinate pt.z from the X coordinate pt.x and the Y coordinate pt.y by pt.z = FuncSqrt(1.0f - length), with length = (pt.x * pt.x) + (pt.y * pt.y);
D3) generate a start point when the left mouse button is pressed: the coordinate values at the press are mapped by the above formulas into a three-dimensional coordinate and stored as a vector; when the mouse button is released, an end point is generated: the coordinate values at release are likewise mapped into a three-dimensional coordinate and stored as a vector, yielding the direction vectors of the start and end points of the rotation interaction;
D4) define a composite quaternion q = [v, w] = [x, y, z, w] consisting of two parts: the scalar w, equal to cos(θ/2), where θ is the rotation angle, and the vector v, equal to sin(θ/2) times the unit vector along the rotation axis; the product of two quaternions is the composition of their rotations, so rotation composition is expressed by quaternion multiplication; the cross product of the two rotation vectors records the direction of the rotation axis, and their dot product records the rotation angle;
D5) call the corresponding function to store the cross product and dot product computed above as a quaternion: with OP1 and OP2 the direction vectors of the two rotations, first compute the inner product of the two vectors, s = OP1 · OP2, then the outer product of the two vectors, v = OP1 × OP2; q = [s, v] is the rotation quaternion (a sketch of steps D1 to D5 follows the next paragraph);
D6) convert the quaternion into a rotation matrix by TM = QuatToMatrix(q), where TM is the corresponding rotation matrix; this rotation matrix is applied to the model transformation matrix rendered by OpenGL, thereby controlling the rotation of the model;
D7) convert the quaternion into Euler angles by Rotate = QuatToEulerAngle(q), where Rotate holds the corresponding Euler angles, so that the rotation angles about the three coordinate axes can be displayed during real-time interaction with the model;
D8) perform rotation interactions on the model with the mouse to obtain a quaternion for each rotation interaction; the rotation information is continuously accumulated and displayed through quaternion multiplication.
Quaternion rotation avoids gimbal lock: a single four-dimensional quaternion suffices to express the rotation of a vector about an arbitrary origin, which is convenient and fast, and under some implementation conditions more efficient than a rotation matrix; quaternions also support smooth interpolation. Quaternion-based rotation interaction is therefore more convenient and provides an accurate set of rotation parameters in real time. The mouse coordinates are obtained through QT mouse interaction, and the rotation interaction must be designed on the arcball principle. Most model rotation uses Euler rotation; this design uses quaternion rotation, which is somewhat more complex than Euler rotation because of the extra dimension, and is less intuitive to understand.
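The following C++ sketch combines steps D1) to D5). FuncSqrt from the text is rendered as plain sqrtf, and glm supplies the vector and quaternion types; both are illustrative substitutions, not named by the patent.

```cpp
// Sketch of steps D1)-D5): map two window points onto the unit hemisphere and
// build the rotation quaternion q = [s, v] from their dot and cross products.
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// D1)-D2): window coordinates -> point on the unit hemisphere centered on the window
glm::vec3 MapToSphere(float x, float y, float newWidth, float newHeight) {
    float adjustWidth  = 1.0f / ((newWidth  - 1.0f) * 0.5f);
    float adjustHeight = 1.0f / ((newHeight - 1.0f) * 0.5f);
    glm::vec3 pt;
    pt.x = (x * adjustWidth) - 1.0f;          // adjust to the range [-1, 1]
    pt.y = 1.0f - (y * adjustHeight);
    float length = pt.x * pt.x + pt.y * pt.y;
    if (length > 1.0f) {                      // not on the hemisphere: scale onto it
        float norm = 1.0f / sqrtf(length);
        pt = glm::vec3(pt.x * norm, pt.y * norm, 0.0f);
    } else {                                  // on the hemisphere: lift to the Z coordinate
        pt.z = sqrtf(1.0f - length);
    }
    return pt;
}

// D4)-D5): rotation quaternion from the press-point vector OP1 and release-point vector OP2
glm::quat RotationBetween(const glm::vec3& OP1, const glm::vec3& OP2) {
    float s     = glm::dot(OP1, OP2);    // inner product records the rotation angle
    glm::vec3 v = glm::cross(OP1, OP2);  // outer product records the rotation axis
    return glm::quat(s, v.x, v.y, v.z);  // q = [s, v]
}
```

For step D6), glm::mat4_cast(q) yields the rotation matrix that is applied to the model transformation matrix.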
(5) Setting the mouse-translation interaction of the three-dimensional model: the translation interaction of the model is set through window-coordinate calculation and QT mouse interaction.
E1) When the right mouse button is pressed, the current two-dimensional window coordinate is recorded as the start point; when the mouse stops sliding, the current two-dimensional window coordinate is recorded as the end point;
E2) QT's QPointF is called to compute the difference between the two window coordinates in the X and Y directions, thereby determining the translation of the model along the X axis and along the Y axis;
E3) QT's mouse-wheel mechanism is called to control the translation of the model along the Z axis: when the wheel scrolls up, the model moves in the positive Z direction, and when the wheel scrolls down, the model moves in the negative Z direction;
E4) the translation parameters of the model along the X, Y, and Z axes are substituted into a translation matrix, and the computed translation matrix is substituted into the model transformation matrix, thereby controlling the translation of the model in real time (a sketch of these steps follows this list).
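A minimal Qt sketch of steps E1) to E4) follows. QPointF and QWheelEvent::angleDelta() are standard Qt API; the widget subclass and member names (m_translation, m_lastPos) are illustrative assumptions.

```cpp
// Sketch of steps E1)-E4): accumulate X/Y translation from right-button drags
// and Z translation from the mouse wheel; the vector feeds the translation matrix.
#include <QOpenGLWidget>
#include <QMouseEvent>
#include <QWheelEvent>
#include <glm/glm.hpp>

class VolumeWidget : public QOpenGLWidget {
protected:
    void mousePressEvent(QMouseEvent* e) override {
        if (e->button() == Qt::RightButton)
            m_lastPos = e->pos();                  // E1): record the start point
    }
    void mouseMoveEvent(QMouseEvent* e) override {
        if (e->buttons() & Qt::RightButton) {
            // E2): QPointF difference gives the X and Y translation deltas
            QPointF d = QPointF(e->pos()) - QPointF(m_lastPos);
            m_translation.x += static_cast<float>(d.x());
            m_translation.y -= static_cast<float>(d.y());  // window Y grows downward
            m_lastPos = e->pos();
            update();                              // re-render with the new model matrix
        }
    }
    void wheelEvent(QWheelEvent* e) override {
        // E3): wheel up -> positive Z direction, wheel down -> negative Z direction
        m_translation.z += (e->angleDelta().y() > 0) ? 1.0f : -1.0f;
        update();
    }
private:
    QPoint m_lastPos;
    glm::vec3 m_translation { 0.0f, 0.0f, 0.0f };  // substituted into the translation matrix (E4)
};
```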
(6) Using the interaction results to control the model and render the display in real time.
F1) The quaternion storing the rotation information is converted into a rotation matrix, and the vector storing the translation information is converted into a translation matrix;
F2) the model transformation matrix is set as the product of the translation matrix, the rotation matrix, and the scaling matrix; it acts on the model, controls its transformation in the world coordinate system, and transforms the object from the model coordinate system into the world coordinate system;
F3) the view matrix is set, transforming the object from the world coordinate system into the viewpoint coordinate system;
F4) the projection matrix is set, applying the projection transformation to the object model and transforming the object from the viewpoint coordinate system into the clip coordinate system;
F5) the viewport transformation is set, transforming the object from the clip coordinate system into the window coordinate system, so that the drawing result is rendered in the window in real time (a sketch of steps F1 to F5 follows this list).
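A sketch of the matrix chain in steps F1) to F5), using glm as an illustrative math library; the camera position, field of view, and near/far planes are assumed values, not taken from the patent.

```cpp
// Sketch of steps F1)-F5): rebuild the model matrix from the stored quaternion
// and translation vector, then compose model, view, and projection.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

glm::mat4 BuildMVP(const glm::quat& rotation, const glm::vec3& translation,
                   float scale, float aspect) {
    glm::mat4 T = glm::translate(glm::mat4(1.0f), translation);   // translation matrix (F1)
    glm::mat4 R = glm::mat4_cast(rotation);                       // quaternion -> rotation matrix (F1)
    glm::mat4 S = glm::scale(glm::mat4(1.0f), glm::vec3(scale));  // scaling matrix
    glm::mat4 model = T * R * S;             // F2): model coords -> world coords

    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),     // F3): world -> viewpoint coords
                                 glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 proj = glm::perspective(glm::radians(45.0f),        // F4): viewpoint -> clip coords
                                      aspect, 0.1f, 100.0f);
    return proj * view * model;   // F5): glViewport then maps clip coords -> window coords
}
```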
Third, the two-dimensional image to be registered is loaded and drawn in the window as a two-dimensional texture: the corresponding two-dimensional image is read, converted into two-dimensional texture data by texture mapping, texels in texture space are mapped to pixels in window space, and the image is drawn in the rendering window.
Fourth, the mouse is dragged to align the three-dimensional model with the two-dimensional image so as to obtain suitable registration parameters: the three-dimensional model is dragged through rotation and translation operations with the mouse until the model and the two-dimensional image are aligned under visual observation, at which point a set of registration parameters is automatically generated and printed out.
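The console printout of the fourth step could take the following form: a minimal sketch that converts the final quaternion to Euler angles and prints the six rigid-body parameters. glm::eulerAngles returns radians; printing in degrees is an illustrative choice.

```cpp
// Sketch of the console output: print the generated initial registration
// parameters (three rotations, three translations).
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

void PrintRegistrationParameters(const glm::quat& rotation,
                                 const glm::vec3& translation) {
    glm::vec3 euler = glm::degrees(glm::eulerAngles(rotation));  // quaternion -> Euler angles
    printf("Initial registration parameters:\n");
    printf("  rotation    (deg): rx=%.2f ry=%.2f rz=%.2f\n", euler.x, euler.y, euler.z);
    printf("  translation      : tx=%.2f ty=%.2f tz=%.2f\n",
           translation.x, translation.y, translation.z);
}
```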
As shown in FIG. 2, a two-dimensional X-ray image of the hip joint, and FIG. 3, the corresponding two-dimensional mask image applied to the two-dimensional X-ray film. FIG. 4 is the projection image rendered and displayed in the window after maximum intensity projection: the image drawn in the window by projection transformation of the three-dimensional hip-joint CT slice sequence; at this point the model is shown in its initial pose, without any rotation-translation interaction. FIG. 5 shows the model pose after certain rotation and translation operations: dragging the model with the left mouse button applies rotation transformations about the X, Y, and Z axes; dragging with the right mouse button applies translation transformations along the X and Y axes; and rolling the mouse wheel forward or backward applies a translation along the Z axis. The pose shown was obtained by rotating the model eighty degrees about the X axis, translating it by 26 and 57 units along the X and Y axes respectively, and moving it 1450 units along the Z axis for magnification.
As shown in FIG. 6a, the two-dimensional X-ray hip image is rendered in the window as a two-dimensional texture to serve as the background image, and the three-dimensional hip CT image is rendered in the window through three-dimensional texture mapping and maximum intensity projection as the foreground image; the mouse can be dragged to interact with the foreground image so as to align it with the background image. FIG. 6a shows the foreground and background images in their initial poses. As shown in FIG. 6b, dragging the mouse applies rotation and translation interactions to the foreground model so that it reaches a clear alignment with the background X-ray image under visual observation; the right contour of the foreground model in the figure is approximately aligned with the background X-ray image, i.e. the translation and rotation parameters in this pose are a reasonably suitable set of initial registration parameters. With this method, a suitable set of 2D and 3D initial registration parameters can be obtained in a short time.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (8)

1. A method for automatically generating interactive 2D and 3D medical image registration parameters, characterized by comprising the following steps:
11) loading three-dimensional image data: acquiring the three-dimensional image data of the interactive medical image to be registered;
12) rendering and reconstructing the three-dimensional model in a window, mapping the three-dimensional model to two dimensions, and displaying the real-time mapped image in alignment with the X-ray image to be registered;
13) loading the two-dimensional image to be registered and drawing it in the window as a two-dimensional texture: reading the corresponding two-dimensional image, converting it into two-dimensional texture data by texture mapping, mapping texels in texture space to pixels in window space, and drawing the image in the rendering window;
14) dragging the mouse to align the three-dimensional model with the two-dimensional image so as to obtain suitable registration parameters: using mouse interaction to drag the three-dimensional model through rotation and translation operations until the three-dimensional model and the two-dimensional image are aligned under visual observation, at which point a set of registration parameters is automatically generated and printed out.
2. The method for automatically generating interactive 2D and 3D medical image registration parameters according to claim 1, wherein rendering and reconstructing the three-dimensional model in the window, mapping the three-dimensional model to two dimensions, and displaying the real-time mapped image in alignment with the X-ray image to be registered comprises the following steps:
21) processing the three-dimensional image data into data suitable for the OpenGL rendering pipeline: for the input three-dimensional slice image sequences, i.e. DICOM data, writing a script with Python's pydicom library to convert the DICOM data into binary files suitable for the system, and storing the remaining DICOM data from the binary files in a C++ container;
22) loading the data into the real-time rendering pipeline: passing the binary data into the rendering pipeline through the corresponding OpenGL functions and the fixed rendering flow;
23) drawing the two-dimensional window: projecting the corresponding three-dimensional CT image with a maximum intensity projection (MIP) algorithm so as to display it in the window as a volume rendering;
24) designing the rotation interaction of the three-dimensional model with the arcball algorithm: following the arcball idea, storing the information of each mouse-driven rotation of the model in a quaternion, and controlling the rotation of the model by converting the quaternion into Euler angles and a rotation matrix;
25) setting the mouse-translation interaction of the three-dimensional model: setting the translation interaction of the model through window-coordinate calculation and QT mouse interaction;
26) using the interaction results to control the model and render the display in real time.
3. The method for automatically generating interactive 2D and 3D medical image registration parameters according to claim 2, wherein processing the three-dimensional image data into data suitable for the OpenGL rendering pipeline comprises the following steps:
31) reading the DICOM data in the release folder with the Python script; opening a cmd console and entering a command to convert the DICOM data into binary bin files of a specific format for the subsequent rendering process, and storing them under the corresponding folder;
32) traversing all slice data with a for loop, formatting each data path into a character string with the C++ sprintf_s function, reading the data under that path, and opening each binary file with the C++ fopen_s function;
33) reading the information in each binary bin file with the C++ fread_s function, including the position of the slice in the image sequence, the pixel spacing of the slice image along the X axis, the pixel spacing along the Y axis, and the width and height of the slice; storing the remaining DICOM data in a container, and resizing the container to prevent data overflow;
34) after the data are read, sorting all slice data by slice position;
35) loading the DICOM-form data of the three-dimensional volume through the binary file information read above.
4. The method for automatically generating interactive 2D and 3D medical image registration parameters according to claim 2, wherein loading the data into the real-time rendering pipeline comprises the following steps:
41) the rendering pipeline calls a function to generate a three-dimensional texture object, and the three-dimensional texture is bound in the rendering pipeline;
42) performing three-dimensional texture mapping, where the parameters of the mapping function are the width and height of each two-dimensional slice image and the depth of the data, and applying texture filtering to the mapped three-dimensional texture;
43) generating and setting the vertex data attributes, and binding the vertex data in the rendering pipeline;
44) writing the OpenGL shaders: the vertex shader, executed once for each vertex sent to the GPU, converts the three-dimensional coordinates of each vertex in virtual space into the two-dimensional coordinates displayed in the window, together with depth information for the z-buffer; the fragment shader computes the color and other attributes of each pixel; the written shaders are compiled and linked, and deleted after compilation is finished;
45) passing the loaded three-dimensional texture into the shader and drawing the three-dimensional model through texture sampling, completing the loading.
5. The method for automatically generating interactive 2D and 3D medical image registration parameters according to claim 2, wherein drawing the two-dimensional window comprises the following steps:
51) modifying the ray casting algorithm in the shader according to the principle of maximum intensity projection, so as to implement the MIP function;
52) obtaining, with OpenGL's glGetUniformLocation function, the location of the maximum intensity value, the location of the texture, the location of the camera position parameter, and the location of the model-view-projection (MVP) transformation matrix in the MIP shader;
53) loading the position parameters of the volume data in space one by one through the corresponding locations, and then loading the three-dimensional texture map;
54) calling the MIP shader to project the loaded three-dimensional volume data, so that the maximum-intensity-projected model is drawn in the two-dimensional window.
6. The method for automatically generating the registration parameters of interactive 2D and 3D medical images as claimed in claim 2, wherein the rotation interactive mode for designing the three-dimensional model by using the arcball algorithm comprises the following steps:
61) mapping two-dimensional window coordinates after mouse interaction, imagining a unit hemisphere to be positioned at the center of a window, adjusting the range of the two-dimensional window coordinates to the range of [ -1.. 1],
the formula: pt.x ═ (pt.x × adjust width) -1.0f, pt.y ═ 1.0f- (pt.y adjust height); wherein pt is a defined three-dimensional coordinate, Adjust Width is a scaling factor of the width, and Adjust height is a scaling factor of the length; adjustistwidth is 1.0f/((NewWidth-1.0f) × 0.5f), NewWidth and NewHeight are the width and height of the two-dimensional window; adjust height ═ 1.0f/((NewHeight-1.0f) × 0.5 f);
62) the coordinates of two points of the window are unitized into two points on a hemispherical surface through a mapping formula, if the two-dimensional window coordinate is not on a unit hemisphere taking the center of the window as the center of the circle, the two-dimensional coordinate is zoomed on the hemisphere, and the zoom factor is set to norm 1.0/FuncSqrt (length), wherein length is (pt.x) + (pt.y); mapping and converting the two-dimensional coordinates into two points on a hemisphere space, if the two-dimensional coordinates are on a unit hemisphere, calculating a Z-direction coordinate pt.z according to an X-direction axis coordinate pt.x and a Y-direction coordinate pt.y, wherein the calculation formula is pt.z ═ FuncSqrt (1.0f-length), and the length ═ pt.x) + (pt.y @);
63) setting that a starting point is generated when a left mouse button is pressed down, and mapping coordinate values when the mouse button is pressed down into three-dimensional coordinates through previous coordinates to be stored in a vector form; when the mouse is released, generating an end point, mapping coordinate values during the release into three-dimensional coordinates through coordinates, and storing the three-dimensional coordinates in a vector form to obtain direction vectors of the starting point and the end point in the rotary interaction process;
64) setting a combined quaternion q ═ v, w ═ x, y, z, w ] composed of two parts, one is the number w, which is equal to cos θ/2, where θ is the angle of rotation, and one is the vector v, which is equal to sin θ/2 times the vector along the axis of rotation; the result of the two quaternion operations is the result of their rotation combination, so that the rotation combination operation is expressed by quaternion cross multiplication; the cross product of the two rotation vectors records the direction of the rotation axis, and the dot product of the two vectors records the rotation angle;
65) calling the corresponding function to store the previously calculated cross product and dot product in quaternion form: with OP1 and OP2 the direction vectors of the two rotation points, first obtain the inner product s of the two vectors, s = OP1 · OP2, then the outer product v of the two vectors, v = OP1 × OP2; q = [v, s] (vector part v, scalar part s) is the rotation quaternion (see the illustrative sketch after step 68);
66) converting the quaternion into a rotation matrix by the formula TM = QuatToMatrix(q), where TM is the corresponding rotation matrix; the rotation matrix acts on the model transformation matrix rendered by OpenGL, thereby controlling the rotation of the model;
67) converting the quaternion into Euler angles by the formula Rotate = QuatToEulerAngle(q), where Rotate holds the corresponding Euler angles, so that the rotation angles about the three coordinate axes can be displayed during real-time interaction with the model;
68) performing rotation interactions on the model with the mouse, obtaining one quaternion per rotation interaction; the accumulated rotation information is stored and displayed by multiplying the successive quaternions.
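By way of illustration only, a minimal C++ sketch of the arcball mapping and quaternion construction of steps 61) to 65) might look as follows; FuncSqrt is taken to be an ordinary square root, and the type and function names are assumptions rather than the patent's own code:

    #include <cmath>

    struct Vec3 { float x, y, z; };
    struct Quat { float x, y, z, w; };  // q = [v, w] as in step 64)

    // Steps 61)-62): map a window point (px, py) onto the unit hemisphere.
    Vec3 mapToSphere(float px, float py, float width, float height) {
        float adjustWidth  = 1.0f / ((width  - 1.0f) * 0.5f);
        float adjustHeight = 1.0f / ((height - 1.0f) * 0.5f);
        Vec3 pt;
        pt.x = (px * adjustWidth) - 1.0f;
        pt.y = 1.0f - (py * adjustHeight);
        float length = pt.x * pt.x + pt.y * pt.y;
        if (length > 1.0f) {                  // outside the unit circle:
            float norm = 1.0f / std::sqrt(length);
            pt.x *= norm;                     // scale back onto the rim
            pt.y *= norm;
            pt.z = 0.0f;
        } else {
            pt.z = std::sqrt(1.0f - length);  // lift onto the hemisphere
        }
        return pt;
    }

    // Step 65): rotation quaternion from the press/release vectors OP1, OP2;
    // the vector part is their cross product, the scalar part their dot product.
    Quat arcballQuat(const Vec3& a, const Vec3& b) {
        Quat q;
        q.x = a.y * b.z - a.z * b.y;
        q.y = a.z * b.x - a.x * b.z;
        q.z = a.x * b.y - a.y * b.x;
        q.w = a.x * b.x + a.y * b.y + a.z * b.z;
        return q;
    }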
7. The method for automatically generating interactive 2D and 3D medical image registration parameters according to claim 2, wherein setting the mouse-translation interaction mode of the three-dimensional model comprises the following steps:
71) when the right mouse button is pressed, recording the current two-dimensional window coordinate as the starting point; when the mouse stops sliding, recording the current two-dimensional window coordinate as the end point;
72) calling QT's QPointF to calculate the differences between the two window coordinates in the X-axis and Y-axis directions, which determine the translation of the model along the X axis and the Y axis;
73) using QT's mouse-wheel mechanism to control the translation of the model along the Z axis: when the wheel scrolls up, the model moves in the positive Z direction, and when the wheel scrolls down, the model moves in the negative Z direction;
74) substituting the translation parameters of the model along the X, Y and Z axes into a translation matrix, and multiplying the calculated translation matrix into the model transformation matrix, thereby controlling the translation of the model in real time.
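By way of illustration only, a minimal Qt/C++ sketch of the translation interaction of steps 71) to 74) might look as follows, assuming a QOpenGLWidget subclass; the member names and scale factors are assumptions:

    #include <QMouseEvent>
    #include <QOpenGLWidget>
    #include <QVector3D>
    #include <QWheelEvent>

    class VolumeWidget : public QOpenGLWidget {
    protected:
        void mousePressEvent(QMouseEvent* e) override {
            if (e->button() == Qt::RightButton)
                m_lastPos = e->pos();           // step 71): starting point
        }
        void mouseMoveEvent(QMouseEvent* e) override {
            if (e->buttons() & Qt::RightButton) {
                // Step 72): window-coordinate differences drive X/Y translation.
                QPointF delta = e->pos() - m_lastPos;
                m_translation += QVector3D(delta.x() * m_scale,
                                           -delta.y() * m_scale, 0.0f);
                m_lastPos = e->pos();
                update();                       // step 74): re-render
            }
        }
        void wheelEvent(QWheelEvent* e) override {
            // Step 73): wheel up -> +Z, wheel down -> -Z.
            float dir = (e->angleDelta().y() > 0) ? 1.0f : -1.0f;
            m_translation += QVector3D(0.0f, 0.0f, dir * m_step);
            update();
        }
    private:
        QPoint m_lastPos;
        QVector3D m_translation;
        float m_scale = 0.01f;
        float m_step  = 0.1f;
    };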
8. The method for automatically generating interactive 2D and 3D medical image registration parameters according to claim 2, wherein controlling the model with the interaction results for real-time rendering and display comprises the following steps:
81) converting the quaternion that stores the rotation information into a rotation matrix, and converting the vector that stores the translation information into a translation matrix;
82) setting the model transformation matrix to the product of the translation matrix, the rotation matrix and the scaling matrix; the model transformation matrix acts on the model and transforms the object from the model coordinate system to the world coordinate system;
83) setting a view matrix, which transforms the object from the world coordinate system to the viewpoint coordinate system;
84) setting a projection matrix, which applies the projection transformation to the object model and transforms the object from the viewpoint coordinate system to the clipping coordinate system;
85) setting the viewport transformation, which transforms the object from the clipping coordinate system to the window coordinate system, so that the rendering result is drawn on the window in real time.
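By way of illustration only, a minimal C++ sketch of the transformation chain of steps 81) to 85) might look as follows, here using the GLM library (an assumption; the patent does not name a matrix library), with the viewport transform of step 85) left to OpenGL's glViewport:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/quaternion.hpp>

    // Steps 81)-82): model matrix = translation * rotation * scale.
    glm::mat4 modelMatrix(const glm::quat& rotation,
                          const glm::vec3& translation,
                          const glm::vec3& scale) {
        glm::mat4 T = glm::translate(glm::mat4(1.0f), translation);
        glm::mat4 R = glm::mat4_cast(rotation);  // quaternion -> rotation matrix
        glm::mat4 S = glm::scale(glm::mat4(1.0f), scale);
        return T * R * S;                        // model -> world
    }

    // Steps 83)-84): view matrix (world -> viewpoint) and projection
    // matrix (viewpoint -> clipping); camera parameters are assumptions.
    glm::mat4 mvp(const glm::mat4& model) {
        glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),   // eye
                                     glm::vec3(0.0f, 0.0f, 0.0f),   // target
                                     glm::vec3(0.0f, 1.0f, 0.0f));  // up
        glm::mat4 proj = glm::perspective(glm::radians(45.0f),
                                          4.0f / 3.0f, 0.1f, 100.0f);
        return proj * view * model;
        // Step 85): the clip -> window transform is applied by OpenGL
        // after a call such as glViewport(0, 0, width, height).
    }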
CN202210057988.XA 2022-01-19 2022-01-19 Interactive 2D and 3D medical image registration parameter automatic generation method Pending CN114418992A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210057988.XA CN114418992A (en) 2022-01-19 2022-01-19 Interactive 2D and 3D medical image registration parameter automatic generation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210057988.XA CN114418992A (en) 2022-01-19 2022-01-19 Interactive 2D and 3D medical image registration parameter automatic generation method

Publications (1)

Publication Number Publication Date
CN114418992A true CN114418992A (en) 2022-04-29

Family

ID=81274333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210057988.XA Pending CN114418992A (en) 2022-01-19 2022-01-19 Interactive 2D and 3D medical image registration parameter automatic generation method

Country Status (1)

Country Link
CN (1) CN114418992A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630450A (en) * 2023-05-29 2023-08-22 中国人民解放军陆军军医大学 Method, device and storage medium for extracting and encoding characteristics in arterial interlayer cavity
CN116958332A (en) * 2023-09-20 2023-10-27 南京竹影数字科技有限公司 Method and system for mapping 3D model in real time of paper drawing based on image recognition
CN116958332B (en) * 2023-09-20 2023-12-22 南京竹影数字科技有限公司 Method and system for mapping 3D model in real time of paper drawing based on image recognition
CN117058342A (en) * 2023-10-12 2023-11-14 天津科汇新创科技有限公司 Spine 3D voxel model construction method based on projection image
CN117058342B (en) * 2023-10-12 2024-01-26 天津科汇新创科技有限公司 Spine 3D voxel model construction method based on projection image
CN117173314A (en) * 2023-11-02 2023-12-05 腾讯科技(深圳)有限公司 Image processing method, device, equipment, medium and program product
CN117173314B (en) * 2023-11-02 2024-02-23 腾讯科技(深圳)有限公司 Image processing method, device, equipment, medium and program product
CN117974647A (en) * 2024-03-29 2024-05-03 青岛大学 Three-dimensional linkage type measurement method, medium and system for two-dimensional medical image
CN117974647B (en) * 2024-03-29 2024-06-07 青岛大学 Three-dimensional linkage type measurement method, medium and system for two-dimensional medical image

Similar Documents

Publication Publication Date Title
CN114418992A (en) Interactive 2D and 3D medical image registration parameter automatic generation method
US10733745B2 (en) Methods, systems, and computer readable media for deriving a three-dimensional (3D) textured surface from endoscopic video
Pandey et al. Image mosaicing: A deeper insight
Ahmed et al. Dense correspondence finding for parametrization-free animation reconstruction from video
Rematas et al. Image-based synthesis and re-synthesis of viewpoints guided by 3d models
Wang et al. 3d shape reconstruction from free-hand sketches
Tian et al. Medical image processing and analysis
Xu et al. Animating animal motion from still
Shen et al. A geometry-informed deep learning framework for ultra-sparse 3D tomographic image reconstruction
CN114445431B (en) Method and device for arbitrarily cutting medical three-dimensional image
Chen et al. Autosweep: Recovering 3d editable objects from a single photograph
Li et al. Animated 3D human avatars from a single image with GAN-based texture inference
Kang et al. Competitive learning of facial fitting and synthesis using uv energy
CN114863076A (en) Interactive image editing
Peng et al. XraySyn: realistic view synthesis from a single radiograph through CT priors
CN114494001A (en) Rendering sampling based three-dimensional model surface pattern flattening extraction method
Yin et al. Weakly-supervised photo-realistic texture generation for 3d face reconstruction
CN113658284A (en) X-ray image synthesis from CT images for training nodule detection systems
Shen et al. Novel-view X-ray projection synthesis through geometry-integrated deep learning
Maninchedda et al. Semantic 3d reconstruction of heads
CN114886558A (en) Endoscope projection method and system based on augmented reality
Chen 3D reconstruction of endoscopy images with NeRF
Bouafif et al. Monocular 3D head reconstruction via prediction and integration of normal vector field
CN112489218A (en) Single-view three-dimensional reconstruction system and method based on semi-supervised learning
Sajnani et al. GeoDiffuser: Geometry-Based Image Editing with Diffusion Models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination