CN117635656A - Medical image registration method, device, equipment and storage medium


Info

Publication number
CN117635656A
Authority
CN
China
Prior art keywords: dimensional, image, target, cone, ray
Prior art date
Legal status: Pending
Application number
CN202311586601.0A
Other languages
Chinese (zh)
Inventor
郭延恩
杨杰
徐少康
邵明昊
唐文彬
宓海
蔡宁
Current Assignee
Shanghai Jirui Medical Technology Co ltd
Original Assignee
Shanghai Jirui Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Jirui Medical Technology Co ltd
Priority to CN202311586601.0A
Publication of CN117635656A

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The application provides a medical image registration method, apparatus, device, and storage medium, relating to the technical field of medical image processing. The method comprises: acquiring a three-dimensional CT image and a two-dimensional X-ray image of a target part of a target object, both of which contain several consecutive vertebral bodies; processing the three-dimensional CT image and the two-dimensional X-ray image to obtain an initial positioning transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image, and a second target two-dimensional image corresponding to the two-dimensional X-ray image, both target images being marked with vertebral-body labels; inputting the two target images into a preset neural network model, which outputs rotation-translation parameters; and determining a registration transformation matrix from the initial positioning transformation matrix and the rotation-translation parameters, the registration transformation matrix being used to register the target part. The method improves the efficiency and accuracy of surgical navigation.

Description

Medical image registration method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of medical image processing technologies, and in particular, to a medical image registration method, apparatus, device, and storage medium.
Background
Surgical navigation, also known as image-guided surgery (Image Guided Surgery), is an advanced medical technique. In surgical navigation, the preoperative plan can be mapped to the patient's actual intraoperative position through a registration transformation matrix determined by image registration, so as to guide a surgical robot or assist the doctor in performing high-precision surgical operations.
However, traditional image registration methods generally acquire the registration transformation matrix through many iterations. The computation is time-consuming and easily falls into local optima, so the resulting matrix is not accurate enough, which seriously limits the efficiency and accuracy of surgical navigation.
For this reason, a new medical image registration method is needed to solve the above-mentioned problems.
Disclosure of Invention
The application provides a medical image registration method, apparatus, device, and storage medium to improve the efficiency and accuracy of surgical navigation.
In a first aspect, the present application provides a medical image registration method, the method comprising:
acquiring a three-dimensional CT image and a two-dimensional X-ray image of a target part of a target object; the three-dimensional CT image is an image acquired before the target part is processed, the two-dimensional X-ray image is an image acquired in the process of processing the target part, and the three-dimensional CT image and the two-dimensional X-ray image both comprise a plurality of sections of continuous vertebral bodies;
processing the three-dimensional CT image and the two-dimensional X-ray image to obtain an initial positioning transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image, and a second target two-dimensional image corresponding to the two-dimensional X-ray image; wherein the first target two-dimensional image and the second target two-dimensional image are marked with vertebral-body labels;
inputting the first target two-dimensional image and the second target two-dimensional image into a preset neural network model, and outputting rotation-translation parameters;
determining a registration transformation matrix according to the initial positioning transformation matrix and the rotation-translation parameters; the registration transformation matrix is used for registering the target part.
Optionally, processing the three-dimensional CT image and the two-dimensional X-ray image to obtain an initial positioning transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image, and a second target two-dimensional image corresponding to the two-dimensional X-ray image, including:
performing initial positioning processing on the three-dimensional CT image to obtain an initial positioning transformation matrix of the three-dimensional CT image;
generating a two-dimensional base image of the three-dimensional CT image by projection, based on imaging parameters of a C-arm X-ray machine corresponding to the two-dimensional X-ray image;
and performing vertebral-body detection processing on the two-dimensional base image and the two-dimensional X-ray image, and marking vertebral-body labels on them to obtain the first target two-dimensional image and the second target two-dimensional image.
Optionally, performing initial positioning processing on the three-dimensional CT image to obtain an initial positioning transformation matrix of the three-dimensional CT image includes:
determining coordinates of key points of the target part, and determining coordinates of the corresponding position points on the three-dimensional CT image;
and determining an initial positioning transformation matrix of the three-dimensional CT image based on the coordinates of the key points and the coordinates of the corresponding position points.
Optionally, generating the two-dimensional base image of the three-dimensional CT image by projection based on imaging parameters of the C-arm X-ray machine corresponding to the two-dimensional X-ray image includes:
setting the positions of the emitting source and receiver of the C-arm X-ray machine and the position of the three-dimensional CT image according to the imaging parameters of the C-arm X-ray machine corresponding to the two-dimensional X-ray image;
and generating a two-dimensional base image of the three-dimensional CT image by projection using a digitally reconstructed radiograph (DRR) method.
Optionally, the vertebral-body label includes a vertebral-body wrapping rectangle; performing vertebral-body detection processing on the two-dimensional base image and the two-dimensional X-ray image and marking vertebral-body labels on them to obtain the first target two-dimensional image and the second target two-dimensional image includes:
determining the boundary of each vertebral body on the two-dimensional base image based on a preset boundary determination mode, marking vertebral-body wrapping rectangles on the two-dimensional base image based on those boundaries to obtain the first target two-dimensional image, and marking the same wrapping rectangles on the position areas of the corresponding vertebral bodies on the two-dimensional X-ray image to obtain the second target two-dimensional image;
or, determining the boundary of each vertebral body on the two-dimensional X-ray image based on a preset boundary determination mode, marking vertebral-body wrapping rectangles on the two-dimensional X-ray image based on those boundaries to obtain the second target two-dimensional image, and marking the same wrapping rectangles on the position areas of the corresponding vertebral bodies on the two-dimensional base image to obtain the first target two-dimensional image.
Optionally, determining a registration transformation matrix according to the initial positioning transformation matrix and the rotation-translation parameters includes:
converting the rotation-translation parameters into a rotation-translation matrix;
and calculating the product of the initial positioning transformation matrix and the rotation-translation matrix, and determining the product as the registration transformation matrix.
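The conversion and matrix product described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the six-parameter layout (three Euler angles in radians composed as Rz·Ry·Rx, followed by a translation) is an assumption, since the patent does not fix a convention.

```python
import math

def rotation_translation_matrix(rx, ry, rz, tx, ty, tz):
    """Build a 4x4 homogeneous matrix from three Euler angles (radians,
    Rz @ Ry @ Rx composition assumed) and a translation vector."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    r = [  # rotation block R = Rz @ Ry @ Rx
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,                cy * cx],
    ]
    m = [row + [t] for row, t in zip(r, (tx, ty, tz))]
    m.append([0.0, 0.0, 0.0, 1.0])
    return m

def matmul4(a, b):
    """Plain 4x4 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def registration_matrix(m0, params):
    """Registration matrix = initial positioning matrix composed with the
    rotation-translation matrix built from the network's six parameters."""
    return matmul4(m0, rotation_translation_matrix(*params))
```

With zero parameters the registration matrix reduces to the initial positioning matrix, which matches the intent that the network only refines the initial placement.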
Optionally, there may be multiple sets of imaging parameters of the C-arm X-ray machine corresponding to the two-dimensional X-ray image, and a corresponding two-dimensional base image of the three-dimensional CT image may be generated from each set; when there are multiple sets of imaging parameters, the preset neural network model includes a plurality of input layers and a plurality of intermediate layers.
In a second aspect, the present application provides a medical image registration apparatus, the apparatus comprising:
an acquisition unit for acquiring a three-dimensional CT image and a two-dimensional X-ray image of a target portion of a target object; the three-dimensional CT image is an image acquired before the target part is processed, the two-dimensional X-ray image is an image acquired in the process of processing the target part, and the three-dimensional CT image and the two-dimensional X-ray image both comprise a plurality of sections of continuous vertebral bodies;
a processing unit, configured to process the three-dimensional CT image and the two-dimensional X-ray image to obtain an initial positioning transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image, and a second target two-dimensional image corresponding to the two-dimensional X-ray image; wherein the first target two-dimensional image and the second target two-dimensional image are marked with vertebral-body labels;
a model calling unit, configured to input the first target two-dimensional image and the second target two-dimensional image into a preset neural network model and output rotation-translation parameters;
a determining unit, configured to determine a registration transformation matrix according to the initial positioning transformation matrix and the rotation-translation parameters; the registration transformation matrix is used for registering the target part.
In a third aspect, the present application provides an electronic device, including: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method as described above.
In a fourth aspect, the present application provides a computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method as described above.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the method described above.
The medical image registration method, apparatus, device, and storage medium provided by the application comprise: acquiring a three-dimensional CT image and a two-dimensional X-ray image of a target part of a target object, the three-dimensional CT image being acquired before the target part is treated, the two-dimensional X-ray image being acquired during treatment, and both images containing several consecutive vertebral bodies; processing the two images to obtain an initial positioning transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image, and a second target two-dimensional image corresponding to the two-dimensional X-ray image, both marked with vertebral-body labels; inputting the two target images into a preset neural network model, which outputs rotation-translation parameters; and determining a registration transformation matrix according to the initial positioning transformation matrix and the rotation-translation parameters, the registration transformation matrix being used for registering the target part. By introducing a deep-learning neural network model to calculate the rotation-translation parameters, the required registration transformation matrix can be obtained quickly and accurately: the time spent on image registration is greatly shortened, the resulting matrix is more accurate, the preoperative plan can be quickly and accurately mapped to the patient's actual position, and the efficiency and accuracy of surgical navigation are improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flow chart of a medical image registration method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a process flow according to an embodiment of the present disclosure;
fig. 3 is a schematic view of a cone wrapping rectangle according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a further process flow according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a preset neural network model according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another preset neural network model according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a model training process according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a medical image registration apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of still another medical image registration apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Specific embodiments thereof have been shown by way of example in the drawings and will herein be described in more detail. These drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but to illustrate the concepts of the present application to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the appended claims.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
First, some technical terms related to the present application will be explained:
image registration, i.e., the alignment of one image with another image over an image structure (e.g., organ, tissue, lesion, etc.) is accomplished by spatial transformation, such as rotation or translation, etc.
Digitally reconstructed radiograph (Digitally Reconstructed Radiographs, DRR): a simulated radiographic image obtained by perspective-projecting a three-dimensional image (three-dimensional volume data) onto a two-dimensional image plane. Tissue structures and anatomical information in the three-dimensional image are projected onto the two-dimensional image by a computer algorithm to simulate the image produced by X-ray or other radiographic imaging techniques. The technique is widely used in medical imaging for surgical planning, image diagnosis, treatment simulation, and the like, helping doctors better understand a patient's anatomy and make relevant medical decisions.
Style transfer (Style Transfer): a computer vision technique that transfers the style of one image to another, creating a new image that retains the content of the original but adopts the visual style of the other. In medical imaging, style transfer fuses the style features of different medical images: the style features of one medical image are transferred to another, producing a new image that keeps the content information of the original while adopting the visual style of the other. The goal is to convert medical images of different sources and types into a specific visual presentation while preserving the diagnostic information and anatomical structure of the original. For example, the style of a CT scan may be converted to resemble a magnetic resonance imaging (Magnetic Resonance Imaging, MRI) image, or the style of an X-ray image may be transferred to resemble an ultrasound image.
Traditional surgical planning typically relies on a doctor manually locating the focal region on a three-dimensional image of the patient acquired before surgery and creating a surgical plan from it, with the surgical site located through the wound, the doctor's tactile perception, or by manually marking points on the patient's body surface and taking X-ray images repeatedly. This workflow is complex, the localization is inaccurate, and it can, to some extent, harm the patient.
With the development of medical technology, surgical navigation, also known as image-guided surgery (Image Guided Surgery), has emerged as an advanced medical technique combining computer image processing with medical imaging. Before surgery, organs or tissues are segmented on the patient's images and the operation is planned and simulated; during surgery, two-dimensional (2D) X-ray images are taken with a C-arm X-ray machine, and image registration between the preoperative three-dimensional (3D) CT image and the intraoperative two-dimensional X-ray image yields a registration transformation matrix, so that the preoperative plan can be accurately mapped to the patient's actual intraoperative position, guiding a surgical robot or assisting the doctor in performing high-precision operations. The doctor can thus guide the operation with real-time images, better locate the lesion, protect the surrounding healthy tissue, and ensure the success and safety of the operation.
In surgical navigation, it is critical to acquire the registration transformation matrix quickly and accurately through image registration. However, current image registration methods generally acquire the matrix through many iterations: the computation is time-consuming and easily falls into local optima, so the resulting matrix is not accurate enough, which seriously limits the efficiency and accuracy of surgical navigation.
To solve these problems, the application provides a medical image registration method, apparatus, device, and storage medium. After the acquired three-dimensional CT image and two-dimensional X-ray image are preprocessed, a pre-trained neural network model is called to calculate the spatial transformation parameters (i.e., the rotation-translation parameters), from which the registration transformation matrix is computed. By introducing a deep-learning neural network model to calculate the rotation-translation parameters, the required registration transformation matrix can be obtained quickly and accurately, the time spent on image registration is greatly shortened, the resulting matrix is more accurate, the preoperative plan can be quickly and accurately mapped to the patient's actual position, and the efficiency and accuracy of surgical navigation are improved.
The following describes the technical solutions of the present application and how the technical solutions of the present application solve the above technical problems in detail with specific embodiments. The following embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a medical image registration method according to an embodiment of the present application. The execution subject of the embodiments may be a medical image registration apparatus deployed in a surgical navigation system, which may run on an electronic device such as a server or a server cluster; this is not limited here. The embodiments are described below taking the medical image registration apparatus as the execution subject.
As shown in fig. 1, the medical image registration method provided in this embodiment includes:
s101, acquiring a three-dimensional CT image and a two-dimensional X-ray image of a target part of a target object; the three-dimensional CT image is an image acquired before the target part is processed, the two-dimensional X-ray image is an image acquired in the process of processing the target part, and the three-dimensional CT image and the two-dimensional X-ray image both comprise a plurality of sections of continuous vertebral bodies.
For example, in the present application, the three-dimensional CT image and the two-dimensional X-ray image of the target portion of the target object may be acquired by the corresponding acquisition devices and uploaded to the medical image registration apparatus, or may be input manually by a user. It will be understood that the target object is the object to be operated on and the target site is the site to be operated on; in this application the site to be operated on mainly refers to the spine. The three-dimensional CT image is obtained by computed tomography (Computed Tomography, CT) scanning of the spine before the operation, the two-dimensional X-ray image is obtained by scanning the spine with a C-arm X-ray machine during the operation, and both images contain several consecutive vertebral bodies.
S102, processing the three-dimensional CT image and the two-dimensional X-ray image to obtain an initial positioning transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image, and a second target two-dimensional image corresponding to the two-dimensional X-ray image; the first target two-dimensional image and the second target two-dimensional image are marked with vertebral-body labels.
For example, after the three-dimensional CT image and the two-dimensional X-ray image of the target portion are acquired, the medical image registration apparatus processes them; the specific processing includes initial positioning, vertebral-body segmentation, digitally reconstructed radiograph (DRR) generation, vertebral-body detection, and the like, whereby the initial positioning transformation matrix of the three-dimensional CT image, the first target two-dimensional image corresponding to the three-dimensional CT image, and the second target two-dimensional image corresponding to the two-dimensional X-ray image can be obtained.
In one example, processing the three-dimensional CT image and the two-dimensional X-ray image to obtain an initial positioning transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image, and a second target two-dimensional image corresponding to the two-dimensional X-ray image may include the following steps S10 to S30:
s10, performing initial positioning processing on the three-dimensional CT image to obtain an initial positioning transformation matrix of the three-dimensional CT image.
Illustratively, initial positioning spatially transforms the preoperative three-dimensional CT image to its corresponding intraoperative position before image registration, so that the transformed position approximates the true position.
In one example, performing initial positioning processing on the three-dimensional CT image to obtain an initial positioning transformation matrix of the three-dimensional CT image may include:
s11, determining coordinates of key points of the target part, and determining coordinates of position points which are the same as the key points on the three-dimensional CT image.
S12, determining an initial positioning transformation matrix of the three-dimensional CT image based on the coordinates of the key points and the coordinates of the corresponding position points.
For example, several key points may be selected on the target part of the target object, and their coordinates determined and recorded as CorT; the corresponding position points are selected on the three-dimensional CT image and their coordinates recorded as CorS. The initial positioning transformation matrix, denoted M0, is then obtained from the point registration formula CorS = M0 * CorT.
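When M0 is a rigid transformation, the point registration relation CorS = M0 * CorT can be solved in closed form from the point pairs. The sketch below uses the Kabsch (SVD-based) algorithm; this solver choice and the function names are assumptions for illustration, since the patent does not name a specific method.

```python
import numpy as np

def initial_transform(cor_t, cor_s):
    """Solve CorS = M0 * CorT for a rigid 4x4 matrix M0 via the Kabsch
    algorithm. cor_t, cor_s: (N, 3) arrays of corresponding points
    (target-part keypoints and the same anatomical points on the CT)."""
    ct, cs = cor_t.mean(axis=0), cor_s.mean(axis=0)
    h = (cor_t - ct).T @ (cor_s - cs)        # 3x3 cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T  # rotation mapping CorT -> CorS
    m0 = np.eye(4)
    m0[:3, :3] = r
    m0[:3, 3] = cs - r @ ct                  # translation component
    return m0
```

At least three non-collinear point pairs are needed for the rotation to be uniquely determined.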
S20, generating a two-dimensional base image of the three-dimensional CT image by projection, based on imaging parameters of the C-arm X-ray machine corresponding to the two-dimensional X-ray image.
The imaging parameters of the C-arm X-ray machine include, for example, the distance from the emitting source to the receiver (detector), the size of the receiver, and the spatial resolution. Based on these imaging parameters, a two-dimensional base image of the three-dimensional CT image can be generated by projection.
In one example, generating the two-dimensional base image of the three-dimensional CT image based on imaging parameters of the C-arm X-ray machine corresponding to the two-dimensional X-ray image may include: setting the positions of the emitting source and receiver of the C-arm X-ray machine and the position of the three-dimensional CT image according to those imaging parameters, and generating the two-dimensional base image by projection using a digitally reconstructed radiograph method.
Illustratively, according to the imaging parameters of the C-arm X-ray machine, the positions of its emitting source and receiver can be set and the three-dimensional CT image placed accordingly; a two-dimensional base image of the three-dimensional CT image is then generated by projection using an algorithm such as Siddon's ray-tracing algorithm (a classical method for computing radiological line integrals through a voxel volume).
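To illustrate the line-integral idea behind DRR generation, the toy sketch below uses a parallel-beam simplification: it sums attenuation along one volume axis and maps the sums to intensities. This is an assumption-laden simplification, not Siddon's algorithm, which traces each diverging ray from the point source through the voxel grid using the actual C-arm geometry.

```python
import numpy as np

def parallel_drr(volume, axis=0):
    """Toy DRR: integrate attenuation along parallel rays (one volume axis).
    A real DRR places a point source and detector from the C-arm imaging
    parameters and traces each diverging ray exactly; summing along an axis
    keeps only the line-integral idea."""
    line_integrals = volume.sum(axis=axis)
    # Map line integrals to X-ray-like intensities (Beer-Lambert law):
    # thicker / denser tissue along a ray yields a darker pixel.
    return np.exp(-line_integrals)
```

The output has one pixel per ray, so projecting a (D, H, W) volume along axis 0 yields an (H, W) image.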
There may be multiple sets of imaging parameters of the C-arm X-ray machine, and a corresponding two-dimensional base image of the three-dimensional CT image can be generated from each set. When there are multiple sets of imaging parameters, several corresponding base images are generated, and the subsequently processed two-dimensional base images and two-dimensional X-ray images are input into the preset neural network model in groups.
S30, performing vertebral-body detection processing on the two-dimensional base image and the two-dimensional X-ray image, and marking vertebral-body labels on each to obtain the first target two-dimensional image and the second target two-dimensional image.
Illustratively, the vertebral-body label includes information such as the vertebral-body wrapping rectangle and the vertebra name. In the detection processing, either the two-dimensional base image or the two-dimensional X-ray image may be processed first. It will be appreciated that the vertebral bodies in the image are segmented before the detection processing; the specific implementation of the segmentation is not limited in this application.
Fig. 2 is a schematic architecture diagram of a processing flow provided in an embodiment of the present application. As shown in fig. 2, the two-dimensional base image Im is processed first: the vertebral body detection module processes the two-dimensional base image Im of the three-dimensional CT image to obtain a two-dimensional base image Im marked with vertebral body labels (i.e., the first target two-dimensional image), and the two-dimensional X-ray image X is then processed based on that labelled image to obtain a two-dimensional X-ray image X marked with vertebral body labels (i.e., the second target two-dimensional image).
Specifically, during processing, for each vertebral body on the two-dimensional base image, the vertebral body boundary is determined using a preset boundary determination method, wrapping rectangles are marked on the two-dimensional base image based on those boundaries to obtain the first target two-dimensional image, and the same wrapping rectangles are marked on the corresponding vertebral body regions of the two-dimensional X-ray image to obtain the second target two-dimensional image.
It will be appreciated that the acquired three-dimensional CT image includes multiple consecutive vertebral bodies, so the two-dimensional base image projected from it also includes multiple consecutive vertebral bodies. Each vertebral body is independent. On the two-dimensional base image of the three-dimensional CT image, take the left-right direction as the X direction and the up-down direction as the Y direction; the wrapping rectangle of each vertebral body can then be determined from its boundaries. For example, the left and right boundaries of the two-dimensional base image may be taken as the left and right boundaries of the wrapping rectangle, i.e., the X direction is recorded as [0, X_max]; further, the extremal upper and lower boundaries of the vertebral body are taken as the upper and lower boundaries of the wrapping rectangle, i.e., the Y direction is recorded as [Y_min, Y_max].
Optionally, to enclose each vertebral body as completely as possible, the upper and lower boundaries can be expanded outward by a certain distance; for example, after expanding by 2 mm beyond the extremal upper and lower boundaries, the expanded boundaries are taken as the upper and lower boundaries of the wrapping rectangle.
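The boundary rule above can be sketched as follows, assuming a binary segmentation mask per vertebral body; the function name and the pixel margin are illustrative only (the 2 mm of the example would first be converted to pixels using the image spacing):

```python
import numpy as np

def vertebra_bbox(mask, margin_px=0):
    """Compute the wrapping rectangle of one segmented vertebral body.
    Following the text: X spans the full image width, while the Y
    bounds come from the mask's extremal rows, optionally expanded
    outward by a margin (clamped to the image)."""
    rows = np.any(mask, axis=1)                   # rows containing the vertebra
    y_min, y_max = np.where(rows)[0][[0, -1]]     # extremal upper/lower bounds
    y_min = max(y_min - margin_px, 0)
    y_max = min(y_max + margin_px, mask.shape[0] - 1)
    return (0, mask.shape[1] - 1, y_min, y_max)   # (x0, x1, y0, y1)

# Toy mask: vertebra occupies rows 3..5, columns 2..4 of a 10x8 image
mask = np.zeros((10, 8), dtype=bool)
mask[3:6, 2:5] = True
box = vertebra_bbox(mask, margin_px=1)  # -> (0, 7, 2, 6)
```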
Fig. 3 is a schematic view of a vertebral body wrapping rectangle according to an embodiment of the present application. As shown in fig. 3, label A marks a vertebral body on the two-dimensional base image, and label B marks that vertebral body's wrapping rectangle.
Illustratively, after each vertebral body on the two-dimensional base image is marked with its wrapping rectangle as shown in fig. 3, the corresponding first target two-dimensional image is obtained. The same wrapping rectangles are then marked on the corresponding vertebral body regions of the two-dimensional X-ray image to obtain the second target two-dimensional image.
Fig. 4 is a schematic architecture diagram of yet another processing flow provided in an embodiment of the present application. As shown in fig. 4, the two-dimensional X-ray image X is processed first: the vertebral body detection module processes the two-dimensional X-ray image X to obtain a two-dimensional X-ray image X marked with vertebral body labels (i.e., the second target two-dimensional image), and the two-dimensional base image Im is then processed based on that labelled image to obtain a two-dimensional base image Im marked with vertebral body labels (i.e., the first target two-dimensional image).
Specifically, during processing, for each vertebral body on the two-dimensional X-ray image, the vertebral body boundary is first determined using a preset boundary determination method, and wrapping rectangles are marked on the two-dimensional X-ray image based on those boundaries to obtain the second target two-dimensional image; the same wrapping rectangles are then marked on the corresponding vertebral body regions of the two-dimensional base image to obtain the first target two-dimensional image. The specific implementation is similar to the above and is not repeated here.
By this point, the initial positioning transformation matrix M0 of the three-dimensional CT image, the first target two-dimensional image corresponding to the three-dimensional CT image, and the second target two-dimensional image corresponding to the two-dimensional X-ray image have all been obtained.
S103, inputting the first target two-dimensional image and the second target two-dimensional image into a preset neural network model, and outputting rotation and translation parameters.
The medical image registration apparatus of the present application calls a preset neural network model and inputs the obtained first target two-dimensional image and second target two-dimensional image into it to obtain the rotation-translation parameters. The rotation-translation parameters can be represented either by six parameters or by a matrix; the six parameters comprise the rotation parameters (r1, r2, r3) and the translation parameters (tx, ty, tz).
Illustratively, the preset neural network model is a pre-trained model that processes the first and second target two-dimensional images to output the rotation-translation parameters (the training process is described in a later embodiment and not repeated here). For example, the model may be a convolutional neural network (CNN) or a classification network such as VGGNet, GoogLeNet, ResNet, or DenseNet. A deep-learning-based preset neural network model can output the rotation-translation parameters quickly and accurately, improving the efficiency and accuracy of image registration and, in turn, of subsequent surgical navigation.
Fig. 5 is a schematic structural diagram of a preset neural network model according to an embodiment of the present application. If the C-arm X-ray machine takes only one pose (i.e., there is only one set of imaging parameters) and the two-dimensional X-ray image is a single image, the network structure may consist only of the input layer, intermediate layer, and output layer shown in fig. 5. The input layer generally has 6 channels, each of size 512 x 512: the first three channels receive the first target two-dimensional image and the last three receive the second target two-dimensional image. The intermediate layer can be composed of several convolution, activation, and pooling layers, or can reuse an existing backbone such as ResNet or DenseNet; its last layer typically has 512 channels, each of size 7 x 7. The output layer converts the last intermediate layer, via convolution, full connection, and the like, into a 6 x 1 output, i.e., the rotation-translation parameters of the present application.
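A minimal sketch of such a single-pose network, assuming PyTorch; the layer depths are placeholders, and only the 6-channel 512 x 512 input, the 512 x 7 x 7 last intermediate feature map, and the 6-value output follow the text:

```python
import torch
import torch.nn as nn

class PoseRegressionNet(nn.Module):
    """Single-pose sketch: 6-channel input (3 channels per target
    image), a small conv backbone ending at 512 x 7 x 7, and a head
    regressing the six rotation-translation parameters."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=4, padding=3), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 512, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(7),          # -> 512 x 7 x 7, as in the text
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(512 * 7 * 7, 6))

    def forward(self, im_first, im_second):
        x = torch.cat([im_first, im_second], dim=1)  # 6-channel input
        return self.head(self.features(x))           # (r1, r2, r3, tx, ty, tz)

net = PoseRegressionNet()
params = net(torch.randn(1, 3, 512, 512), torch.randn(1, 3, 512, 512))
```

A real implementation would more likely swap `features` for a ResNet or DenseNet trunk, as the text suggests.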
In some examples, the imaging parameters of the C-arm X-ray machine corresponding to the two-dimensional X-ray image may be multiple sets, and a two-dimensional base image of the three-dimensional CT image may be generated from each set; in that case, the preset neural network model may also include multiple input layers and multiple intermediate layers.
Fig. 6 is a schematic structural diagram of yet another preset neural network model according to an embodiment of the present application. When the C-arm X-ray machine takes two poses, the network structure may be as shown in fig. 6, composed of input layer 11, input layer 12, intermediate layer 11, intermediate layer 12, a stitching layer, intermediate layer 2, and output layer 2. The stitching layer concatenates the outputs of intermediate layers 11 and 12, for example stitching two 512-channel 7 x 7 feature maps into one 1024-channel 7 x 7 feature map. Intermediate layer 2 consists of several (1 to 3) convolution, activation, pooling, or fully connected layers that feed the stitching layer's output into output layer 2. Output layer 2 converts the output of the last layer of intermediate layer 2, via convolution, full connection, and the like, into a 6 x 1 output, i.e., the rotation-translation parameters of the present application.
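A sketch of the two-pose variant under the same assumptions (PyTorch, placeholder backbone depths); only the per-pose branches, the 1024-channel stitching layer, and the 6-value output follow the text:

```python
import torch
import torch.nn as nn

class TwoPoseNet(nn.Module):
    """Two-pose sketch: one branch per C-arm pose, a stitching
    (concatenation) layer merging two 512 x 7 x 7 feature maps into
    1024 x 7 x 7, then a shared head producing the 6 parameters."""
    def __init__(self):
        super().__init__()
        def branch():  # placeholder backbone per pose
            return nn.Sequential(
                nn.Conv2d(6, 64, 7, stride=4, padding=3), nn.ReLU(),
                nn.Conv2d(64, 512, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(7),
            )
        self.branch_a, self.branch_b = branch(), branch()
        self.head = nn.Sequential(                     # "intermediate layer 2"
            nn.Conv2d(1024, 256, 3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(256 * 7 * 7, 6),   # "output layer 2"
        )

    def forward(self, pair_a, pair_b):
        # Each input is a 6-channel image pair for one pose
        f = torch.cat([self.branch_a(pair_a), self.branch_b(pair_b)], dim=1)
        return self.head(f)

net = TwoPoseNet()
out = net(torch.randn(1, 6, 64, 64), torch.randn(1, 6, 64, 64))
```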
By analogy, when the C-arm X-ray machine takes more poses, there are correspondingly more input layers and intermediate layers, and the first and second target two-dimensional images are fed to the input layers in groups. The poses of the C-arm X-ray machine are not limited; for example, imaging parameters may be taken in the anteroposterior pose and in the lateral pose, the corresponding first and second target two-dimensional images input into the anteroposterior and lateral branches of the preset neural network model, and the rotation-translation parameters output through the fusion network.
S104, determining a registration transformation matrix according to the initial positioning transformation matrix and the rotation translation parameters; wherein the registration transformation matrix is used for registering the target site.
Illustratively, the rotation-translation parameters include the rotation parameters (r1, r2, r3) and the translation parameters (tx, ty, tz). When determining the registration transformation matrix, these six parameters are first converted, and the registration transformation matrix is then determined from the converted rotation-translation parameters and the initial positioning transformation matrix.
In one example, determining the registration transformation matrix from the initial localization transformation matrix and the rotational translation parameters may include:
S100, converting the rotation-translation parameters into a rotation-translation matrix.
S200, calculating the product of the initial positioning transformation matrix and the rotation translation matrix, and determining the product as a registration transformation matrix.
Illustratively, the rotation-translation matrix may be denoted M1 and the registration transformation matrix M.
The rotation elements R11, R12, R13, R21, R22, R23, R31, R32, R33 can be calculated by the following formulas:

R11 = cos(r2)cos(r3)
R12 = cos(r1)sin(r3) + sin(r1)sin(r2)cos(r3)
R13 = sin(r1)sin(r3) - cos(r1)sin(r2)cos(r3)
R21 = -cos(r2)sin(r3)
R22 = cos(r1)cos(r3) - sin(r1)sin(r2)sin(r3)
R23 = sin(r1)cos(r3) + cos(r1)sin(r2)sin(r3)
R31 = sin(r2)
R32 = -sin(r1)cos(r2)
R33 = cos(r1)cos(r2)
The registration transformation matrix can then be calculated as M = M1 * M0. This yields the registration transformation matrix used to register the target site. Based on it, the coordinates of the preoperative plan can be mapped to the intraoperative position, guiding the surgical robot or assisting the doctor in completing a high-precision operation.
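The conversion from the six parameters to M1, and the product M = M1 * M0, can be written directly from the formulas above (NumPy sketch; the identity M0 is a placeholder for the real initial positioning transformation matrix):

```python
import numpy as np

def rotation_translation_matrix(r1, r2, r3, tx, ty, tz):
    """Build the 4x4 homogeneous rotation-translation matrix M1 from
    the six network outputs, using the R11..R33 formulas above."""
    c1, s1 = np.cos(r1), np.sin(r1)
    c2, s2 = np.cos(r2), np.sin(r2)
    c3, s3 = np.cos(r3), np.sin(r3)
    R = np.array([
        [ c2 * c3,  c1 * s3 + s1 * s2 * c3,  s1 * s3 - c1 * s2 * c3],
        [-c2 * s3,  c1 * c3 - s1 * s2 * s3,  s1 * c3 + c1 * s2 * s3],
        [ s2,      -s1 * c2,                 c1 * c2               ],
    ])
    M1 = np.eye(4)
    M1[:3, :3] = R
    M1[:3, 3] = (tx, ty, tz)
    return M1

M0 = np.eye(4)                       # placeholder initial positioning matrix
M1 = rotation_translation_matrix(0.0, 0.0, 0.0, 5.0, -2.0, 1.0)
M = M1 @ M0                          # registration transformation matrix
```

With zero rotation angles the rotation block reduces to the identity, so M simply carries the translation, which is a quick sanity check on the formulas.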
The medical image registration method provided by the embodiments of the present application thus comprises: acquiring a three-dimensional CT image and a two-dimensional X-ray image of a target site of a target object, where the three-dimensional CT image is acquired before the target site is treated, the two-dimensional X-ray image is acquired during treatment, and both include multiple consecutive vertebral bodies; processing the two images to obtain the initial positioning transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image, and a second target two-dimensional image corresponding to the two-dimensional X-ray image, both marked with vertebral body labels; inputting the two target images into a preset neural network model and outputting rotation-translation parameters; and determining the registration transformation matrix, used to register the target site, from the initial positioning transformation matrix and the rotation-translation parameters. By introducing a deep-learning neural network model to calculate the rotation-translation parameters, the required registration transformation matrix is obtained quickly and accurately, greatly shortening registration time and improving accuracy; the preoperative plan can then be mapped quickly and precisely to the patient's actual position, improving the efficiency and accuracy of surgical navigation.
Next, a brief description will be given of a process of training a preset neural network model.
Fig. 7 is a schematic architecture diagram of a model training process according to an embodiment of the present application. As shown in fig. 7, when training the preset neural network model, a training data set is first generated from three-dimensional CT images. To generate it, the vertebral bodies in a preoperative three-dimensional CT image are segmented, and each vertebral body then undergoes transformations such as whole-image rotation-translation, single-vertebra rotation, DRR projection, and style transfer, yielding a transformed three-dimensional CT image. A two-dimensional base image Im is generated by projecting the CT image before transformation, and a two-dimensional X-ray image X by projecting it after transformation; the rotation-translation parameters corresponding to the transformation serve as the gold standard for the CNN model's learned inference. Repeating this process produces the training data set. A training set generated this way is well founded and provides strong support for model training.
The vertebral body detection module then labels the two-dimensional base images Im and two-dimensional X-ray images X in the training data set to obtain labelled versions of both, i.e., the first and second target two-dimensional images of the foregoing embodiments; the process is similar to that described above and is not repeated here.
Finally, the labelled two-dimensional base image Im and two-dimensional X-ray image X are input into the CNN model, which outputs rotation-translation parameters. These are compared with the gold-standard rotation-translation parameters, a loss value is calculated from the loss function, and the CNN model is iteratively updated through backpropagation until a trained preset neural network model is obtained. Different loss functions can be set to better optimize the model iteratively; this application does not limit the choice.
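A minimal sketch of one such training iteration, assuming PyTorch; the L1 loss, the Adam optimizer, and the toy stand-in model are all assumptions, since the text leaves the loss function and the backbone open:

```python
import torch
import torch.nn as nn

def train_step(model, optimizer, im_base, im_xray, gold_params):
    """One training iteration: forward the labelled image pair,
    compare predicted rotation-translation parameters to the
    gold-standard ones, and backpropagate."""
    optimizer.zero_grad()
    pred = model(torch.cat([im_base, im_xray], dim=1))  # 6-channel input
    loss = nn.functional.l1_loss(pred, gold_params)     # assumed loss
    loss.backward()                 # backpropagate gradients
    optimizer.step()                # iterative update of the CNN weights
    return loss.item()

# Toy stand-in model: 6-channel input -> 6 parameters
model = nn.Sequential(nn.Conv2d(6, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 6))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_val = train_step(model, opt,
                      torch.randn(2, 3, 32, 32), torch.randn(2, 3, 32, 32),
                      torch.zeros(2, 6))
```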
The following are device embodiments of the present application, which may be used to perform method embodiments of the present application. For details not disclosed in the device embodiments of the present application, please refer to the method embodiments of the present application.
Fig. 8 is a schematic structural diagram of a medical image registration apparatus according to an embodiment of the present application. As shown in fig. 8, the medical image registration apparatus 80 provided in the embodiment of the present application includes an acquisition unit 801, a processing unit 802, a model calling unit 803, and a determination unit 804.
Wherein, the acquiring unit 801 is configured to acquire a three-dimensional CT image and a two-dimensional X-ray image of a target portion of a target object; the three-dimensional CT image is an image acquired before the target part is processed, the two-dimensional X-ray image is an image acquired in the process of processing the target part, and the three-dimensional CT image and the two-dimensional X-ray image both comprise a plurality of sections of continuous vertebral bodies.
The processing unit 802 is configured to process the three-dimensional CT image and the two-dimensional X-ray image to obtain an initial positioning transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image, and a second target two-dimensional image corresponding to the two-dimensional X-ray image; the first target two-dimensional image and the second target two-dimensional image are marked with vertebral body labels.
The model calling unit 803 is configured to input the first target two-dimensional image and the second target two-dimensional image into a preset neural network model, and output rotation and translation parameters.
A determining unit 804, configured to determine a registration transformation matrix according to the initial positioning transformation matrix and the rotation translation parameter; wherein the registration transformation matrix is used for registering the target site.
The device provided in this embodiment may be used to perform the method of the foregoing embodiment, and its implementation principle and technical effects are similar, and will not be described herein again.
Fig. 9 is a schematic structural diagram of still another medical image registration apparatus according to an embodiment of the present application. As shown in fig. 9, the medical image registration apparatus 90 provided in the embodiment of the present application includes an acquisition unit 901, a processing unit 902, a model calling unit 903, and a determination unit 904.
The acquiring unit 901 is configured to acquire a three-dimensional CT image and a two-dimensional X-ray image of a target portion of a target object; the three-dimensional CT image is an image acquired before the target part is processed, the two-dimensional X-ray image is an image acquired in the process of processing the target part, and the three-dimensional CT image and the two-dimensional X-ray image both comprise a plurality of sections of continuous vertebral bodies.
The processing unit 902 is configured to process the three-dimensional CT image and the two-dimensional X-ray image to obtain an initial positioning transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image, and a second target two-dimensional image corresponding to the two-dimensional X-ray image; the first target two-dimensional image and the second target two-dimensional image are marked with vertebral body labels.
The model invoking unit 903 is configured to input the first target two-dimensional image and the second target two-dimensional image into a preset neural network model, and output rotation and translation parameters.
A determining unit 904, configured to determine a registration transformation matrix according to the initial positioning transformation matrix and the rotation translation parameter; wherein the registration transformation matrix is used for registering the target site.
In one example, the processing unit 902 includes a first processing module 9021, a second processing module 9022, and a third processing module 9023.
The first processing module 9021 is configured to perform initial positioning processing on the three-dimensional CT image, and obtain an initial positioning transformation matrix of the three-dimensional CT image.
The second processing module 9022 is configured to generate a two-dimensional base image of the three-dimensional CT image based on imaging parameters of the C-arm X-ray machine corresponding to the two-dimensional X-ray image.
The third processing module 9023 is configured to perform vertebral body detection processing on the two-dimensional base image and the two-dimensional X-ray image, and to mark vertebral body labels on each, obtaining the first target two-dimensional image and the second target two-dimensional image.
In one example, the first processing module 9021 is specifically configured to determine coordinates of a keypoint of the target region, and determine coordinates of a location point on the three-dimensional CT image that is the same as the keypoint; and determining an initial localization transformation matrix of the three-dimensional CT image based on the coordinates of the key points and the coordinates of the same position points as the key points.
In one example, the second processing module 9022 is specifically configured to set the positions of the emission light source and the receiver of the C-arm X-ray machine and the position of the three-dimensional CT image according to the imaging parameters of the C-arm X-ray machine corresponding to the two-dimensional X-ray image; and a two-dimensional basic image of the three-dimensional CT image is generated by adopting a digital reconstruction radiographic image method for projection.
In one example, the vertebral body label includes a wrapping rectangle, and the third processing module 9023 is specifically configured to: for each vertebral body on the two-dimensional base image, determine its boundary using a preset boundary determination method, mark wrapping rectangles on the two-dimensional base image based on those boundaries to obtain the first target two-dimensional image, and mark the same wrapping rectangles on the corresponding vertebral body regions of the two-dimensional X-ray image to obtain the second target two-dimensional image; or, for each vertebral body on the two-dimensional X-ray image, determine its boundary using a preset boundary determination method, mark wrapping rectangles on the two-dimensional X-ray image based on those boundaries to obtain the second target two-dimensional image, and mark the same wrapping rectangles on the corresponding vertebral body regions of the two-dimensional base image to obtain the first target two-dimensional image.
In one example, the determination unit 904 includes a conversion module 9041 and a calculation module 9042.
A conversion module 9041 for converting the rotational translation parameters into a rotational translation matrix.
The calculating module 9042 is configured to calculate a product of the initial positioning transformation matrix and the rotation translation matrix, and determine the product as the registration transformation matrix.
In one example, the imaging parameters of the C-arm X-ray machine corresponding to the two-dimensional X-ray image can be multiple groups, and a two-dimensional basic image of the corresponding three-dimensional CT image can be generated based on each group of imaging parameters; when imaging parameters are multiple groups, the preset neural network model comprises a plurality of input layers and a plurality of middle layers.
The device provided in this embodiment may be used to perform the method of the foregoing embodiment, and its implementation principle and technical effects are similar, and will not be described herein again.
It should be understood that the division of the above apparatus into modules is merely a division by logical function; in actual implementation the modules may be fully or partially integrated into one physical entity or physically separated. These modules may all be implemented as software invoked by a processing element, all in hardware, or partly as software invoked by a processing element and partly in hardware. The functions of the above processing modules may be stored as program code in the memory of the apparatus and called and executed by a processing element of the apparatus; the other modules are implemented similarly. In addition, all or part of the modules can be integrated together or implemented independently. The processing element here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit in hardware in the processor element or by instructions in software form.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 10, the electronic device 100 includes: a processor 1001, and a memory 1002 communicatively coupled to the processor.
The memory 1002 stores computer-executable instructions; the processor 1001 executes the computer-executable instructions stored in the memory 1002 to implement the method of any of the foregoing embodiments.
In the specific implementation of the electronic device described above, it should be understood that the processor may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The method disclosed in connection with the embodiments of the present application may be embodied directly in hardware processor execution or in a combination of hardware and software modules in a processor.
Embodiments of the present application also provide a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, implement the method of any of the foregoing embodiments.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The foregoing program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, and magnetic or optical disks.
Embodiments of the present application also provide a computer program product comprising a computer program that, when executed by a processor, implements the method of any of the foregoing embodiments.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method of medical image registration, the method comprising:
acquiring a three-dimensional CT image and a two-dimensional X-ray image of a target part of a target object; the three-dimensional CT image is an image acquired before the target part is processed, the two-dimensional X-ray image is an image acquired in the process of processing the target part, and the three-dimensional CT image and the two-dimensional X-ray image both comprise a plurality of sections of continuous vertebral bodies;
processing the three-dimensional CT image and the two-dimensional X-ray image to obtain an initial positioning transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image and a second target two-dimensional image corresponding to the two-dimensional X-ray image; wherein the first target two-dimensional image and the second target two-dimensional image are marked with vertebral body labels;
inputting the first target two-dimensional image and the second target two-dimensional image into a preset neural network model, and outputting rotation translation parameters;
determining a registration transformation matrix according to the initial positioning transformation matrix and the rotation translation parameters; the registration transformation matrix is used for registering the target site.
2. The method of claim 1, wherein processing the three-dimensional CT image and the two-dimensional X-ray image to obtain an initial localization transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image, and a second target two-dimensional image corresponding to the two-dimensional X-ray image, comprises:
performing initial positioning processing on the three-dimensional CT image to obtain an initial positioning transformation matrix of the three-dimensional CT image;
generating a two-dimensional base image of the three-dimensional CT image by projection, based on imaging parameters of a C-arm X-ray machine corresponding to the two-dimensional X-ray image;
and performing vertebral body detection processing on the two-dimensional base image and the two-dimensional X-ray image, and marking vertebral body labels on the two-dimensional base image and the two-dimensional X-ray image to obtain the first target two-dimensional image and the second target two-dimensional image.
3. The method of claim 2, wherein performing initial positioning processing on the three-dimensional CT image to obtain an initial positioning transformation matrix of the three-dimensional CT image comprises:
determining coordinates of key points of the target part, and determining coordinates of the corresponding position points on the three-dimensional CT image; and
determining the initial positioning transformation matrix of the three-dimensional CT image based on the coordinates of the key points and the coordinates of the corresponding position points.
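The initial positioning step in claim 3 amounts to estimating a rigid transform from paired point correspondences. A minimal sketch of one standard way to do this — the Kabsch (SVD) algorithm in NumPy; the function and variable names are illustrative, not from the patent:

```python
import numpy as np

def initial_transform(keypoints_target, keypoints_ct):
    """Least-squares rigid 4x4 transform mapping CT keypoints onto the
    target-part keypoints (Kabsch algorithm)."""
    src = np.asarray(keypoints_ct, dtype=float)
    dst = np.asarray(keypoints_target, dtype=float)
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

Given at least three non-collinear keypoint pairs, `T` maps homogeneous CT coordinates into the target coordinate frame.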
4. The method of claim 2, wherein generating a two-dimensional basis image of the three-dimensional CT image based on imaging parameters of a C-arm X-ray machine corresponding to the two-dimensional X-ray image comprises:
setting the positions of the X-ray source and the detector of the C-arm X-ray machine and the position of the three-dimensional CT image according to the imaging parameters of the C-arm X-ray machine corresponding to the two-dimensional X-ray image;
and generating the two-dimensional basic image of the three-dimensional CT image by projection using the digitally reconstructed radiograph (DRR) method.
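A DRR integrates CT attenuation along rays from the source to each detector pixel. A real C-arm DRR casts divergent (cone-beam) rays per the imaging geometry in claim 4; the toy sketch below uses parallel rays along one volume axis purely to illustrate the line-integral idea — all names are illustrative:

```python
import numpy as np

def parallel_drr(ct_volume, axis=0, voxel_size=1.0):
    """Toy digitally reconstructed radiograph: integrate attenuation along
    parallel rays (a real C-arm DRR traces divergent rays from the X-ray
    source to each detector pixel)."""
    line_integrals = ct_volume.sum(axis=axis) * voxel_size
    # Beer-Lambert: transmitted fraction is exp(-integral); invert so
    # dense structures such as bone appear bright
    return 1.0 - np.exp(-line_integrals)
```

Projecting along a different `axis` crudely mimics a different C-arm view, e.g. anteroposterior versus lateral.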
5. The method of claim 2, wherein the cone label comprises a cone bounding rectangle, and wherein performing cone detection processing on the two-dimensional basic image and the two-dimensional X-ray image, and marking the cone label on the two-dimensional basic image and the two-dimensional X-ray image to obtain the first target two-dimensional image and the second target two-dimensional image, respectively, comprises:
determining the cone boundary of each cone on the two-dimensional basic image based on a preset boundary determination mode, marking cone bounding rectangles on the two-dimensional basic image based on the cone boundaries to obtain the first target two-dimensional image, and marking the same cone bounding rectangles on the position areas of the corresponding cones on the two-dimensional X-ray image to obtain the second target two-dimensional image;
or, determining the cone boundary of each cone on the two-dimensional X-ray image based on a preset boundary determination mode, marking cone bounding rectangles on the two-dimensional X-ray image based on the cone boundaries to obtain the second target two-dimensional image, and marking the same cone bounding rectangles on the position areas of the corresponding cones on the two-dimensional basic image to obtain the first target two-dimensional image.
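The label transfer in claim 5 — find each cone's rectangle on one image, then stamp the same rectangle onto the corresponding region of the other image — can be sketched as follows. The helper names and the use of a binary cone mask are assumptions for illustration:

```python
import numpy as np

def bounding_rectangle(mask):
    """Axis-aligned bounding rectangle (x0, y0, x1, y1) of a binary cone mask."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

def mark_rectangle(image, rect, value=255):
    """Draw the rectangle outline onto a copy of the image."""
    x0, y0, x1, y1 = rect
    out = image.copy()
    out[y0, x0:x1 + 1] = value      # top edge
    out[y1, x0:x1 + 1] = value      # bottom edge
    out[y0:y1 + 1, x0] = value      # left edge
    out[y0:y1 + 1, x1] = value      # right edge
    return out
```

Detecting on the DRR (basic image) and marking the identical rectangle on the X-ray — or vice versa, as the claim allows — presumes the two images share the same C-arm imaging geometry, which is what the projection step in claim 2 establishes.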
6. The method according to any one of claims 1-5, wherein determining the registration transformation matrix according to the initial positioning transformation matrix and the rotation-translation parameters comprises:
converting the rotation-translation parameters into a rotation-translation matrix; and
calculating the product of the initial positioning transformation matrix and the rotation-translation matrix, and determining the product as the registration transformation matrix.
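The conversion and product in claim 6 are a few lines of linear algebra. This sketch assumes the network's six outputs are Euler angles (rx, ry, rz) in radians plus a translation (tx, ty, tz), composed in a Z-Y-X convention — the patent does not fix these conventions, so they are assumptions:

```python
import numpy as np

def to_rt_matrix(angles_rad, translation):
    """Convert rotation-translation parameters (rx, ry, rz, tx, ty, tz)
    into a homogeneous 4x4 matrix (Z-Y-X Euler convention assumed)."""
    rx, ry, rz = angles_rad
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    M = np.eye(4)
    M[:3, :3] = Rz @ Ry @ Rx
    M[:3, 3] = translation
    return M

def registration_matrix(initial_T, angles_rad, translation):
    """Registration transform = initial positioning matrix @ rotation-translation matrix."""
    return initial_T @ to_rt_matrix(angles_rad, translation)
```

With zero angles and zero translation the correction matrix is the identity, so the registration matrix reduces to the initial positioning matrix, as expected.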
7. The method according to any one of claims 1-5, wherein there may be multiple sets of imaging parameters of the C-arm X-ray machine corresponding to the two-dimensional X-ray image, and a corresponding two-dimensional basic image of the three-dimensional CT image may be generated based on each set of imaging parameters; when there are multiple sets of imaging parameters, the preset neural network model comprises a plurality of input layers and a plurality of intermediate layers.
8. A medical image registration apparatus, the apparatus comprising:
an acquisition unit for acquiring a three-dimensional CT image and a two-dimensional X-ray image of a target part of a target object, wherein the three-dimensional CT image is acquired before the target part is treated, the two-dimensional X-ray image is acquired while the target part is being treated, and the three-dimensional CT image and the two-dimensional X-ray image both comprise a plurality of sections of continuous vertebral bodies;
a processing unit for processing the three-dimensional CT image and the two-dimensional X-ray image to obtain an initial positioning transformation matrix of the three-dimensional CT image, a first target two-dimensional image corresponding to the three-dimensional CT image, and a second target two-dimensional image corresponding to the two-dimensional X-ray image, wherein the first target two-dimensional image and the second target two-dimensional image are marked with cone labels;
a model calling unit for inputting the first target two-dimensional image and the second target two-dimensional image into a preset neural network model and outputting rotation-translation parameters; and
a determining unit for determining a registration transformation matrix according to the initial positioning transformation matrix and the rotation-translation parameters, wherein the registration transformation matrix is used for registering the target part.
9. An electronic device, the electronic device comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored in the memory to implement the method of any one of claims 1-7.
10. A computer-readable storage medium having computer-executable instructions stored therein which, when executed by a processor, implement the method of any one of claims 1-7.
CN202311586601.0A 2023-11-24 2023-11-24 Medical image registration method, device, equipment and storage medium Pending CN117635656A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311586601.0A CN117635656A (en) 2023-11-24 2023-11-24 Medical image registration method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117635656A 2024-03-01

Family

ID=90017403

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311586601.0A Pending CN117635656A (en) 2023-11-24 2023-11-24 Medical image registration method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117635656A (en)

Similar Documents

Publication Publication Date Title
US11989338B2 (en) Using optical codes with augmented reality displays
US10154823B2 (en) Guiding system for positioning a patient for medical imaging
US8145012B2 (en) Device and process for multimodal registration of images
JP2022526989A (en) Spatial registration between the tracking system and the image using 2D image projection
JP5243754B2 (en) Image data alignment
CN105520716B (en) Real-time simulation of fluoroscopic images
Jiang et al. Registration technology of augmented reality in oral medicine: A review
JP2008526270A (en) Improved data representation for RTP
JP2009022754A (en) Method for correcting registration of radiography images
JP2017035469A (en) Image processing apparatus, image processing method, and program
CN107752979B (en) Automatic generation method of artificial projection, medium and projection image determination device
EP2259726A1 (en) Respiration determination apparatus
WO2019073681A1 (en) Radiation imaging device, image processing method, and image processing program
WO2021111223A1 (en) Registration of an image with a tracking system
US11278742B2 (en) Image guided treatment delivery
US9254106B2 (en) Method for completing a medical image data set
KR102619994B1 (en) Biomedical image processing devices, storage media, biomedical devices, and treatment systems
EP4287120A1 (en) Guidance during medical procedures
WO2023232492A1 (en) Guidance during medical procedures
CN117635656A (en) Medical image registration method, device, equipment and storage medium
JP2017225487A (en) Radiotherapy support system, image generation method, and image generation program
CN112102225A (en) Registration apparatus, method for registration, and computer-readable storage medium
TWI786667B (en) Method and device for generating three-dimensional image data of human body skeletal joints
CN117677358A (en) Augmented reality system and method for stereoscopic projection and cross-referencing of intra-operative field X-ray fluoroscopy and C-arm computed tomography imaging
JP2024005113A (en) Image processing device, method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination