CN113781593A - Four-dimensional CT image generation method and device, terminal device and storage medium - Google Patents

Four-dimensional CT image generation method and device, terminal device and storage medium

Info

Publication number
CN113781593A
Authority
CN
China
Prior art keywords: image, dimensional, time phase, target user, intraoperative
Prior art date
Legal status
Granted
Application number
CN202110938287.2A
Other languages
Chinese (zh)
Other versions
CN113781593B
Inventor
邓金城 (Deng Jincheng)
Current Assignee
Shenying Medical Technology Shenzhen Co ltd
Original Assignee
Shenying Medical Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Shenying Medical Technology Shenzhen Co ltd
Priority to CN202110938287.2A
Publication of CN113781593A
Application granted
Publication of CN113781593B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/003: Reconstruction from projections, e.g. tomography
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10072: Tomographic images
    • G06T2207/10076: 4D tomography; Time-sequential 3D tomography

Abstract

The application belongs to the technical field of image processing and provides a method and an apparatus for generating a four-dimensional CT image, a terminal device, and a storage medium. The generation method comprises the following steps: when a target user is in an operation and the respiratory state of the target user is a reference respiratory state, acquiring a reference three-dimensional CT image obtained by a CT device scanning the target user; determining, from a preoperative four-dimensional CT image, the first three-dimensional CT image of the reference time phase corresponding to the reference respiratory state; performing image registration on the reference three-dimensional CT image and the first three-dimensional CT image of the reference time phase to obtain image registration parameters; transforming the first three-dimensional CT images of the remaining time phases according to the image registration parameters to obtain second three-dimensional CT images of the remaining time phases; and generating an intraoperative four-dimensional CT image from the reference three-dimensional CT image and the second three-dimensional CT images of the remaining time phases. By means of the present application, an intraoperative four-dimensional CT image can be generated.

Description

Four-dimensional CT image generation method and device, terminal device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for generating a four-dimensional CT image, a terminal device, and a storage medium.
Background
In order to eliminate or reduce the influence of respiratory motion artifacts on computed tomography (CT) of the chest and abdominal organs, to reflect how these organs change over time, and to achieve accurate diagnosis and treatment, Ichikawa et al. proposed the concept of 4D-CT in 2000: a time factor is incorporated into the three-dimensional reconstruction of CT images to form four-dimensional CT images, i.e., 4D-CT.
In the prior art, three-dimensional CT images and a respiratory signal are acquired synchronously before an operation, and each acquired three-dimensional CT image is labeled with its time phase in the respiratory cycle, yielding a three-dimensional CT image for each time phase. The three-dimensional CT images of the time phases form a three-dimensional image sequence that changes over time, which constitutes the preoperative four-dimensional CT image. However, because the patient's posture and positioning during the operation differ from those at the time the preoperative four-dimensional CT image was acquired, the preoperatively formed four-dimensional CT image cannot be applied during the operation.
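The preoperative sorting described above can be sketched as follows. This is an illustrative, hypothetical helper (not taken from the patent), assuming the respiratory signal has been reduced to the timestamps at which successive breathing cycles start:

```python
def bin_slices_by_phase(slice_times, cycle_starts, n_phases=10):
    """Label each 3D CT acquisition time with its respiratory time phase.

    slice_times:  acquisition timestamps of the 3D CT images (seconds)
    cycle_starts: timestamps of successive breathing-cycle starts (seconds)
    Returns one phase label in {0, ..., n_phases-1} per image, or None
    for images acquired outside the recorded cycles.
    """
    labels = []
    for t in slice_times:
        for start, end in zip(cycle_starts, cycle_starts[1:]):
            if start <= t < end:
                frac = (t - start) / (end - start)   # position within the cycle
                labels.append(int(frac * n_phases) % n_phases)
                break
        else:
            labels.append(None)  # no cycle contains this timestamp
    return labels
```

Grouping the images by label then yields the N time phases that together form the preoperative four-dimensional CT image.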
Disclosure of Invention
The application provides a method and an apparatus for generating a four-dimensional CT image, a terminal device, and a storage medium, so as to generate an intraoperative four-dimensional CT image.
In a first aspect, an embodiment of the present application provides a method for generating a four-dimensional CT image, where the method includes:
when a target user is in operation and the respiratory state of the target user is a reference respiratory state, acquiring a reference three-dimensional CT image obtained by scanning the target user by CT equipment;
determining a first three-dimensional CT image of a reference time phase from a preoperative four-dimensional CT image, wherein the preoperative four-dimensional CT image comprises first three-dimensional CT images of N time phases, N is an integer greater than 1, and the reference time phase refers to the time phase corresponding to the reference respiratory state;
carrying out image registration on the reference three-dimensional CT image and the first three-dimensional CT image of the reference time phase to obtain image registration parameters;
transforming the first three-dimensional CT images of the remaining time phases according to the image registration parameters to obtain second three-dimensional CT images of the remaining time phases, wherein the remaining time phases refer to the time phases other than the reference time phase among the N time phases;
generating an intraoperative four-dimensional CT image from the reference three-dimensional CT image and a second three-dimensional CT image of the remaining time phase, the reference three-dimensional CT image being the second three-dimensional CT image of the reference time phase in the intraoperative four-dimensional CT image.
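The steps of the first aspect can be sketched as a short pipeline. The function and parameter names below are illustrative assumptions, with the registration and transformation operations passed in as callables rather than implemented:

```python
def generate_intraop_4dct(preop_4dct, ref_image, ref_phase, register, transform):
    """Sketch of the first-aspect method.

    preop_4dct: dict mapping each of the N time phases to its first 3D CT image
    ref_image:  intraoperative reference 3D CT image (reference respiratory state)
    register:   callable (fixed, moving) -> image registration parameters
    transform:  callable (image, params) -> transformed image
    """
    params = register(ref_image, preop_4dct[ref_phase])       # registration step
    intraop = {ref_phase: ref_image}                          # reference phase reuses the scan
    for phase, image in preop_4dct.items():
        if phase != ref_phase:
            intraop[phase] = transform(image, params)         # remaining phases
    return intraop                                            # intraoperative 4D-CT
```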
In a second aspect, an embodiment of the present application provides a generation apparatus for a four-dimensional CT image, including:
the first acquisition module is used for acquiring a reference three-dimensional CT image obtained by scanning a target user by CT equipment when the target user is in operation and the respiratory state of the target user is a reference respiratory state;
a determination module, configured to determine a first three-dimensional CT image of a reference time phase from a preoperative four-dimensional CT image, where the preoperative four-dimensional CT image comprises first three-dimensional CT images of N time phases, N is an integer greater than 1, and the reference time phase refers to the time phase corresponding to the reference respiratory state;
the image registration module is used for carrying out image registration on the reference three-dimensional CT image and the first three-dimensional CT image of the reference time phase to obtain image registration parameters;
an image transformation module, configured to transform, according to the image registration parameter, a first three-dimensional CT image of a remaining time phase to obtain a second three-dimensional CT image of the remaining time phase, where the remaining time phase is a time phase of the N time phases except for the reference time phase;
an image generation module, configured to generate an intraoperative four-dimensional CT image according to the reference three-dimensional CT image and the second three-dimensional CT image of the remaining time phase, where the reference three-dimensional CT image is the second three-dimensional CT image of the reference time phase in the intraoperative four-dimensional CT image.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the generating method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the generating method according to the first aspect.
In a fifth aspect, the present application provides a computer program product, which when run on a terminal device, causes the terminal device to perform the steps of the generating method according to the first aspect.
As can be seen from the above, according to this scheme, when the target user is in an operation and the respiratory state of the target user is the reference respiratory state, a reference three-dimensional CT image obtained by the CT device scanning the target user can be acquired, and the first three-dimensional CT image of the reference time phase corresponding to the reference respiratory state is determined from the preoperative four-dimensional CT image. Image registration parameters are obtained by registering the reference three-dimensional CT image with the first three-dimensional CT image of the reference time phase. The first three-dimensional CT images of the remaining time phases are then transformed according to the image registration parameters to obtain the second three-dimensional CT images of the remaining time phases, and the intraoperative four-dimensional CT image is generated from the reference three-dimensional CT image and the second three-dimensional CT images of the remaining time phases.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a method for generating a four-dimensional CT image according to an embodiment of the present application;
FIG. 2 is an exemplary graph of a respiratory cycle;
FIG. 3 is an exemplary illustration of a first three-dimensional CT image for 10 phases;
fig. 4 is a schematic flow chart illustrating an implementation of a method for generating a four-dimensional CT image according to a second embodiment of the present application;
FIG. 5 is an exemplary illustration of a puncture path;
fig. 6 is a schematic structural diagram of a four-dimensional CT image generation apparatus according to a third embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Before explaining the present application, terms referred to in the present application will be explained for the convenience of the reader.
Rigid transformation refers to a transformation in which the distance between any two points of an image remains unchanged in the transformed image. A rigid transformation can be decomposed into steps such as translation and rotation.
Rigid registration determines the transformation parameters of a rigid transformation (the rotation and translation parameters along the three directions of the X, Y, and Z axes). Generally, an iterative method is used: the similarity between the two images (the reference image and the floating image) is maximized by continuously updating the rotation and translation parameters, finally yielding the optimal rotation and translation parameters, i.e., those at which the similarity is largest.
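As a minimal illustration of this similarity-maximization idea, the toy below registers a one-dimensional, translation-only case by exhaustive search instead of a true optimizer; it is purely a sketch, not the patent's method:

```python
def rigid_register_1d(fixed, moving, max_shift=3):
    """Return the integer shift s for which rolling `moving` by s best
    matches `fixed`, i.e. maximizes similarity (negative sum of squared
    differences)."""
    def ssd(a, b):
        # sum of squared differences; smaller means more similar
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def roll(seq, s):
        # circular shift of a list by s positions (like numpy.roll)
        s %= len(seq)
        return seq[-s:] + seq[:-s]

    # exhaustive search stands in for the iterative parameter updates
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: ssd(fixed, roll(moving, s)))
```

A real implementation would optimize six parameters (three rotations, three translations) over a 3-D volume with an iterative optimizer, but the structure — propose parameters, measure similarity, keep the best — is the same.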
Deformation transformation assumes that local deformation occurs in the image, so the distance between any two points of the image is not preserved in the transformed image. A B-spline transformation is typically used.
Deformation registration determines the transformation parameters of the deformation transformation (the displacements of the deformation control points). Generally, an iterative method is used: the similarity between the two images (the reference image and the floating image) is maximized by continuously updating the deformation field, finally yielding the optimal deformation field, i.e., the one at which the similarity is largest.
In addition, rigid registration is performed before deformation registration so that the two images are initially aligned.
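A minimal sketch of applying such a deformation field, in one dimension with linear interpolation (the patent operates on 3-D volumes; this toy version only illustrates the idea of a per-voxel displacement):

```python
def apply_deformation_1d(image, field):
    """Warp a 1-D image with a deformation field.

    field[i] is the (possibly fractional) displacement telling output
    voxel i to read from image[i + field[i]], with linear interpolation
    and clamping at the image edges.
    """
    n = len(image)
    out = []
    for i, d in enumerate(field):
        x = min(max(i + d, 0), n - 1)   # clamp sample position to the image
        lo = int(x)
        hi = min(lo + 1, n - 1)
        w = x - lo                      # interpolation weight
        out.append((1 - w) * image[lo] + w * image[hi])
    return out
```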
In particular implementations, the terminal devices described in the embodiments of the present application include, but are not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (e.g., touch-screen displays and/or touch pads). It should also be understood that in some embodiments the device is not a portable communication device but a desktop computer having a touch-sensitive surface (e.g., a touch-screen display and/or a touchpad).
In the discussion that follows, a terminal device that includes a display and a touch-sensitive surface is described. However, it should be understood that the terminal device may include one or more other physical user interface devices such as a physical keyboard, mouse, and/or joystick.
The terminal device supports various applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disc burning application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications that may be executed on the terminal device may use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the terminal can be adjusted and/or changed between applications and/or within respective applications. In this way, a common physical architecture (e.g., touch-sensitive surface) of the terminal can support various applications with user interfaces that are intuitive and transparent to the user.
It should be understood that the sequence numbers of the steps in this embodiment do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 is a schematic flow chart illustrating an implementation process of a generation method of a four-dimensional CT image according to an embodiment of the present application, where the generation method is applied to a terminal device. As shown in fig. 1, the generation method may include the steps of:
step 101, when a target user is in operation and the respiratory state of the target user is a reference respiratory state, acquiring a reference three-dimensional CT image obtained by scanning the target user by a CT device.
The target user may refer to any user who needs to be scanned by the CT device, for example a patient with lesions in the chest and abdominal organs.
The reference respiratory state may refer to a preset respiratory state; among all respiratory states, one that is easy to reproduce may be used as the reference respiratory state.
Because the end of expiration is a position that is easy to reproduce, the target user can hold his or her breath at the end of expiration, and scanning the target user with the CT device at that moment yields the reference three-dimensional CT image.
In order to acquire the reference three-dimensional CT image accurately, while the target user is scanned by the CT device, the target user's breathing can be monitored by a respiration monitoring device to obtain a respiratory signal, which is sent to the terminal device. The terminal device judges from the respiratory signal whether the target user's respiratory state is the end of expiration, and when it is, the CT device scans the target user to obtain the reference three-dimensional CT image. The respiration monitoring device may be any device capable of monitoring the user's breathing, for example one that measures the breathing volume by spirometry, measures the height change of the body surface following respiration with an infrared imaging device, or measures the pressure difference caused by respiration with a pressure sensor, and converts these measurements into a respiratory signal.
The terminal device can extract the expiratory volume of the target user from the respiratory signal; when the expiratory volume is zero or less than an expiratory volume threshold, the respiratory state of the target user is judged to be the end of expiration, and otherwise it is not.
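This rule can be written down directly; the helpers below are a hypothetical sketch (the threshold value is assumed for illustration and is not given in the patent):

```python
def is_end_of_expiration(expiratory_volume, threshold=0.05):
    """End of expiration per the rule above: the expiratory volume is
    zero or below a threshold (assumed value, not from the patent)."""
    return 0 <= expiratory_volume <= threshold

def gating_indices(expiratory_volumes, threshold=0.05):
    """Indices of respiratory-signal samples at which the CT scan of the
    reference image could be triggered."""
    return [i for i, v in enumerate(expiratory_volumes)
            if is_end_of_expiration(v, threshold)]
```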
It should be noted that, after the reference three-dimensional CT image is obtained by the CT device, the reference three-dimensional CT image may be sent to the terminal device, so that the terminal device can obtain the reference three-dimensional CT image.
Step 102, determining a first three-dimensional CT image of a reference time phase from the preoperative four-dimensional CT image.
The preoperative four-dimensional CT image comprises first three-dimensional CT images of N time phases, wherein N is an integer larger than 1, and the reference time phase refers to the time phase corresponding to the reference respiratory state.
In this embodiment, one breathing cycle is generally divided into N time phases, so the four-dimensional CT image can likewise be divided into N time phases according to those of one breathing cycle, each time phase having a corresponding respiratory state. For example, a four-dimensional CT image may be divided into 10 phases: 0%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, and 90%. It is generally specified that the 0% phase corresponds to the end of inspiration and the 50% phase to the end of expiration. When the reference respiratory state is the end of expiration, the reference time phase is the 50% phase. An exemplary graph of a breathing cycle is shown in fig. 2, where one breathing cycle is divided into 10 phases. Fig. 3 is an exemplary diagram of the first three-dimensional CT images of the 10 phases.
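The correspondence between reference respiratory state and reference time phase can be expressed as a small lookup; the state names here are illustrative assumptions:

```python
def phase_for_state(state, n_phases=10):
    """Map a reference respiratory state to its time phase (as a percent),
    following the convention above: the 0% phase is the end of inspiration
    and the 50% phase is the end of expiration."""
    step = 100 // n_phases                      # e.g. 10% per phase
    table = {"end_of_inspiration": 0,
             "end_of_expiration": n_phases // 2}
    return table[state] * step
```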
The method for acquiring the preoperative four-dimensional CT image can be referred to the related introduction of the background art, and is not described herein again.
And 103, carrying out image registration on the reference three-dimensional CT image and the first three-dimensional CT image of the reference time phase to obtain image registration parameters.
Since the reference three-dimensional CT image is acquired while the target user is in the operation and the respiratory state is the reference respiratory state, the reference three-dimensional CT image can be understood as the second three-dimensional CT image of the reference time phase corresponding to that state. Therefore, the image registration parameters for transforming the preoperative four-dimensional CT image into the intraoperative four-dimensional CT image can be obtained by registering the reference three-dimensional CT image with the first three-dimensional CT image of the reference time phase.
As an alternative embodiment, the image registration of the reference three-dimensional CT image and the first three-dimensional CT image of the reference phase, and obtaining the image registration parameter includes:
and carrying out rigid registration and deformation registration on the reference three-dimensional CT image and the first three-dimensional CT image of the reference time phase in sequence to obtain a translation parameter and a rotation parameter corresponding to the rigid registration and a deformation field corresponding to the deformation registration.
In this embodiment, the reference three-dimensional CT image may be used as the reference image. Rigid registration of the reference three-dimensional CT image with the first three-dimensional CT image of the reference time phase yields the translation and rotation parameters of the rigid registration; deformation registration with the first three-dimensional CT image of the reference time phase then yields the deformation field, completing the image registration parameters.
Performing rigid registration before the deformation registration of the reference three-dimensional CT image and the first three-dimensional CT image of the reference time phase initially aligns the two images, which improves the accuracy of the deformation registration.
Step 104, transforming the first three-dimensional CT images of the remaining time phases according to the image registration parameters to obtain second three-dimensional CT images of the remaining time phases.
Wherein the remaining phases refer to phases other than the reference phase among the N phases.
According to the image registration parameters obtained in step 103, the first three-dimensional CT images of all time phases in the preoperative four-dimensional CT image could be transformed into second three-dimensional CT images of the corresponding time phases in the intraoperative four-dimensional CT image. Because the reference three-dimensional CT image already serves as the second three-dimensional CT image of the reference time phase in the intraoperative four-dimensional CT image, only the first three-dimensional CT images of the remaining N-1 time phases need to be transformed according to the image registration parameters.
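One way to picture this step on a 1-D toy image is to compose the rigid parameters with the deformation field; this is a sketch under strong simplifications (integer translation, nearest-neighbour displacement), not the patent's 3-D implementation:

```python
def warp_phase(image, shift, field):
    """Apply the image registration parameters to one remaining phase:
    the rigid part (modeled as an integer circular translation) followed
    by the deformation field (rounded to nearest-neighbour, for brevity)."""
    n = len(image)
    rigid = [image[(i - shift) % n] for i in range(n)]       # translate
    return [rigid[min(max(i + round(d), 0), n - 1)]          # then deform
            for i, d in enumerate(field)]
```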
Step 105, generating an intraoperative four-dimensional CT image according to the reference three-dimensional CT image and the second three-dimensional CT images of the remaining time phases.
Wherein, the reference three-dimensional CT image is a second three-dimensional CT image of a reference time phase in the intraoperative four-dimensional CT image.
In one embodiment, the sequence of the corresponding second three-dimensional CT images in the intraoperative four-dimensional CT image may be determined according to the sequence of the first three-dimensional CT images of the N phases in the preoperative four-dimensional CT image.
For example, if the first three-dimensional CT images of the 10 phases in the preoperative four-dimensional CT image are ordered by phase as 0%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, and 90%, then the second three-dimensional CT images of the 10 phases in the intraoperative four-dimensional CT image follow the same order: the second three-dimensional CT image of the 0% phase, then those of the 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, and 90% phases.
According to the embodiment of the application, when the target user is in an operation and the respiratory state of the target user is the reference respiratory state, a reference three-dimensional CT image obtained by the CT device scanning the target user can be acquired, and the first three-dimensional CT image of the reference time phase corresponding to the reference respiratory state is determined from the preoperative four-dimensional CT image. Image registration of the reference three-dimensional CT image with the first three-dimensional CT image of the reference time phase yields the image registration parameters, according to which the first three-dimensional CT images of the remaining time phases are transformed into the second three-dimensional CT images of the remaining time phases. The intraoperative four-dimensional CT image is then generated from the reference three-dimensional CT image and the second three-dimensional CT images of the remaining time phases.
Fig. 4 is a schematic flow chart illustrating an implementation of a method for generating a four-dimensional CT image according to the second embodiment of the present application, where the method is applied to a terminal device. As shown in fig. 4, the generation method may include the steps of:
step 401, when the target user is in operation and the respiratory state of the target user is a reference respiratory state, acquiring a reference three-dimensional CT image obtained by the CT apparatus by scanning the target user.
The step is the same as step 101, and reference may be made to the related description of step 101, which is not described herein again.
Step 402, determining a first three-dimensional CT image of a reference phase from the preoperative four-dimensional CT image.
The step is the same as step 102, and reference may be made to the related description of step 102, which is not repeated herein.
Step 403, performing image registration on the reference three-dimensional CT image and the first three-dimensional CT image of the reference time phase to obtain image registration parameters.
The step is the same as step 103, and reference may be made to the related description of step 103, which is not described herein again.
Step 404, transforming the first three-dimensional CT images of the remaining time phases according to the image registration parameters to obtain second three-dimensional CT images of the remaining time phases.
The step is the same as step 104, and reference may be made to the related description of step 104, which is not described herein again.
Step 405, generating an intraoperative four-dimensional CT image from the reference three-dimensional CT image and the second three-dimensional CT images of the remaining time phases.
The step is the same as step 105, and reference may be made to the related description of step 105, which is not repeated herein.
Step 406, acquiring the current respiratory state of the target user.
In one embodiment, the respiratory signal of the target user may be acquired by the respiration monitoring device and sent to the terminal device, and the terminal device determines the current respiratory state of the target user by analyzing the received respiratory signal.
As an alternative embodiment, the image registration parameters include a deformation field, and when acquiring the current respiratory state of the target user, the method further includes:
acquiring an intraoperative three-dimensional CT image of a target user, wherein the intraoperative three-dimensional CT image comprises a puncture path of a puncture needle;
according to the deformation field, updating the puncture path to obtain a target puncture path;
and displaying a target puncture path on the second three-dimensional CT image of the current time phase, wherein the target puncture path is used for guiding a puncture needle to puncture.
When the current respiratory state of the target user is obtained, the intraoperative three-dimensional CT image of the target user in the current respiratory state can be acquired synchronously, and puncture path planning is performed on the intraoperative three-dimensional CT image to obtain the puncture path of the puncture needle.
Compared with the second three-dimensional CT image of the current time phase, the interior of the intraoperative three-dimensional CT image may have deformed, so the puncture path needs to be updated according to the deformation field to improve the accuracy of the puncture path on the second three-dimensional CT image of the current time phase.
According to the target puncture path displayed on the second three-dimensional CT image of the current time phase and the position of the puncture needle on that image, the doctor can adjust the position of the puncture needle in the actual scene so that the puncture needle punctures along the target puncture path. Because the displayed needle position follows the respiratory motion, the precision of positioning and navigation in thoracoabdominal surgery is improved.
As an alternative embodiment, the puncture path refers to a path from the puncture entry point to the target point, and updating the puncture path according to the deformation field includes:
and updating the position information of the target point in the puncture path according to the deformation field.
Because the intraoperative three-dimensional CT image and the second three-dimensional CT image of the current time phase are located in the same space, the puncture entry point of the puncture path usually does not move even though the interior of the intraoperative three-dimensional CT image may have deformed; the position that changes is usually the target point. Therefore, only the position information of the target point needs to be updated according to the deformation field, which speeds up the update of the puncture path. The updated position information of the target point may refer to the position information of the target point in the second three-dimensional CT image of the current time phase.
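A minimal sketch of this target-point update, under the assumption (not stated in the patent) that the deformation field is stored as a per-voxel displacement array in voxel units:

```python
import numpy as np

def update_target(target_vox, deformation_field):
    """Update the target point of the puncture path with the deformation field.

    `deformation_field` has shape (D, H, W, 3): one displacement vector per
    voxel of the intraoperative CT. The entry point is left unchanged, as the
    text notes; only the target point is moved.
    """
    z, y, x = np.round(np.asarray(target_vox)).astype(int)
    return np.asarray(target_vox, dtype=float) + deformation_field[z, y, x]
```

The target puncture path is then the unchanged entry point paired with the updated target returned by `update_target`.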
An exemplary diagram of the puncture path is shown in fig. 5, in which the entry point refers to the puncture entry point.
In step 407, a second three-dimensional CT image of the current phase is determined from the intraoperative four-dimensional CT image.
The intraoperative four-dimensional CT image comprises second three-dimensional CT images of N time phases, and each of the N time phases corresponds to one respiratory state, so the second three-dimensional CT image of the current time phase corresponding to the current respiratory state can be looked up in the intraoperative four-dimensional CT image. The current time phase may refer to the time phase corresponding to the current respiratory state.
And step 408, displaying the puncture needle on the second three-dimensional CT image of the current time phase.
The puncture needle is located at a first position of the target user.
Displaying the puncture needle on the second three-dimensional CT image of the current time phase maps the position of the puncture needle in the actual scene into the image space, so the doctor can conveniently learn the position of the puncture needle in the actual scene by viewing its position in the second three-dimensional CT image of the current time phase.
As an alternative embodiment, displaying the needle on the second three-dimensional CT image of the current phase includes:
acquiring the position information of the puncture needle in a reference frame space, wherein the reference frame is positioned at a second position of a target user, and the second position is different from the first position;
and transforming the position information of the puncture needle to the second three-dimensional image of the current time phase from the reference frame space to obtain the position information of the puncture needle in the second three-dimensional image of the current time phase, and displaying the puncture needle at the corresponding position in the second three-dimensional image of the current time phase.
A reference frame may be arranged at the second position of the target user; the reference frame moves synchronously with the target user and reflects the posture and position of the target user.
The reference frame space may refer to a coordinate system established based on the reference frame, for example, a three-dimensional coordinate system established with the reference frame as an origin.
In one embodiment, the terminal device may determine the coordinate transformation relation from the binocular positioning system space to the reference frame space according to the coordinates of the reference frame in the binocular positioning system space, and may then obtain the position information of the puncture needle in the reference frame space from the coordinates of the puncture needle in the binocular positioning system space and that coordinate transformation relation.
Both the reference frame and the puncture needle are within the shooting range of the binocular positioning system. The coordinates of the reference frame and of the puncture needle in the binocular positioning system space can be obtained by the two built-in cameras of the binocular positioning system. The binocular positioning system space may refer to the coordinate system of either of the two cameras of the binocular positioning system and may also be called the camera coordinate system.
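The camera-space-to-reference-frame-space step can be sketched as follows; it assumes the binocular system reports the reference frame's pose as a rotation `R_frame` and translation `t_frame` with `p_cam = R_frame @ p_frame + t_frame` (this pose convention is an assumption of the sketch, not stated in the patent):

```python
import numpy as np

def cam_to_frame(p_cam, R_frame, t_frame):
    """Express a point given in the camera coordinate system in the
    reference frame space by inverting the frame's rigid pose."""
    return R_frame.T @ (np.asarray(p_cam, dtype=float) - t_frame)
```

For example, the needle-tip coordinates measured by the cameras would be passed through `cam_to_frame` to obtain the needle's position in the reference frame space.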
As an alternative embodiment, the transforming the position information of the puncture needle from the reference frame space to the second three-dimensional image of the current phase comprises:
acquiring a coordinate transformation relation between a reference frame space and an image space, wherein the image space is a coordinate system of an intraoperative three-dimensional CT image;
and transforming the position information of the puncture needle from the reference frame space to a second three-dimensional image of the current time phase according to the coordinate transformation relation between the reference frame space and the image space and the deformation field.
The terminal device may spatially register the reference frame space and the image space through a spatial registration algorithm such as the least squares method or the Iterative Closest Point (ICP) method to obtain the coordinate transformation relation between the reference frame space and the image space.
According to the coordinate transformation relation between the reference frame space and the image space, the terminal device can transform the position information of the puncture needle from the reference frame space into the intraoperative three-dimensional CT image, and then, according to the deformation field, transform it from the intraoperative three-dimensional CT image onto the second three-dimensional image of the current time phase.
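A sketch of the least-squares spatial registration and the subsequent chaining of transforms is given below; it assumes the registration is driven by corresponding fiducial points and that the deformation field is a per-voxel displacement array in voxel units (both assumptions of this sketch):

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares (Kabsch) rigid fit: find R, t with dst ≈ R @ src + t.

    `src`, `dst`: (N, 3) corresponding fiducial points in the reference
    frame space and the image space respectively."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def frame_to_current_phase(p_frame, R, t, deformation_field):
    """Chain the transforms: reference frame space -> intraoperative image
    space (rigid) -> second 3-D image of the current phase (deformation)."""
    p_img = R @ np.asarray(p_frame, dtype=float) + t
    z, y, x = np.round(p_img).astype(int)
    return p_img + deformation_field[z, y, x]
```

ICP would iterate a closest-point matching step around the same `fit_rigid` core when point correspondences are not known in advance.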
It should be noted that the terminal device, the binocular positioning system, the CT device, the respiration monitoring device, the reference frame, and the puncture needle in this embodiment may form an intraoperative navigation positioning system, and through the intraoperative navigation positioning system, the puncture path may be predicted based on an intraoperative four-dimensional CT image, the puncture path may be updated, and the precision of positioning and navigation in the thoracoabdominal operation may be improved.
In this embodiment, on the basis of the first embodiment, the second three-dimensional CT image of the current time phase can be determined from the intraoperative four-dimensional CT image generated in the first embodiment by acquiring the current respiratory state of the target user. Displaying the puncture needle and the puncture path on the second three-dimensional CT image of the current time phase allows the doctor to conveniently adjust the position of the puncture needle in real time according to the puncture path, improving the precision of positioning and navigation in thoracoabdominal surgery.
Fig. 6 is a schematic structural diagram of a four-dimensional CT image generation apparatus according to the third embodiment of the present invention, and only the portions related to the third embodiment of the present invention are shown for convenience of description.
The above generation device includes:
the first acquisition module 61 is configured to acquire a reference three-dimensional CT image obtained by scanning a target user by a CT device when the target user is in an operation and the respiratory state of the target user is a reference respiratory state;
a first determining module 62, configured to determine a first three-dimensional CT image of a reference time phase from a preoperative four-dimensional CT image, where the preoperative four-dimensional CT image includes first three-dimensional CT images of N time phases, N is an integer greater than 1, and the reference time phase refers to a time phase corresponding to the reference respiratory state;
an image registration module 63, configured to perform image registration on the reference three-dimensional CT image and the first three-dimensional CT image of the reference time phase to obtain an image registration parameter;
an image transformation module 64, configured to transform the first three-dimensional CT image of the remaining time phase according to the image registration parameter to obtain a second three-dimensional CT image of the remaining time phase, where the remaining time phase is a time phase of the N time phases except for the reference time phase;
an image generating module 65, configured to generate an intraoperative four-dimensional CT image according to the reference three-dimensional CT image and the second three-dimensional CT image of the remaining time phase, where the reference three-dimensional CT image is the second three-dimensional CT image of the reference time phase in the intraoperative four-dimensional CT image.
Optionally, the generating device further includes:
the state acquisition module is used for acquiring the current breathing state of the target user;
a second determining module, configured to determine a second three-dimensional CT image of a current time phase from the intraoperative four-dimensional CT image, where the current time phase is a time phase corresponding to the current respiratory state;
and the puncture needle display module is used for displaying a puncture needle on the second three-dimensional CT image of the current time phase, and the puncture needle is positioned at the first position of the target user.
Optionally, the generating device further includes:
the second acquisition module is used for acquiring an intraoperative three-dimensional CT image of the target user, wherein the intraoperative three-dimensional CT image comprises a puncture path of the puncture needle;
the path updating module is used for updating the puncture path according to the deformation field to obtain a target puncture path;
and the path display module is used for displaying the target puncture path on the second three-dimensional CT image of the current time phase, and the target puncture path is used for guiding the puncture needle to puncture.
Optionally, the puncture path refers to a path from a puncture entry point to a target point, and the path update module is specifically configured to:
and updating the position information of the target point in the puncture path according to the deformation field.
Optionally, the path display module includes:
the position acquisition unit is used for acquiring the position information of the puncture needle in a reference frame space, the reference frame is positioned at a second position of the target user, and the second position is different from the first position;
and the position conversion unit is used for converting the position information of the puncture needle from the reference frame space to the second three-dimensional image of the current time phase to obtain the position information of the puncture needle in the second three-dimensional image of the current time phase, and displaying the puncture needle at the corresponding position in the second three-dimensional image of the current time phase.
Optionally, the position conversion unit is specifically configured to:
acquiring a coordinate transformation relation between the reference frame space and an image space, wherein the image space is a coordinate system of the intraoperative three-dimensional CT image;
and transforming the position information of the puncture needle from the reference frame space to a second three-dimensional image of the current time phase according to the coordinate transformation relation between the reference frame space and the image space and the deformation field.
Optionally, the image registration module 63 is specifically configured to:
and carrying out rigid registration and deformation registration on the reference three-dimensional CT image and the first three-dimensional CT image of the reference time phase in sequence to obtain a translation parameter and a rotation parameter corresponding to the rigid registration and a deformation field corresponding to the deformation registration.
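As a toy stand-in for the deformation-registration stage named here (the rigid stage is assumed already applied, and the patent's actual 3-D algorithm is not specified), a one-dimensional demons-style pass that produces a displacement field could look like:

```python
import numpy as np

def demons_1d(fixed, moving, iters=300, alpha=1.0):
    """Toy 1-D demons-style deformable registration.

    Iteratively builds a displacement field u so that moving(x + u(x))
    approaches fixed(x), smoothing u each step as a crude regulariser.
    """
    x = np.arange(len(fixed), dtype=float)
    u = np.zeros_like(x)
    for _ in range(iters):
        warped = np.interp(x + u, x, moving)          # moving resampled at x + u
        diff = warped - fixed
        grad = np.gradient(warped)
        u -= alpha * diff * grad / (grad ** 2 + diff ** 2 + 1e-9)
        u = np.convolve(u, np.ones(5) / 5.0, mode="same")   # smooth the field
    return u
```

In the 3-D case the resulting field plays the role of the deformation field used to warp the first three-dimensional CT images of the remaining time phases.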
The generating device provided in the embodiment of the present application can be applied to the first method embodiment and the second method embodiment, and for details, reference is made to the description of the first method embodiment and the second method embodiment, and details are not repeated here.
Fig. 7 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application. As shown in fig. 7, the terminal device 7 of this embodiment includes: one or more processors 70 (only one of which is shown), a memory 71 and a computer program 72 stored in said memory 71 and executable on said processor 70. The steps in the various generation method embodiments described above are implemented when the computer program 72 is executed by the processor 70.
The terminal device 7 may be a desktop computer, a notebook, a palm computer, a cloud server, or another computing device. The terminal device may include, but is not limited to, a processor 70 and a memory 71. Those skilled in the art will appreciate that fig. 7 is merely an example of the terminal device 7 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, some components may be combined, or different components may be used. For example, the terminal device may further include input/output devices, network access devices, buses, etc.
The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 71 may be an internal storage unit of the terminal device 7, such as a hard disk or a memory of the terminal device 7. The memory 71 may also be an external storage device of the terminal device 7, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 7. Further, the memory 71 may also include both an internal storage unit and an external storage device of the terminal device 7. The memory 71 is used for storing the computer program and other programs and data required by the terminal device. The memory 71 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the above embodiments can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The embodiments of the present application also provide a computer program product; when the computer program product runs on a terminal device, the terminal device can implement the steps in the above method embodiments.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for generating a four-dimensional CT image, the method comprising:
when a target user is in operation and the respiratory state of the target user is a reference respiratory state, acquiring a reference three-dimensional CT image obtained by scanning the target user by CT equipment;
determining a first three-dimensional CT image of a reference time phase from a preoperative four-dimensional CT image, wherein the preoperative four-dimensional CT image comprises first three-dimensional CT images of N time phases, N is an integer greater than 1, and the reference time phase refers to the time phase corresponding to the reference respiratory state;
carrying out image registration on the reference three-dimensional CT image and the first three-dimensional CT image of the reference time phase to obtain image registration parameters;
transforming the first three-dimensional CT image of the remaining time phase according to the image registration parameters to obtain a second three-dimensional CT image of the remaining time phase, wherein the remaining time phase refers to the time phase except the reference time phase in the N time phases;
generating an intraoperative four-dimensional CT image from the reference three-dimensional CT image and a second three-dimensional CT image of the remaining time phase, the reference three-dimensional CT image being the second three-dimensional CT image of the reference time phase in the intraoperative four-dimensional CT image.
2. The method of generating as set forth in claim 1, further including, after generating the intraoperative four-dimensional CT image:
acquiring the current respiratory state of the target user;
determining a second three-dimensional CT image of a current time phase from the intraoperative four-dimensional CT image, wherein the current time phase refers to a time phase corresponding to the current respiratory state;
and displaying a puncture needle on the second three-dimensional CT image of the current time phase, wherein the puncture needle is positioned at the first position of the target user.
3. The generation method of claim 2, wherein the image registration parameters include a deformation field, and when obtaining the current respiratory state of the target user, further comprising:
acquiring an intraoperative three-dimensional CT image of the target user, wherein the intraoperative three-dimensional CT image comprises a puncture path of the puncture needle;
updating the puncture path according to the deformation field to obtain a target puncture path;
and displaying the target puncture path on the second three-dimensional CT image of the current time phase, wherein the target puncture path is used for guiding the puncture needle to puncture.
4. The method of generating as claimed in claim 3, wherein said puncture path is a path from a puncture entry point to a target point, and said updating said puncture path according to said deformation field comprises:
and updating the position information of the target point in the puncture path according to the deformation field.
5. The generation method of claim 3, wherein said displaying a puncture needle on the second three-dimensional CT image of the current phase comprises:
acquiring position information of the puncture needle in a reference frame space, wherein the reference frame is located at a second position of the target user, and the second position is different from the first position;
and transforming the position information of the puncture needle to the second three-dimensional image of the current time phase from the reference frame space to obtain the position information of the puncture needle in the second three-dimensional image of the current time phase, and displaying the puncture needle at the corresponding position in the second three-dimensional image of the current time phase.
6. The method of generating as set forth in claim 5, wherein said transforming the position information of the needle from the frame of reference space onto the second three-dimensional image of the current phase comprises:
acquiring a coordinate transformation relation between the reference frame space and an image space, wherein the image space is a coordinate system of the intraoperative three-dimensional CT image;
and transforming the position information of the puncture needle from the reference frame space to a second three-dimensional image of the current time phase according to the coordinate transformation relation between the reference frame space and the image space and the deformation field.
7. The generation method according to any one of claims 1 to 6, wherein the image registering the reference three-dimensional CT image and the first three-dimensional CT image of the reference phase to obtain image registration parameters includes:
and carrying out rigid registration and deformation registration on the reference three-dimensional CT image and the first three-dimensional CT image of the reference time phase in sequence to obtain a translation parameter and a rotation parameter corresponding to the rigid registration and a deformation field corresponding to the deformation registration.
8. A four-dimensional CT image generation apparatus, comprising:
the first acquisition module is used for acquiring a reference three-dimensional CT image obtained by scanning a target user by CT equipment when the target user is in operation and the respiratory state of the target user is a reference respiratory state;
the preoperative four-dimensional CT image comprises first three-dimensional CT images of N time phases, wherein N is an integer greater than 1, and the reference time phase refers to a time phase corresponding to the reference respiratory state;
the image registration module is used for carrying out image registration on the reference three-dimensional CT image and the first three-dimensional CT image of the reference time phase to obtain image registration parameters;
an image transformation module, configured to transform, according to the image registration parameter, a first three-dimensional CT image of a remaining time phase to obtain a second three-dimensional CT image of the remaining time phase, where the remaining time phase is a time phase of the N time phases except for the reference time phase;
an image generation module, configured to generate an intraoperative four-dimensional CT image according to the reference three-dimensional CT image and the second three-dimensional CT image of the remaining time phase, where the reference three-dimensional CT image is the second three-dimensional CT image of the reference time phase in the intraoperative four-dimensional CT image.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the generating method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the generation method according to any one of claims 1 to 7.
CN202110938287.2A 2021-08-16 2021-08-16 Four-dimensional CT image generation method, device, terminal equipment and storage medium Active CN113781593B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110938287.2A CN113781593B (en) 2021-08-16 2021-08-16 Four-dimensional CT image generation method, device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113781593A true CN113781593A (en) 2021-12-10
CN113781593B CN113781593B (en) 2023-10-27

Family

ID=78837944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110938287.2A Active CN113781593B (en) 2021-08-16 2021-08-16 Four-dimensional CT image generation method, device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113781593B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130035588A1 (en) * 2011-08-03 2013-02-07 Siemens Corporation Magnetic resonance imaging for therapy planning
CN105816196A (en) * 2016-05-13 2016-08-03 上海联影医疗科技有限公司 Marking tape for 4DCT imaging and 4DCT imaging method
US20180247412A1 (en) * 2015-03-12 2018-08-30 Mirada Medical Limited Method and apparatus for assessing image registration
CN112515763A (en) * 2020-11-27 2021-03-19 中国科学院深圳先进技术研究院 Target positioning display method, system and device and electronic equipment

Also Published As

Publication number Publication date
CN113781593B (en) 2023-10-27

Similar Documents

Publication Publication Date Title
US9384528B2 (en) Image annotation using a haptic plane
US8836703B2 (en) Systems and methods for accurate measurement with a mobile device
Maier-Hein et al. Towards mobile augmented reality for on-patient visualization of medical images
CN102667857B (en) Bone in X-ray photographs suppresses
US20110262015A1 (en) Image processing apparatus, image processing method, and storage medium
CN109171793B (en) Angle detection and correction method, device, equipment and medium
CN109715070B (en) Image processing device, image processing method, and program
US10083278B2 (en) Method and system for displaying a timing signal for surgical instrument insertion in surgical procedures
US20140218397A1 (en) Method and apparatus for providing virtual device planning
JP2014518125A (en) Follow-up image acquisition plan and / or post-processing
CN108038904B (en) Three-dimensional reconstruction system for medical images
US10281804B2 (en) Image processing apparatus, image processing method, and program
US10810717B2 (en) Image processing apparatus, image processing method, and image processing system
US20190005611A1 (en) Multi-Point Annotation Using a Haptic Plane
CN105190633B (en) Image viewing
CN112869761B (en) Medical image diagnosis support system, medical image processing apparatus, and medical image processing method
CN113781593B (en) Four-dimensional CT image generation method, device, terminal equipment and storage medium
CN113262048B (en) Spatial registration method and device, terminal equipment and intraoperative navigation system
US10755379B2 (en) Multi-point annotation using a haptic plane
CN116439691A (en) Joint activity detection method based on artificial intelligence and related equipment
CN108510432B (en) Method, device and system for displaying image and storage medium
Dussel et al. Automated 3D thorax model generation using handheld video-footage
CN114332171A (en) Coordinate system calibration method and device, electronic equipment and storage medium
US20220262018A1 (en) Systems and methods for medical imagery enhancement by use of image guidance system
Zhou et al. A deep learning-based automatic tool for measuring the lengths of linear scars: forensic applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant