CN115919462A - Image data processing system, method and operation navigation system - Google Patents


Info

Publication number
CN115919462A
Authority
CN
China
Prior art keywords
state, dimensional model, tissue, preoperative, image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211440823.7A
Other languages
Chinese (zh)
Inventor
陈保全
庄康乐
姚宏昌
尹新立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Saina Digital Medical Technology Co ltd
Original Assignee
Zhuhai Saina Digital Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Saina Digital Medical Technology Co., Ltd.
Priority to CN202211440823.7A
Publication of CN115919462A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the invention provide an image data processing system, an image data processing method, and a surgical navigation system. The image data processing system includes: a surgical plan determining module for determining a surgical plan, where different surgical plans correspond to different intraoperative tissue states; a data acquisition module for acquiring image data of at least one preoperative tissue state, the preoperative tissue state being different from the intraoperative tissue state; a model reconstruction module for establishing a first three-dimensional model corresponding to the preoperative tissue state based on that image data; and a model conversion module for converting the first three-dimensional model corresponding to the preoperative tissue state into a second three-dimensional model corresponding to the intraoperative tissue state. In the embodiments of the invention, the second three-dimensional model can be used for intraoperative navigation, so the surgical navigation system can be applied to operations on organs that are strongly affected by respiration and heartbeat.

Description

Image data processing system, method and operation navigation system
[ technical field ]
The embodiment of the invention relates to the technical field of medical instruments, in particular to an image data processing system and method and a surgical navigation system.
[ background of the invention ]
Lung cancer currently has high morbidity and mortality, and minimally invasive ablation is one of the main treatments for it. The current percutaneous puncture ablation procedure requires a puncture biopsy followed by a second puncture for ablation.
During minimally invasive ablation, applying a surgical navigation system based on Mixed Reality (MR) greatly improves the success rate of the operation. However, conventional surgical navigation systems are generally applied to operations, such as neurosurgery and spinal surgery, in which the organs do not deform much. When an operation is performed on organs that are strongly affected by respiration and heartbeat (for example, the lungs or the heart), the operation object continuously deforms or moves, so conventional surgical navigation systems cannot be used.
[ summary of the invention ]
In view of the above, embodiments of the present invention provide an image data processing system, an image data processing method, and a surgical navigation system, so that the surgical navigation system can be applied to operations performed on organs that are strongly affected by respiration and heartbeat.
In one aspect, an embodiment of the present invention provides an image data processing system, including:
a surgical plan determining module for determining a surgical plan, where different surgical plans correspond to different intraoperative tissue states;
a data acquisition module for acquiring image data of at least one preoperative tissue state, the preoperative tissue state being different from the intraoperative tissue state;
a model reconstruction module for establishing a first three-dimensional model corresponding to the preoperative tissue state based on the image data of the preoperative tissue state; and
a model conversion module for converting the first three-dimensional model corresponding to the preoperative tissue state into a second three-dimensional model corresponding to the intraoperative tissue state.
Optionally, the model conversion module is configured to convert the first three-dimensional model corresponding to the preoperative tissue state according to a model conversion manner corresponding to the surgical plan to generate a second three-dimensional model corresponding to the intraoperative tissue state.
Optionally, the model conversion mode corresponding to the surgical plan is an interpolation mode, and the model conversion module is configured to interpolate between the first three-dimensional models corresponding to different preoperative tissue states to obtain a plurality of second three-dimensional models corresponding to the intraoperative tissue states.
Optionally, the different preoperative tissue states include an end-expiratory breath-holding state and an end-inspiratory breath-holding state, and the model conversion module is configured to perform interpolation between a first three-dimensional model corresponding to the end-inspiratory breath-holding state and a first three-dimensional model corresponding to the end-expiratory breath-holding state, so as to obtain second three-dimensional models at different times in a process from the end-inspiratory breath-holding state to the end-expiratory breath-holding state.
Optionally, the model conversion mode corresponding to the surgical plan is a scaling mode, and the model conversion module is configured to scale the first three-dimensional model corresponding to the preoperative tissue state based on a preset scaling model to obtain a second three-dimensional model corresponding to the intraoperative tissue state.
Optionally, the system further comprises:
and the operation planning module is used for performing operation planning based on the second three-dimensional model or performing operation planning based on the first three-dimensional model and the second three-dimensional model.
Optionally, the surgical planning comprises determining cutting information or determining a puncture path.
Optionally, the system further comprises:
and the operation navigation module is used for performing operation navigation based on the second three-dimensional model or performing operation navigation based on the first three-dimensional model and the second three-dimensional model.
Optionally, the surgical navigation module comprises:
the entity data acquisition unit is used for acquiring entity data, and the entity data comprises intraoperative tissue data in an intraoperative tissue state;
the registration unit is used for registering the second three-dimensional model and the intraoperative tissue data to obtain a registration result;
and the display unit is used for displaying the registration result.
Optionally, the entity data acquisition unit includes a depth camera, and the depth camera is configured to acquire the entity data, which is point cloud model data.
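The registration unit above must align the second three-dimensional model with the point-cloud entity data from the depth camera. The disclosure does not specify an algorithm; as a hedged illustration only, a least-squares rigid alignment of already-corresponded point sets can be computed with the Kabsch (SVD) method. All names below are hypothetical, not part of the disclosed embodiment:

```python
import numpy as np

def rigid_register(source: np.ndarray, target: np.ndarray):
    """Least-squares rigid alignment (Kabsch/SVD) of two corresponded
    N-by-3 point sets; returns rotation R and translation t such that
    R @ source[i] + t approximates target[i]."""
    sc, tc = source.mean(axis=0), target.mean(axis=0)
    H = (source - sc).T @ (target - tc)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tc - R @ sc
    return R, t
```

In a real system, correspondences between the model surface and the depth-camera point cloud are unknown, so this step would typically sit inside an iterative scheme such as ICP.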
Optionally, the system further comprises: a surgical navigation module;
the operation navigation module is used for acquiring entity data for representing the state of the tissue in the operation process;
the model conversion module is used for converting the first three-dimensional model based on entity data to obtain the second three-dimensional model.
Optionally, the data acquisition module is configured to determine the preoperative tissue state based on the surgical plan and acquire image data of the preoperative tissue state based on the determined preoperative tissue state.
Optionally, the surgical plan includes a lung nodule biopsy or a lung nodule ablation, the at least one preoperative tissue state includes an end-expiratory breath-holding state and an end-inspiratory breath-holding state, and the intraoperative tissue state is a dynamic state from the end-expiratory breath-holding state to the end-inspiratory breath-holding state.
Optionally, the surgical plan includes thoracoscopic surgery or thoracotomy, the at least one preoperative tissue state includes an end-expiratory breath-holding state and/or an end-inspiratory breath-holding state, and the intraoperative tissue state is a collapsed state.
In another aspect, an embodiment of the present invention provides an image data processing method, including:
a surgical plan determining module determines a surgical plan, where different surgical plans correspond to different intraoperative tissue states;
a data acquisition module acquires image data of at least one preoperative tissue state, the preoperative tissue state being different from the intraoperative tissue state;
the model reconstruction module establishes a first three-dimensional model corresponding to the preoperative tissue state based on the image data of the preoperative tissue state;
and the model conversion module converts the first three-dimensional model corresponding to the preoperative tissue state into a second three-dimensional model corresponding to the intraoperative tissue state.
In another aspect, an embodiment of the present invention provides a surgical navigation system including the image data processing system described above.
In the technical scheme provided by the embodiment of the invention, the model reconstruction module establishes the first three-dimensional model corresponding to the preoperative tissue state based on the image data of the preoperative tissue state, the model conversion module converts the first three-dimensional model corresponding to the preoperative tissue state into the second three-dimensional model corresponding to the intraoperative tissue state, and the second three-dimensional model can be used for intraoperative navigation, so that the surgical navigation system can be applied to surgical operations on organs greatly influenced by respiration and heartbeat.
[ description of the drawings ]
Fig. 1 is a schematic structural diagram of an image data processing system according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a surgical plan determination module according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of another surgical plan determination module provided in accordance with an embodiment of the present invention;
fig. 4 is a schematic application diagram of a surgical navigation module according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a surgical navigation module according to an embodiment of the present invention;
fig. 6 is a flowchart of an image data processing method according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of a surgical navigation system according to an embodiment of the present invention.
[ detailed description ]
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" used herein merely describes an association between related objects and indicates that three relationships may exist. For example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
Fig. 1 is a schematic structural diagram of an image data processing system according to an embodiment of the present invention, and as shown in fig. 1, the image data processing system includes: a surgical plan determination module 10, a data acquisition module 20, a model reconstruction module 30, and a model transformation module 40. The surgical plan determination module 10 is configured to determine a surgical plan, with different surgical plans corresponding to different intraoperative tissue states. The data acquisition module 20 is configured to acquire image data of at least one preoperative tissue state, where the preoperative tissue state is different from the intraoperative tissue state. The model reconstruction module 30 is configured to establish a first three-dimensional model corresponding to a preoperative tissue state based on the image data of the preoperative tissue state. The model conversion module 40 is configured to convert the first three-dimensional model corresponding to the preoperative tissue state into a second three-dimensional model corresponding to the intraoperative tissue state.
In the embodiment of the present invention, the surgical plan determining module 10 may be configured to determine a surgical plan according to actual-situation information, where different surgical plans correspond to different intraoperative tissue states. The actual-situation information may include image data of the patient's preoperative tissue state and/or related information about the patient, the related information including at least one of sex, age, weight, and symptoms. Alternatively, the at least one preoperative tissue state may include an end-expiratory breath-holding state and an end-inspiratory breath-holding state, and the intraoperative tissue state may include a dynamic state from the end-expiratory breath-holding state to the end-inspiratory breath-holding state; for example, the surgical plan includes a lung nodule biopsy or a lung nodule ablation procedure. As another alternative, the at least one preoperative tissue state includes an end-expiratory breath-holding state and/or an end-inspiratory breath-holding state, and the intraoperative tissue state is a collapsed state; for example, the surgical plan includes thoracoscopic surgery or thoracotomy.
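The correspondence between surgical plans and tissue states described above could be represented as a simple lookup table. This is an illustrative sketch only; the disclosure does not specify a data structure, and all names are hypothetical:

```python
# Hypothetical mapping from surgical plan to the preoperative states to
# image and the target intraoperative state, following the embodiment.
PLAN_STATES = {
    "lung_nodule_biopsy": {
        "preoperative": ["end_expiratory_hold", "end_inspiratory_hold"],
        "intraoperative": "dynamic_breathing",
    },
    "lung_nodule_ablation": {
        "preoperative": ["end_expiratory_hold", "end_inspiratory_hold"],
        "intraoperative": "dynamic_breathing",
    },
    "thoracoscopic_surgery": {
        "preoperative": ["end_inspiratory_hold"],
        "intraoperative": "collapsed",
    },
    "thoracotomy": {
        "preoperative": ["end_inspiratory_hold"],
        "intraoperative": "collapsed",
    },
}

def states_for_plan(plan: str) -> dict:
    """Return the preoperative states to acquire and the target
    intraoperative state for the given surgical plan."""
    return PLAN_STATES[plan]
```

A data acquisition module could consult such a table to decide which breath-holding states to image, as described later for the data acquisition module 20.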
As an alternative, the surgical plan determining module 10 is configured to determine the surgical plan according to the selection operation of the operator on the graphical user interface. Specifically, fig. 2 is a schematic structural diagram of a surgical plan determining module according to an embodiment of the present invention, and as shown in fig. 2, the surgical plan determining module 10 includes a memory 101, an interacting unit 102, and a first determining unit 103. The memory 101 is used to store surgical plans for different departments and/or different tissues, wherein the surgical plans for different tissues may include surgical plans for different organs. The interactive unit 102 is used for displaying a graphical user interface, the graphical user interface is used for displaying operation schemes of different departments and/or different tissues and receiving selection operations of operators on the graphical user interface. The operator views the surgical plans of different departments and/or different tissues on the graphical user interface, determines the surgical plan meeting the actual situation information from the surgical plans of the different departments and/or different tissues on the graphical user interface, and selects the surgical plan meeting the actual situation information on the graphical user interface, and the interaction unit 102 is used for receiving the selection operation of the operator for selecting the surgical plan meeting the actual situation information on the graphical user interface. The first determination unit 103 is configured to determine, according to the selection operation, a surgical plan meeting an actual situation from surgical plans of different departments and/or different tissues on the graphical user interface.
Alternatively, the surgical plan determining module 10 is configured to perform diagnosis based on the acquired actual condition information of the patient to obtain a diagnosis result, and determine a surgical plan matching the diagnosis result from the stored surgical plans. Specifically, fig. 3 is a schematic structural diagram of another surgical plan determining module according to an embodiment of the present invention, and as shown in fig. 3, the surgical plan determining module 10 includes a memory 101, a diagnosis unit 104, and a second determining unit 105. The memory 101 is used to store surgical protocols for different departments and/or different tissues. The diagnosis unit 104 is configured to perform a diagnosis according to the actual situation information to obtain a diagnosis result, for example, for a case where the diagnosis unit 104 performs a diagnosis according to the image data in the actual situation information to obtain a diagnosis result, the diagnosis unit 104 may be configured to determine one or more of a position, a shape and a property of the lesion according to the image data, and obtain the diagnosis result based on the one or more of the position, the shape and the property of the lesion. The second determination unit 105 is configured to determine a surgical plan matching the diagnosis result from the stored surgical plans of different departments and/or different tissues.
The following describes the procedure of determining the surgical plan in detail, taking a lung nodule as the example lesion. A pulmonary nodule is a circular or quasi-circular high-density image found in the lung field during a chest imaging examination. For example, Computed Tomography (CT) may be performed on the lungs to obtain image data of a preoperative tissue state; since lung CT is usually performed at end-inspiration, the obtained image data may be image data of an end-inspiratory state, although in practice it may also be image data of an end-expiratory state or of another respiratory state, which is not particularly limited in this embodiment of the present invention. However, since the lesion property of a nodule cannot be determined by imaging and blood examination alone, a pathological specimen obtained through a lung nodule needle biopsy allows the lesion property to be clearly diagnosed. The diagnosis unit 104 can therefore perform a diagnosis according to the image data and obtain a lung nodule as the diagnosis result, after which the second determination unit 105 can determine, from the stored surgical plans, a lung nodule needle biopsy as the matching surgical plan.
Further, after the diagnosis result is a lung nodule, the diagnosis unit 104 may determine a second diagnosis result according to the determined property of the lung nodule, and the second determination unit 105 may again determine a surgical plan matching that second diagnosis result from the stored surgical plans. Typically, surgical plans for lung nodules include lung nodule ablation, thoracoscopic surgery, and thoracotomy. For example, lung nodule ablation is suitable for patients whose lung function is too poor to tolerate surgery, who have already undergone a lung nodule resection such that another resection would seriously affect their life, or who have multiple lung nodules that cannot be completely resected at one time; thoracoscopic surgery is suitable for early lung cancer with nodules no larger than 20 mm; and thoracotomy is suitable for deep-seated lung nodules unsuited to minimally invasive thoracoscopic resection. The second determination unit 105 can thus determine the surgical plan matching the second diagnosis result from among lung nodule ablation, thoracoscopic surgery, and thoracotomy.
Lung nodule needle biopsy and lung nodule ablation can be performed under local anesthesia, in which case the patient breathes spontaneously during the operation, i.e., the intraoperative lung state is a dynamic process comprising many different states across the respiratory cycle; the one or more preoperative tissue states reflected in preoperatively acquired image data therefore cannot fully represent the dynamic intraoperative tissue state. Thoracoscopic surgery and thoracotomy, by contrast, are performed under general anesthesia, in which case the patient cannot breathe spontaneously during the operation and respiration must be maintained by a ventilator, i.e., the lung is intraoperatively static. However, because thoracoscopic surgery and thoracotomy require opening the thorax to expose the tissue inside it, the drop in intrathoracic pressure causes the tissue in the thorax to collapse, and the lung tissue collapses correspondingly to provide operating space; that is, the intraoperative tissue state of the lung is a collapsed state, which the tissue states collected preoperatively during breathing cannot reflect. Therefore, the surgical plan determined by the surgical plan determining module 10 can reflect the intraoperative tissue state.
In an embodiment of the present invention, the data acquisition module 20 is configured to acquire image data of at least one preoperative tissue state, where the image data may be image data used for determining the surgical plan and/or image data acquired after the surgical plan is determined. If the surgical plan determining module 10 needs to determine the surgical plan according to image data of the preoperative tissue state, the data acquisition module 20 acquires that image data, uses it as the image data for determining the surgical plan, and sends it to the surgical plan determining module 10. If the intraoperative tissue state is dynamic, the data acquisition module 20 acquires both the image data used for determining the surgical plan and the image data re-acquired after the surgical plan is determined, and sends both to the model reconstruction module 30. If the intraoperative tissue state is static, the data acquisition module 20 acquires the image data used for determining the surgical plan and sends it to the model reconstruction module 30. In some embodiments, the data acquisition module 20 may determine, based on the surgical plan, the preoperative tissue state whose image data should be acquired, and acquire the image data accordingly.
For example, if the image data used for determining the surgical plan is image data of the end-inspiratory breath-holding state and the surgical plan is a lung nodule biopsy, then, because the intraoperative tissue state is dynamic, image data of the end-expiratory breath-holding state must additionally be obtained so that the patient's dynamic breathing process from the end-inspiratory to the end-expiratory breath-holding state can later be calculated when converting the preoperative three-dimensional model into the intraoperative one; in this case the image data re-acquired after the surgical plan is determined is the image data of the end-expiratory breath-holding state. As another example, if the image data used for determining the surgical plan is image data of the end-inspiratory state, the surgical plan is an open-chest resection, and the intraoperative tissue state is static, that end-inspiratory image data may be used directly as the image data of the preoperative tissue state for the calculation that converts the preoperative three-dimensional model into the intraoperative one. Likewise, if the image data used for determining the surgical plan is image data of the end-expiratory breath-holding state and the surgical plan is an open-chest resection, that end-expiratory image data may be used directly as the image data of the preoperative tissue state for the same calculation.
In an embodiment of the present invention, the model reconstruction module 30 is configured to establish a first three-dimensional model based on image data of a preoperative tissue state. For example, when the surgical plan is a lung nodule biopsy, the image data of the preoperative tissue state may include image data of the end-expiratory breath-holding state and of the end-inspiratory breath-holding state, and the first three-dimensional model includes a three-dimensional model of each of those states. When the surgical plan is thoracotomy, the image data of the preoperative tissue state may include image data of the end-inspiratory breath-holding state, and the first three-dimensional model includes the corresponding three-dimensional model; since the lung is expanded at end-inspiration, a three-dimensional model of the end-inspiratory breath-holding state facilitates diagnosis. Alternatively, when the surgical plan is an open-chest resection, the image data of the preoperative tissue state may include image data of the end-expiratory breath-holding state, and the first three-dimensional model includes the corresponding three-dimensional model.
Specifically, the model reconstruction module 30 may be configured to perform tissue segmentation on the image data of the preoperative tissue state to obtain image data of each tissue, perform three-dimensional reconstruction on the image data of each tissue to obtain a three-dimensional model of each tissue, and fuse the per-tissue models to generate the first three-dimensional model. Taking lung nodule surgery as an example, the model reconstruction module 30 may obtain image data of a patient through lung CT; perform tissue segmentation on the image data using medical image post-processing software (such as 3D Slicer or Mimics) to obtain the image data of each tissue, where the segmentation method includes one or more of automatic segmentation, semi-automatic segmentation, region growing, fast graph cut, threshold segmentation, Frangi filtering, and manual segmentation; perform three-dimensional reconstruction based on the image data of each tissue, each tissue being, for example, one or more of skin, blood vessels, heart, lung, ribs, sternum, and lung nodules; and fuse the per-tissue three-dimensional models according to the position information corresponding to the image data to obtain the first three-dimensional model. It should be understood that the tissue types in the first three-dimensional model may include only those relevant to surgical planning or surgical navigation; specifically, the model reconstruction module 30 may determine the tissue types in the first three-dimensional model according to the surgical plan, or the operator may select them.
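As a minimal, hypothetical sketch of the threshold-segmentation and fusion steps named above (real pipelines would use tools such as 3D Slicer or Mimics; all names and threshold values here are illustrative):

```python
import numpy as np

def threshold_segment(volume: np.ndarray, low: float, high: float) -> np.ndarray:
    """Binary mask of voxels whose intensity (e.g. in Hounsfield units
    for CT) falls inside [low, high] — the threshold-segmentation step."""
    return (volume >= low) & (volume <= high)

def fuse_labels(shape, masks) -> np.ndarray:
    """Fuse an ordered list of (name, mask) pairs into one labelled
    volume; later tissues overwrite earlier ones where masks overlap."""
    fused = np.zeros(shape, dtype=np.uint8)
    for label, (_name, mask) in enumerate(masks, start=1):
        fused[mask] = label
    return fused

# Illustrative use on a tiny 2x2 "volume" of CT-like values:
vol = np.array([[-900.0, -100.0],
                [  40.0, 1000.0]])
lung_mask = threshold_segment(vol, -1000.0, -500.0)  # air-filled lung
bone_mask = threshold_segment(vol, 300.0, 2000.0)    # bone
labelled = fuse_labels(vol.shape, [("lung", lung_mask), ("bone", bone_mask)])
```

From such per-tissue masks, a surface model per tissue could then be extracted (e.g. by marching cubes) and the surfaces fused by their shared position information.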
In an embodiment of the present invention, the model transformation module 40 is configured to transform the first three-dimensional model corresponding to the preoperative tissue state according to a model transformation manner corresponding to the surgical plan to generate a second three-dimensional model corresponding to the intraoperative tissue state. Wherein, different operation schemes correspond to different model conversion modes. The second three-dimensional model corresponding to the intraoperative tissue state may be used for surgical planning and surgical navigation.
As an alternative, when the model conversion manner corresponding to the surgical plan is an interpolation manner, the model conversion module 40 may be configured to interpolate between the first three-dimensional models corresponding to different preoperative tissue states to obtain a plurality of second three-dimensional models corresponding to intraoperative tissue states. For example, when the surgical plan is a lung nodule aspiration biopsy, the intraoperative tissue state is a dynamic process from the end-expiratory breath-holding state to the end-inspiratory breath-holding state. Accordingly, the model conversion module 40 converts, based on the first three-dimensional models corresponding to the end-expiratory and end-inspiratory breath-holding states, to obtain a plurality of second three-dimensional models corresponding to states between them; specifically, it may interpolate between the first three-dimensional model corresponding to the end-inspiratory breath-holding state and the one corresponding to the end-expiratory breath-holding state to obtain second three-dimensional models at different times during the transition from one state to the other.
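A minimal sketch of this interpolation manner, assuming the two first three-dimensional models are meshes with vertex-to-vertex correspondence (the disclosure does not specify the interpolation scheme; linear blending and all names here are illustrative):

```python
import numpy as np

def interpolate(verts_insp: np.ndarray, verts_exp: np.ndarray, t: float) -> np.ndarray:
    """Linearly blend two vertex-corresponded meshes: t = 0 yields the
    end-inspiratory model, t = 1 the end-expiratory model."""
    return (1.0 - t) * verts_insp + t * verts_exp

def breathing_models(verts_insp: np.ndarray, verts_exp: np.ndarray, n_frames: int):
    """Second three-dimensional models at n_frames evenly spaced
    instants of the inspiration-to-expiration process."""
    return [interpolate(verts_insp, verts_exp, t)
            for t in np.linspace(0.0, 1.0, n_frames)]
```

In practice the vertex correspondence would come from deformable registration between the two breath-hold CT volumes, and a nonlinear time parameterization could model the actual breathing curve.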
As another alternative, if the model conversion manner corresponding to the surgical plan is a scaling manner, the model conversion module 40 may be configured to scale the first three-dimensional model corresponding to the preoperative tissue state based on a preset scaling model to obtain a second three-dimensional model corresponding to the intraoperative tissue state. For example, when the surgical plan is an open-chest resection, the preoperative tissue state is an end-inspiratory breath-holding state and the intraoperative tissue state is a collapsed state; accordingly, the model conversion module 40 is configured to scale the first three-dimensional model corresponding to the end-inspiratory breath-holding state based on the preset scaling model to obtain the second three-dimensional model corresponding to the collapsed state. The preset scaling model may be trained with a training set to improve conversion accuracy; the training set includes first three-dimensional models corresponding to the end-inspiratory breath-holding state and the second three-dimensional models corresponding to the collapsed state observed in the actual operations corresponding to those first models.
For another example, when the surgical plan is an open-chest resection, the preoperative tissue state is an end expiratory breath-holding state, and the intraoperative tissue state is a collapsed state, and accordingly, the model transformation module 40 is configured to perform scaling processing on the first three-dimensional model corresponding to the end expiratory breath-holding state based on the preset scaling model to obtain the second three-dimensional model corresponding to the collapsed state, where the preset scaling model may be trained by using a training set to improve the conversion accuracy, and the training set includes the first three-dimensional model corresponding to the end expiratory breath-holding state and the second three-dimensional model corresponding to the collapsed state in the actual operation corresponding to the first three-dimensional model.
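A toy stand-in for training and applying such a preset scaling model, reducing it to independent per-axis scale factors fitted by least squares over paired breath-hold/collapsed meshes with vertex correspondence (a deliberate simplification of whatever learned model the specification intends; all names are illustrative):

```python
import numpy as np

def fit_axis_scaling(preop_models, collapsed_models):
    """Estimate per-axis scale factors mapping first three-dimensional models
    (breath-hold state) to their collapsed-state counterparts.
    Each model is an (N, 3) vertex array with vertex correspondence."""
    X = np.vstack(preop_models).astype(float)
    Y = np.vstack(collapsed_models).astype(float)
    # Least-squares scale per axis: s_k = sum(x_k * y_k) / sum(x_k ** 2)
    return (X * Y).sum(axis=0) / (X * X).sum(axis=0)

def apply_scaling(verts, scales):
    """Produce a second three-dimensional model by scaling the first one."""
    return np.asarray(verts, dtype=float) * scales

# One training pair: a vertex at (2, 2, 2) collapses to (1, 1.5, 1).
scales = fit_axis_scaling([[[2.0, 2.0, 2.0]]], [[[1.0, 1.5, 1.0]]])
```

A deployed system would presumably use a richer regressor (nonrigid deformation field, learned network), but the training-set structure — preoperative model paired with the collapsed model from the actual operation — is the same.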
In the embodiment of the present invention, the properties of the first three-dimensional model reconstructed by the model reconstruction module 30 and/or the second three-dimensional model obtained by the model conversion module 40 are adjustable. Specifically, the attributes of each tissue in the first three-dimensional model and/or the second three-dimensional model can be adjusted independently; for example, the attributes may include color and/or transparency, and the color and/or transparency of each tissue can be adjusted freely during surgical planning and surgical navigation. This helps the operator identify each tissue and organ quickly and accurately, and view the internal structure of each tissue in real time, so that the operation can be completed quickly and accurately.
Further, the image data processing system further comprises: a surgical planning module 50. The surgical planning module 50 is configured to perform surgical planning based on the second three-dimensional model, or based on the first three-dimensional model and the second three-dimensional model. The surgical planning may include, among other things, determining cutting information or determining a puncture path.
For lung nodule biopsy and lung nodule ablation, the surgical planning includes determining a puncture path, where the puncture path may include a needle insertion position and a needle insertion direction. Specifically, the surgical planning module 50 may determine position information of a puncture target and then determine the needle insertion position and direction according to that position information; in doing so, care must be taken to avoid anatomical structures such as specific bones, blood vessels, and nerves. The puncture target refers to a region determined according to the lesion position, that is, the target position that the needle tip of the puncture instrument is expected to reach during the puncture operation. It should be understood that in lung nodule biopsy and lung nodule ablation the intraoperative tissue state is dynamic; therefore, puncture paths can be planned separately in the second three-dimensional models at different times of the dynamic process, and the resulting paths can be analyzed to determine the optimal puncture timing and the optimal puncture path. In particular, the paths can be compared by their distances to specific anatomical structures.
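A minimal sketch of comparing candidate puncture paths by their clearance to structures that must be avoided, assuming those structures are available as sampled surface points (all names, and the point-sampling idea itself, are illustrative assumptions):

```python
import numpy as np

def path_clearance(entry, target, obstacle_points, samples=50):
    """Minimum distance from a straight puncture path (entry -> target)
    to points sampled from anatomy to avoid (bone, vessel, nerve)."""
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    obstacle_points = np.asarray(obstacle_points, float)
    ts = np.linspace(0.0, 1.0, samples)
    pts = entry[None, :] + ts[:, None] * (target - entry)[None, :]
    # Pairwise distances between sampled path points and obstacle points.
    d = np.linalg.norm(pts[:, None, :] - obstacle_points[None, :, :], axis=2)
    return d.min()

def best_path(candidates, obstacle_points):
    """Pick the (entry, target) candidate with the largest clearance."""
    return max(candidates,
               key=lambda c: path_clearance(c[0], c[1], obstacle_points))

candidates = [((0.0, 0.0, 0.0), (2.0, 0.0, 0.0)),
              ((0.0, 3.0, 0.0), (2.0, 3.0, 0.0))]
critical = [(1.0, 0.5, 0.0)]  # e.g. a point sampled from a vessel wall
best = best_path(candidates, critical)
```

Running the same scoring over the second three-dimensional models at different times of the respiratory cycle would additionally yield the optimal puncture timing, as the passage describes.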
For thoracoscopic and open-chest resection, surgical planning involves determining cutting information. For example, for lung nodule resection, the surgical planning module 50 may divide the lung model into lobes and/or segments to obtain a lung lobe model and/or a lung segment model, determine the lung lobe or lung segment to be resected according to the division result and the size and position information of the lung nodule, and mark the lobe or segment to be resected to obtain the cutting information. Alternatively, the surgical planning module 50 may further calculate cutting information based on the lobe or segment determined for resection, where the cutting information includes cutting parameters; the cutting parameters may describe the length, volume, and similar properties of the cutting range, and may also include surgical path information related to the surgical approach.
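As an illustration of computing cutting parameters such as extent and volume, assuming the marked lobe or segment is available as a voxel mask (the voxel representation and all names are assumptions, not from the disclosure):

```python
import numpy as np

def cutting_parameters(segment_mask, voxel_size_mm):
    """Derive simple cutting parameters (resected volume and bounding-box
    extent) from a voxel mask of the lobe/segment marked for resection."""
    mask = np.asarray(segment_mask, bool)
    voxel_volume = float(np.prod(voxel_size_mm))
    volume_mm3 = mask.sum() * voxel_volume
    idx = np.argwhere(mask)  # coordinates of all marked voxels
    extent_mm = (idx.max(axis=0) - idx.min(axis=0) + 1) * np.asarray(voxel_size_mm, float)
    return {"volume_mm3": volume_mm3, "extent_mm": extent_mm}

# A fully marked 2x3x4 block of 1 mm isotropic voxels.
params = cutting_parameters(np.ones((2, 3, 4), bool), (1.0, 1.0, 1.0))
```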
Further, the image data processing system further comprises: a surgical navigation module 60. The surgical navigation module 60 is configured to perform surgical navigation based on the second three-dimensional model, or based on the first three-dimensional model and the second three-dimensional model. Fig. 4 is a schematic application diagram of a surgical navigation module according to an embodiment of the present invention. As shown in Fig. 4, the surgical navigation module 60 may be a head-mounted device; the operator wears the head-mounted device and performs the surgery according to the information it displays. The head-mounted device may be a mixed reality device, which can fuse virtual digital information into the real environment in the form of holographic images, achieving a seamless connection between the physical world and the digital world at any time and place. The virtual digital information includes, but is not limited to, 3D models and 3D animations as well as two-dimensional information such as images, videos, and text. For three-dimensional information in particular, the operator no longer needs to mentally convert and reconstruct complex spatial structure information, but can acquire the three-dimensional digital information directly, intuitively, and stereoscopically, which greatly improves the efficiency of acquiring three-dimensional information and reduces misunderstanding.
Specifically, fig. 5 is a schematic structural diagram of a surgical navigation module according to an embodiment of the present invention, and as shown in fig. 5, the surgical navigation module 60 includes: an entity data acquisition unit 601, a registration unit 602 and a display unit 603. The entity data acquisition unit 601 is used for acquiring entity data, wherein the entity data comprises intraoperative tissue data in an intraoperative tissue state; the registration unit 602 is configured to register the second three-dimensional model with the intraoperative tissue data to obtain a registration result; the display unit 603 is used for displaying the registration result.
In an embodiment of the present invention, the head-mounted device may be HoloLens mixed-reality glasses. The entity data acquisition unit 601 includes an optical sensor system. The registration unit 602 includes a position sensor system and an onboard computer: the position sensor system acquires position information of the device and of the entity, and the onboard computer stores a registration algorithm and registers the virtual data with the entity data based on that position information. The display unit 603 is a see-through stereoscopic display. The optical sensor system may include an optical sensor for detecting physical data in the real world along the line of sight the operator views through the see-through stereoscopic display. Alternatively, the optical sensor system may include a depth camera for collecting entity data in the form of point cloud model data; the onboard computer can then register the entity data with the virtual data using the point cloud model data, which yields higher registration accuracy. The position sensor system may include one or more position sensors, for example accelerometers, gyroscopes, magnetometers, a global positioning system, multi-point location trackers, or other sensors whose output can serve as the position, orientation, or motion of the associated sensor. The registration algorithm may be used to superimpose the imported virtual data onto the entity data; specifically, the onboard computer may implement the registration of virtual data and entity data through steps such as mark point identification and spatial coordinate conversion. The see-through stereoscopic display comprises a lens for holographic projection and a holographic processing subunit, and is used to display the virtual data, the entity data, and their registration result.
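The mark-point registration above can be sketched as a rigid Kabsch/Procrustes alignment, assuming the marker correspondences have already been identified (this is one standard realization of the "spatial coordinate conversion" step, not necessarily the algorithm the specification intends; names are illustrative):

```python
import numpy as np

def register_rigid(virtual_pts, entity_pts):
    """Rigid (rotation + translation) registration aligning virtual marker
    points to their entity counterparts via the Kabsch method.
    Assumes known point-to-point correspondence."""
    P = np.asarray(virtual_pts, float)
    Q = np.asarray(entity_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation (det = +1)
    t = cq - R @ cp
    return R, t

# Pure-translation example: virtual markers shifted by (1, 2, 3).
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
Q = P + np.array([1.0, 2.0, 3.0])
R, t = register_rigid(P, Q)
```

With a depth camera supplying point cloud model data instead of discrete markers, the same rigid step typically serves as the inner loop of an ICP-style refinement.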
In some embodiments, the virtual data may include guidance information and/or prompt information for guiding an operator to perform a surgery, such as a puncture path, a cutting parameter, and a surgical target determined during a surgical planning procedure.
Further, the surgical navigation module 60 also includes a tracking unit 604. The tracking unit 604 is used to track the position information of the surgical instrument, while the position sensor obtains the position information of the entity, where the entity may comprise the surgical target and/or a specific anatomical structure. The tracking unit 604 is configured to determine the spatial relationship between the surgical instrument and the entity according to their respective position information, and may further obtain prompt information corresponding to that spatial relationship; the prompt information indicates the reasonableness and risk of the current operation. The display unit 603 displays the prompt information to the operator; for example, the prompt information may include text, the display color of the surgical instrument, and the like. In other embodiments, the surgical navigation module 60 may further include an audio prompt unit 605, which presents the prompt information as sound.
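One hedged way prompt information could be derived from the tracked spatial relationship, with the instrument-to-structure distance thresholds chosen arbitrarily for illustration:

```python
def safety_prompt(distance_mm, warn_mm=10.0, danger_mm=3.0):
    """Map the instrument-to-structure distance reported by the tracking
    unit to a (text, instrument display color) prompt. Thresholds are
    illustrative placeholders, not values from the specification."""
    if distance_mm <= danger_mm:
        return ("danger: too close to critical structure", "red")
    if distance_mm <= warn_mm:
        return ("caution: approaching critical structure", "yellow")
    return ("path clear", "green")
```

The text component would go to the display unit 603 (or be spoken by the audio prompt unit 605), while the color component recolors the rendered instrument.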
Further, since the surgical navigation module 60 can acquire entity data representing the intraoperative tissue state, the model conversion module 40 may, when performing model conversion, convert the first three-dimensional model based on the entity data acquired by the entity data acquisition unit 601 to obtain the second three-dimensional model. For example, when the surgical plan is an open-chest resection, the model conversion module 40 may scale the first three-dimensional model based on the entity data collected by the entity data acquisition unit 601; specifically, it may scale the first three-dimensional model according to information such as the position and/or size of the tissue in the entity data to obtain the second three-dimensional model corresponding to the collapsed state.
In the embodiment of the present invention, the display unit 603 of the surgical navigation module 60 may be further configured to display the puncture path or the cutting information determined by the surgical planning module 50 and the second three-dimensional model converted by the model conversion module 40, so as to guide the operator to perform the surgery during the surgery.
In the technical scheme of the image data processing system provided by the embodiment of the invention, the model reconstruction module establishes the first three-dimensional model corresponding to the preoperative tissue state based on the image data of the preoperative tissue state, and the model conversion module converts it into the second three-dimensional model corresponding to the intraoperative tissue state; the second three-dimensional model can be used for intraoperative navigation, so the surgical navigation system can be applied to operations on organs greatly influenced by respiration and heartbeat. In the embodiment of the invention, different surgical plans determined by the surgical plan determining module correspond to different intraoperative tissue states, so the image data processing system and the surgical navigation system can be applied to different surgical plans, which helps simplify the surgical navigation system and reduce its cost.
Fig. 6 is a flowchart of an image data processing method according to an embodiment of the present invention, and as shown in fig. 6, the method includes:
step 102, the surgical plan determining module determines a surgical plan, different surgical plans corresponding to different intraoperative tissue states.
Step 104, the data acquisition module acquires image data of at least one preoperative tissue state, wherein the preoperative tissue state is different from the intraoperative tissue state.
Step 106, the model reconstruction module establishes a first three-dimensional model corresponding to the preoperative tissue state based on the image data of the preoperative tissue state.
Step 108, the model conversion module converts the first three-dimensional model corresponding to the preoperative tissue state into a second three-dimensional model corresponding to the intraoperative tissue state.
In this embodiment of the present invention, step 108 may specifically include: the model conversion module is used for converting the first three-dimensional model corresponding to the preoperative tissue state according to the model conversion mode corresponding to the surgical scheme to generate a second three-dimensional model corresponding to the intraoperative tissue state.
As an alternative, the model conversion mode corresponding to the surgical plan is an interpolation mode, and the model conversion module performs interpolation between the first three-dimensional models corresponding to different preoperative tissue states to obtain a plurality of second three-dimensional models corresponding to intraoperative tissue states. As another alternative, the model conversion mode corresponding to the surgical plan is a scaling mode, and the model conversion module performs scaling processing on the first three-dimensional model corresponding to the preoperative tissue state based on a preset scaling model to obtain the second three-dimensional model corresponding to the intraoperative tissue state.
In the embodiment of the present invention, the method further includes: the operation planning module performs operation planning based on the second three-dimensional model or performs operation planning based on the first three-dimensional model and the second three-dimensional model.
In the embodiment of the present invention, the method further includes: the operation navigation module performs operation navigation based on the second three-dimensional model or performs operation navigation based on the first three-dimensional model and the second three-dimensional model.
In the technical scheme of the image data processing method provided by the embodiment of the invention, the model reconstruction module establishes a first three-dimensional model corresponding to the preoperative tissue state based on the image data of the preoperative tissue state, the model conversion module converts the first three-dimensional model corresponding to the preoperative tissue state into a second three-dimensional model corresponding to the intraoperative tissue state, and the second three-dimensional model can be used for intraoperative navigation, so that the surgical navigation system can be applied to surgical operations on organs greatly influenced by respiration and heartbeat. In the embodiment of the invention, different operation schemes determined by the operation scheme determining module correspond to different intraoperative tissue states, so that the image data processing system and the operation navigation system can be applied to different operation schemes, the operation navigation system is facilitated to be simplified, and the cost of the operation navigation system is reduced.
Fig. 7 is a schematic structural diagram of a surgical navigation system according to an embodiment of the present invention, as shown in fig. 7, the surgical navigation system includes an image data processing system 100, and for specific description of the image data processing system 100, reference may be made to the description in the embodiments shown in fig. 1 to fig. 5, which is not repeated herein.
In the technical scheme of the surgical navigation system provided by the embodiment of the invention, the model reconstruction module establishes the first three-dimensional model corresponding to the preoperative tissue state based on the image data of the preoperative tissue state, the model conversion module converts the first three-dimensional model corresponding to the preoperative tissue state into the second three-dimensional model corresponding to the intraoperative tissue state, and the second three-dimensional model can be used for intraoperative navigation, so that the surgical navigation system can be applied to surgical operations on organs greatly influenced by respiration and heartbeat. In the embodiment of the invention, different operation schemes determined by the operation scheme determining module correspond to different intraoperative tissue states, so that the image data processing system and the operation navigation system can be applied to different operation schemes, the operation navigation system is facilitated to be simplified, and the cost of the operation navigation system is reduced.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (16)

1. An image data processing system, comprising:
the operation scheme determining module is used for determining operation schemes, and different operation schemes correspond to different intraoperative tissue states;
a data acquisition module for acquiring image data of at least one preoperative tissue state, the preoperative tissue state being different from the intraoperative tissue state;
the model reconstruction module is used for establishing a first three-dimensional model corresponding to the preoperative tissue state based on the image data of the preoperative tissue state;
and the model conversion module is used for converting the first three-dimensional model corresponding to the preoperative tissue state into a second three-dimensional model corresponding to the intraoperative tissue state.
2. The system of claim 1, wherein the model transformation module is configured to transform the first three-dimensional model corresponding to the preoperative tissue state to generate the second three-dimensional model corresponding to the intraoperative tissue state according to a model transformation manner corresponding to the surgical plan.
3. The system of claim 2, wherein the model transformation module is configured to interpolate between first three-dimensional models corresponding to different preoperative tissue states to obtain a plurality of second three-dimensional models corresponding to the intraoperative tissue states.
4. The system of claim 3, wherein the different preoperative tissue states include an end-inspiratory breath-hold state and an end-expiratory breath-hold state, and wherein the model transformation module is configured to interpolate between the first three-dimensional model corresponding to the end-inspiratory breath-hold state and the first three-dimensional model corresponding to the end-expiratory breath-hold state to obtain second three-dimensional models at different times during the process from the end-inspiratory breath-hold state to the end-expiratory breath-hold state.
5. The system of claim 2, wherein the model conversion manner corresponding to the surgical plan is a scaling manner, and the model conversion module is configured to scale the first three-dimensional model corresponding to the preoperative tissue state based on a preset scaling model to obtain the second three-dimensional model corresponding to the intraoperative tissue state.
6. The system of claim 1, further comprising:
and the operation planning module is used for performing operation planning based on the second three-dimensional model or performing operation planning based on the first three-dimensional model and the second three-dimensional model.
7. The system of claim 6, wherein the surgical planning comprises determining cutting information or determining a puncture path.
8. The system of claim 1, further comprising:
and the operation navigation module is used for performing operation navigation based on the second three-dimensional model or performing operation navigation based on the first three-dimensional model and the second three-dimensional model.
9. The system of claim 8, wherein the surgical navigation module comprises:
the entity data acquisition unit is used for acquiring entity data, and the entity data comprises intraoperative tissue data in an intraoperative tissue state;
the registration unit is used for registering the second three-dimensional model and the intraoperative tissue data to obtain a registration result;
and the display unit is used for displaying the registration result.
10. The system of claim 9, wherein the entity data acquisition unit comprises a depth camera for acquiring the entity data, wherein the entity data is point cloud model data.
11. The system of claim 1, further comprising: a surgical navigation module;
the operation navigation module is used for acquiring entity data for representing the state of the tissue in the operation process;
the model conversion module is used for converting the first three-dimensional model based on entity data to obtain the second three-dimensional model.
12. The system of claim 1, wherein the data acquisition module is configured to determine the preoperative tissue state based on the surgical plan and acquire image data of the preoperative tissue state based on the determined preoperative tissue state.
13. The system of any of claims 1 to 12, wherein the surgical plan comprises a transpulmonary biopsy or a pulmonary ablation procedure, wherein the at least one preoperative tissue state comprises an end-expiratory breath-hold state and an end-inspiratory breath-hold state, and wherein the intraoperative tissue state is a dynamic state from an end-expiratory breath-hold state to an end-inspiratory breath-hold state.
14. The system of any of claims 1 to 12, wherein the surgical plan includes thoracoscopic surgery or thoracotomy, the at least one preoperative tissue state includes an end-expiratory breath-hold state and/or an end-inspiratory breath-hold state, and the intraoperative tissue state is a collapsed state.
15. An image data processing method, comprising:
the operation scheme determining module determines operation schemes, wherein different operation schemes correspond to different intraoperative tissue states;
a data acquisition module acquires image data of at least one preoperative tissue state, the preoperative tissue state being different from the intraoperative tissue state;
the model reconstruction module establishes a first three-dimensional model corresponding to the preoperative tissue state based on the image data of the preoperative tissue state;
and the model conversion module converts the first three-dimensional model corresponding to the preoperative tissue state into a second three-dimensional model corresponding to the intraoperative tissue state.
16. A surgical navigation system, comprising: the image data processing system of any of the preceding claims 1 to 14.
CN202211440823.7A 2022-11-17 2022-11-17 Image data processing system, method and operation navigation system Pending CN115919462A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211440823.7A CN115919462A (en) 2022-11-17 2022-11-17 Image data processing system, method and operation navigation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211440823.7A CN115919462A (en) 2022-11-17 2022-11-17 Image data processing system, method and operation navigation system

Publications (1)

Publication Number Publication Date
CN115919462A true CN115919462A (en) 2023-04-07

Family

ID=86556702

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211440823.7A Pending CN115919462A (en) 2022-11-17 2022-11-17 Image data processing system, method and operation navigation system

Country Status (1)

Country Link
CN (1) CN115919462A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination