CN113610826A - Puncture positioning method and device, electronic device and storage medium - Google Patents

Puncture positioning method and device, electronic device and storage medium

Info

Publication number
CN113610826A
Authority
CN
China
Prior art keywords: image, coordinate system, preoperative, preoperative image, intraoperative
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110932297.5A
Other languages
Chinese (zh)
Inventor
王瑜
张欢
余航
刘恩佑
邹彤
黄文豪
张金
陈宽
王少康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Fanxiang Medical Technology Co ltd
Infervision Medical Technology Co Ltd
Original Assignee
Infervision Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infervision Medical Technology Co Ltd filed Critical Infervision Medical Technology Co Ltd
Priority to CN202110932297.5A priority Critical patent/CN113610826A/en
Publication of CN113610826A publication Critical patent/CN113610826A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30008 Bone

Abstract

The application discloses a puncture positioning method and device, an electronic device and a storage medium. The method comprises the following steps: acquiring a preoperative image according to an intraoperative image, wherein the intraoperative image is composed of a part of the preoperative image; generating a simulated puncture needle in the coordinate system of the preoperative image based on the puncture needle in the coordinate system of a navigation device, wherein the coordinate system of the preoperative image is the same as the coordinate system of the intraoperative image; and sending the simulated puncture needle located in the coordinate system of the intraoperative image to a front-end device so that the front-end device can locate the simulated puncture needle. In this way, the number of times the patient is imaged during the operation can be reduced, avoiding repeated radiation exposure.

Description

Puncture positioning method and device, electronic device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a puncture positioning method and apparatus, an electronic device, and a storage medium.
Background
The pathological diagnosis of lesion tissue provides important guidance for disease assessment. Clinically, puncture surgery is generally performed under CT (Computed Tomography) image guidance to acquire a pathological specimen. However, accurately locating the target region in a percutaneous puncture operation often requires the patient to be imaged many times during the operation, which not only makes the surgical workflow overly complicated but also exposes the patient to repeated radiation.
Disclosure of Invention
In view of the above, embodiments of the present application are directed to a puncture positioning method and apparatus, an electronic device, and a storage medium, which can reduce the number of times a patient is imaged during an operation and thereby avoid repeated radiation exposure.
According to a first aspect of embodiments of the present application, there is provided a puncture positioning method, including: acquiring a preoperative image according to an intraoperative image, wherein the intraoperative image is composed of a part of the preoperative image; generating a simulated puncture needle in a coordinate system of the preoperative image based on the puncture needle in the coordinate system of the navigation device, wherein the coordinate system of the preoperative image is the same as the coordinate system of the intraoperative image; and sending the simulated puncture needle positioned in the coordinate system of the intraoperative image to a front-end device so as to position the simulated puncture needle at the front-end device.
In some embodiments, the method further comprises: and receiving the intraoperative image sent by the image archiving and communication system.
In some embodiments, the acquiring a preoperative image from an intraoperative image includes: analyzing the intraoperative image to obtain patient information of the intraoperative image; sending the patient information to the image archiving and communication system, and inquiring whether the preoperative image exists; when the pre-operative image is present, extracting the pre-operative image in the image archiving and communication system.
In some embodiments, the analyzing the intraoperative image to obtain patient information for the intraoperative image comprises: performing data screening on the intraoperative image according to a preset configuration rule; and carrying out patient information identification on the screened intraoperative image to obtain the patient information of the intraoperative image.
In some embodiments, the patient information includes a patient name, a patient number, a patient gender, and/or a date of birth.
In some embodiments, the method further comprises: obtaining a segmentation result of a tissue organ in the intraoperative image through a network model according to the intraoperative image; performing three-dimensional reconstruction on the segmentation result to obtain a three-dimensional modeling result of the tissue and organ; and sending the three-dimensional modeling result to the front-end equipment so as to position the position of the simulation puncture needle in the three-dimensional modeling result.
In some embodiments, the generating a simulated needle in the coordinate system of the preoperative image based on the needle in the coordinate system of the navigation device comprises: converting a coordinate system of the navigation equipment and a coordinate system of a marker on the preoperative image to obtain a first conversion matrix; converting a coordinate system of a marker on the preoperative image and a coordinate system of the preoperative image to obtain a second conversion matrix; transforming the puncture needle into a coordinate system of the preoperative image according to the first transformation matrix and the second transformation matrix to generate the simulated puncture needle on the preoperative image.
In some embodiments, the converting the coordinate system of the marker on the preoperative image and the coordinate system of the preoperative image to obtain a second conversion matrix includes: translating the key points of the marker on the preoperative image and the corresponding key points of the standard segmentation; obtaining, according to a registration model, a rotation transformation matrix between the segmentation result of the marker and the standard segmentation; determining, according to the rotation transformation matrix, the corresponding coordinate values, in the coordinate system of the marker on the preoperative image, of preset points in the coordinate system of the preoperative image; determining the directions of the X, Y and Z axes of the coordinate system of the preoperative image relative to the X, Y and Z axes of the coordinate system of the marker on the preoperative image according to those coordinate values; and determining the second conversion matrix according to the directions.
In some embodiments, the registration model includes a plurality of cascaded spatial transformation networks, the rotation transformation matrix includes a plurality of rotation transformation matrices, and one spatial transformation network corresponds to one rotation transformation matrix, wherein the determining, according to the rotation transformation matrices, the corresponding coordinate values, in the coordinate system of the marker on the preoperative image, of preset points in the coordinate system of the preoperative image includes: calculating an input-to-output intersection ratio for each of the plurality of spatial transformation networks; and multiplying the rotation transformation matrices corresponding to the spatial transformation networks whose intersection ratios are smaller than the preset threshold by the coordinate values of the preset points in the coordinate system of the preoperative image, to obtain the corresponding coordinate values of the preset points in the coordinate system of the marker on the preoperative image.
In some embodiments, the preset points include midpoint, edge, and/or axis points of the marker.
According to a second aspect of embodiments of the present application, there is provided a puncture positioning device including: an acquisition module configured to acquire a preoperative image based on an intraoperative image, wherein the intraoperative image is composed of a portion of the preoperative image; a generation module configured to generate a simulated puncture needle in a coordinate system of the preoperative image based on the puncture needle in the coordinate system of a navigation device, wherein the coordinate system of the preoperative image is the same as the coordinate system of the intraoperative image; the first sending module is configured to send the simulated puncture needle located in the coordinate system of the intra-operative image to a front-end device so as to position the simulated puncture needle at the front-end device.
According to a third aspect of embodiments of the present application, there is provided an electronic apparatus, including: a processor; a memory for storing the processor-executable instructions; the processor is configured to perform the method according to any of the above embodiments.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium storing a computer program for executing the method of any of the above embodiments.
The embodiment of the application provides a puncture positioning method: first, a preoperative image is acquired according to an intraoperative image; then, a simulated puncture needle is generated in the coordinate system of the preoperative image based on the puncture needle in the coordinate system of the navigation device; finally, the simulated puncture needle located in the coordinate system of the intraoperative image is sent to the front-end device so that the front-end device can locate the simulated puncture needle. This can reduce the number of times the patient is imaged during the operation and thus avoid repeated radiation exposure.
Drawings
The above and other objects, features and advantages of the present application will become more apparent by describing in more detail embodiments of the present application with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic diagram illustrating an implementation environment provided by an embodiment of the present application.
Fig. 2 is a schematic flow chart of a puncture positioning method according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an intraoperative image provided in accordance with an embodiment of the present application.
Fig. 4 is a schematic view of a preoperative image provided in accordance with an embodiment of the present application.
Fig. 5 is a schematic flow chart illustrating a puncture positioning method according to another embodiment of the present application.
Fig. 6 is a schematic flow chart illustrating a puncture positioning method according to yet another embodiment of the present application.
Fig. 7 is a schematic flow chart illustrating a puncture positioning method according to still another embodiment of the present application.
Fig. 8 is a schematic diagram illustrating a coordinate system conversion process according to an embodiment of the present application.
Fig. 9 is a block diagram of a puncture positioning device according to an embodiment of the present application.
Fig. 10 is a block diagram illustrating an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Summary of the application
The pathological diagnosis of lesion tissue provides important guidance for disease assessment. Clinically, puncture surgery is usually performed under the guidance of a Computed Tomography (CT) image of the lung to obtain a pathological specimen for diagnosing the lesion. During the puncture surgery, the doctor needs to determine the target lesion and the puncture path.
In order to accurately locate the target area in a percutaneous puncture operation, the patient often has to be imaged many times during the operation, which complicates the surgical workflow and exposes the patient to repeated radiation.
Therefore, the embodiment of the application combines the intraoperative image (a local image) with the preoperative image (a global image), so that the target area can be accurately located after a single intraoperative image is taken, and a simulated puncture needle can be generated in the intraoperative image to achieve puncture positioning on the intraoperative image.
Having described the general principles of the present application, various non-limiting embodiments of the present application will now be described with reference to the accompanying drawings.
Exemplary System
FIG. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. The implementation environment includes a picture archiving and communication system 110, a data acquisition module 120, a data storage center 130, a prediction module 140, and a front-end 150. The data obtaining module 120 further includes an information pulling unit 121, a preprocessing unit 122, and a data sending unit 123.
The information pulling unit 121 may receive the intraoperative image sent by the image archiving and communication system 110, the preprocessing unit 122 performs data screening on the intraoperative image according to a preset configuration rule, performs patient information identification on the screened intraoperative image to obtain patient information of the intraoperative image, and finally sends the patient information to the image archiving and communication system 110 to query whether a preoperative image exists, and when the preoperative image exists, extracts the preoperative image from the image archiving and communication system 110. The data transmission unit 123 transmits the preoperative image to the data storage center 130, the data storage center 130 transmits the preoperative image and the prediction task corresponding to the preoperative image to the prediction module 140, the prediction module 140 performs the prediction task on the preoperative image to obtain a prediction result, and finally transmits the prediction result to the data storage center 130, and the data storage center 130 transmits the prediction result to the front-end device 150 after obtaining the prediction result.
In addition, the data transmission unit 123 may also transmit the intraoperative image to the data storage center 130, and the data storage center 130 combines the intraoperative image and the prediction result and transmits them to the front-end device 150.
In an embodiment, the front-end device 150 may be a user terminal or a server front end. When the front-end device 150 is a user terminal, the medical staff can directly read the intraoperative image and the prediction result on it. When the front-end device 150 is a server front end, the medical staff can access it by entering the corresponding web address in a browser to view the intraoperative image and the prediction result.
In an embodiment, at least one network model may be deployed in the prediction module 140 for segmenting different tissues and organs of the preoperative image, detecting a lesion in the preoperative image, registering the preoperative image and the navigation device, and the like, so as to obtain a prediction result corresponding to the intraoperative image. However, it should be noted that the embodiment of the present application does not specifically limit the type and number of the network models in the prediction module 140, and those skilled in the art may make different selections according to actual needs.
Exemplary method
Fig. 2 is a schematic flow chart of a puncture positioning method according to an embodiment of the present application. The method described in fig. 2 is performed by a computing device (e.g., a server), but the embodiments of the present application are not limited thereto. The server may be one server, or may be composed of a plurality of servers, or may be a virtualization platform, or a cloud computing service center, which is not limited in this embodiment of the present application. As shown in fig. 2, the method includes the following.
S210: acquiring a preoperative image according to an intraoperative image, wherein the intraoperative image is composed of a portion of the preoperative image.
In one embodiment, as shown in fig. 3, the intra-operative image may be a partial image of the chest taken during a chest surgery. As shown in fig. 4, the preoperative image may be a global image of the chest taken prior to a chest surgery. Thus, the intra-operative image may be a medical image comprising a local rib region, and the pre-operative image may be a medical image comprising a complete rib region, i.e. the intra-operative image is composed of partial pre-operative images.
For example, the local image and the global image may be Computed Tomography (CT) images, Digital Radiography (DR) images, or Magnetic Resonance Imaging (MRI) images, which is not limited in this embodiment.
However, the embodiments of the present application do not specifically limit which tissue or organ of the human body the preoperative image and the intraoperative image depict; they may be the chest images mentioned above, brain images or leg images.
S220: generating a simulated puncture needle in a coordinate system of the preoperative image based on the puncture needle in the coordinate system of the navigation device, wherein the coordinate system of the preoperative image is the same as the coordinate system of the intraoperative image.
In one embodiment, the navigation device is an electromagnetic navigation device, such as an NDI-standard device, which can locate an object in real time with millimeter precision within its magnetic field range. The coordinate system of the navigation device refers to the coordinate system defined by the navigation device itself, and the coordinate system of the preoperative image refers to the image's own coordinate system.
In one embodiment, the coordinate system of the preoperative image is the same as the coordinate system of the intraoperative image, since the preoperative image and the intraoperative image are both images formed by imaging the tissue and organ of the human body by using the same device (e.g., CT scanner, etc.).
In an embodiment, after the preoperative image and the navigation device are linked, a simulated puncture needle can be generated in the coordinate system of the preoperative image based on the puncture needle in the coordinate system of the navigation device, that is, the puncture needle in the coordinate system of the navigation device is a puncture needle operated by a medical worker, and the simulated puncture needle in the coordinate system of the preoperative image is a puncture needle in the preoperative image corresponding to the position of the puncture needle operated by the medical worker.
However, it should be noted that the embodiment of the present application does not specifically limit how to link the preoperative image and the navigation device, and those skilled in the art may make different selections according to actual needs.
S230: and sending the simulated puncture needle positioned in the coordinate system of the intraoperative image to a front-end device so as to position the simulated puncture needle at the front-end device.
In one embodiment, since the coordinate system of the preoperative image is the same as that of the intraoperative image, obtaining the simulated puncture needle in the coordinate system of the preoperative image in step S220 is equivalent to obtaining it in the coordinate system of the intraoperative image, and the position of the simulated puncture needle is the same in both images.
Therefore, the simulated puncture needle located in the coordinate system of the intraoperative image is sent to the front-end device so that the front-end device can locate it, which allows the medical staff to compare the simulated puncture needle with the puncture needle tracked by the navigation device and determine the target lesion and the puncture path.
As a result, the patient does not need to be imaged repeatedly during the operation; the target area can be accurately located with the help of the preoperative image, which simplifies the surgical workflow and reduces the radiation delivered to the patient.
In another embodiment of the present application, the method further comprises: and receiving the intraoperative image sent by the image archiving and communication system.
The embodiment of the present application employs a Picture Archiving and Communication System (PACS), a system used in hospital imaging departments. A PACS can digitally store the large volumes of medical images (e.g., CT images) generated in daily clinical work through various interfaces (e.g., DICOM), enabling automatic monitoring of newly acquired images.
In one embodiment, the intraoperative images may be acquired by a PACS system. Of course, the intraoperative image may also be acquired by a CT direct connection method, and the acquisition method of the intraoperative image is not particularly limited in the embodiment of the present application. Moreover, the acquired intraoperative images can be data in a common medical image format, such as images in a DICOM data format, so as to facilitate subsequent processing and storage of the intraoperative images.
In another embodiment, the extracted intra-operative images (i.e., a sequence of CT images) may also be directly sent to a data storage center using a Hypertext Transfer Protocol (HTTP) POST request, where the data storage center is configured to store the intra-operative images and send the intra-operative images and the simulated puncture needle to a front-end device to present the simulated puncture needle to a medical care provider for viewing together with the intra-operative images.
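As a non-authoritative illustration of this data path, the sketch below posts a locally stored CT series to a data storage center over HTTP; the endpoint URL and form-field names are assumptions rather than part of this disclosure:

```python
# Illustrative sketch only: upload an intraoperative CT series (a folder of
# DICOM slices) to a data storage center with an HTTP POST request.
import pathlib
import requests

STORAGE_URL = "http://storage-center.example/api/series"  # hypothetical endpoint

def post_ct_series(series_dir: str, series_uid: str) -> None:
    files = []
    for path in sorted(pathlib.Path(series_dir).glob("*.dcm")):
        # Each slice becomes one multipart part named "slices".
        files.append(("slices", (path.name, path.read_bytes(), "application/dicom")))
    resp = requests.post(STORAGE_URL, data={"series_uid": series_uid}, files=files)
    resp.raise_for_status()  # surface any rejection by the storage center
```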
In another embodiment of the present application, as shown in fig. 5, step S210 shown in fig. 2 includes the following.
S510: analyzing the intraoperative image to obtain patient information of the intraoperative image.
In an embodiment, the intraoperative image is analyzed to obtain patient information included in the intraoperative image, such as a patient name, a patient number, a patient gender, a birth date, and the like, and the patient information is not particularly limited in the embodiment of the present application.
S520: and sending the patient information to the image archiving and communication system, and inquiring whether the preoperative image exists or not.
In an embodiment, since the preoperative image and the intraoperative image are images for one patient, the patient information of the preoperative image and the intraoperative image are the same, and the analyzed patient information is sent to the PACS system, so that whether the preoperative image the same as the patient information included in the intraoperative image exists or not can be inquired.
S530: when the pre-operative image is present, extracting the pre-operative image in the image archiving and communication system.
In one embodiment, the patient information is sent to the PACS system to query whether there is a relevant image, i.e., a preoperative image. When a preoperative image is present, the preoperative image is extracted in the PACS system. When no preoperative image exists, extraction is not performed.
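The application does not specify the query interface; as one possible sketch, a standard DICOM C-FIND request keyed on the identified patient information could look as follows, with the host, port and AE title being assumed values:

```python
# Hedged sketch: query the PACS for prior (preoperative) studies of the same
# patient via DICOM C-FIND; all connection parameters are assumptions.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

def find_preoperative_studies(patient_id: str, host="127.0.0.1", port=11112):
    ae = AE(ae_title="PUNCTURE_SCU")
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

    query = Dataset()
    query.QueryRetrieveLevel = "STUDY"
    query.PatientID = patient_id
    query.StudyInstanceUID = ""      # ask the PACS to return matching study UIDs

    uids = []
    assoc = ae.associate(host, port)
    if assoc.is_established:
        for status, identifier in assoc.send_c_find(
                query, StudyRootQueryRetrieveInformationModelFind):
            if status and status.Status in (0xFF00, 0xFF01) and identifier:
                uids.append(identifier.StudyInstanceUID)
        assoc.release()
    return uids                      # an empty list means no preoperative image exists
```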
In one embodiment, in order to increase the computation speed and reduce the computation time, only one image sequence, for example a single CT series, may be acquired as the preoperative image. It should be understood that this sequence is the optimal sequence for characterizing the preoperative image.
In another embodiment, the extracted preoperative image (i.e., a sequence of CT images) may also be directly sent to a data storage center using a POST request of Hypertext Transfer Protocol (HTTP), the data storage center being configured to store the preoperative image and to send the preoperative image to a prediction module to predict the segmentation, detection and registration results of the preoperative image.
Therefore, the preoperative image and the intraoperative image are acquired through the PACS, so that the puncture positioning method provided by the embodiment of the application is automatically connected with the PACS, the data transmission is facilitated, and meanwhile, a doctor can be assisted in comparing a simulation puncture needle of the intraoperative image with a puncture needle in a navigation device, so that the puncture point is positioned.
In another embodiment of the present application, as shown in fig. 6, step S510 shown in fig. 5 includes the following.
S610: and performing data screening on the intraoperative image according to a preset configuration rule.
In one embodiment, before patient information identification is performed on the intra-operative image, data screening or filtering may be performed on the acquired intra-operative image according to a preset configuration rule to obtain a screened intra-operative image meeting requirements.
In one embodiment, the configuration rule may take the form of a configuration file. The configuration rule may include a format specification (i.e., requiring the DICOM format); for example, when the intraoperative image is in nii format, its data format is converted to DICOM. The configuration rule may also include a specification of image size; for example, the intraoperative image may be cropped when its size exceeds a predetermined size. It should be noted that the embodiment of the present application does not specifically limit the configuration rule, and those skilled in the art may make different selections according to actual needs.
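A minimal sketch of applying such rules, assuming illustrative rule values and DICOM CT input (nothing here is prescribed by the application):

```python
# Illustrative screening step (the rule values are assumptions): keep only
# DICOM CT slices and crop any slice that exceeds the configured in-plane size.
import pydicom

CONFIG = {"max_rows": 512, "max_cols": 512}   # example values from a configuration file

def screen_slice(path: str):
    ds = pydicom.dcmread(path)                # raises if the file is not valid DICOM
    if ds.Modality != "CT":
        return None                           # filtered out by the configuration rule
    pixels = ds.pixel_array
    # Crop, rather than reject, slices larger than the configured size.
    return pixels[: CONFIG["max_rows"], : CONFIG["max_cols"]]
```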
S620: and carrying out patient information identification on the screened intraoperative image to obtain the patient information of the intraoperative image.
In one embodiment, after the intraoperative images are subjected to data screening, screened intraoperative images (e.g., DICOM-formatted and appropriately sized intraoperative images) meeting the requirements are obtained, and patient information identification can be performed on the screened intraoperative images.
In an embodiment, Optical Character Recognition (OCR) may be used to identify the patient information; the identification method is not particularly limited in the embodiments of the present application. With OCR, the patient information displayed on the intraoperative image, such as the patient name, number, gender and date of birth, can be accurately identified.
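As a non-authoritative sketch of this OCR step (the application does not prescribe a particular OCR engine, and the label patterns below are assumptions):

```python
# Hedged OCR sketch: read patient text rendered on an intraoperative image.
import re
import pytesseract
from PIL import Image

FIELDS = {
    "name": r"Name[:：]\s*(\S+)",
    "patient_id": r"ID[:：]\s*(\S+)",
    "sex": r"Sex[:：]\s*(\S+)",
    "birth_date": r"Birth[:：]\s*(\S+)",
}

def extract_patient_info(image_path: str) -> dict:
    text = pytesseract.image_to_string(Image.open(image_path))
    info = {}
    for key, pattern in FIELDS.items():
        match = re.search(pattern, text)
        if match:
            info[key] = match.group(1)
    return info
```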
It can be seen that the patient information for acquiring the intraoperative images provides support for acquiring preoperative images in the PACS system.
In another embodiment of the present application, the method further comprises: obtaining a segmentation result of a tissue organ in the intraoperative image through a network model according to the intraoperative image; performing three-dimensional reconstruction on the segmentation result to obtain a three-dimensional modeling result of the tissue and organ; and sending the three-dimensional modeling result to the front-end equipment so as to position the position of the simulation puncture needle in the three-dimensional modeling result.
The embodiment of the present application does not specifically limit the type and number of the network models, different network models may be adopted for different tissues and organs of the intraoperative image to obtain the segmentation result of the tissue and organ, and the same network model may be adopted for different tissues and organs of the intraoperative image to obtain the segmentation result of the tissue and organ.
In one embodiment, a traditional model based on thresholding or on rules combined with the watershed method, or a deep-learning-based network model, can be used to obtain the segmentation result of the lung.
In an embodiment, the image corresponding to the lung segmentation result is cut into small blocks, each of which is segmented with ResUNet to obtain its bronchial segmentation result; finally, the bronchial segmentation results of all the small blocks are connected by region growing to form a complete bronchial segmentation result.
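The application names ResUNet and region growing but does not provide an implementation; the sketch below illustrates only the block-wise inference and stitching pattern, with `resunet` as a placeholder for any block-level segmentation model:

```python
# Hedged sketch of the block-wise pattern only: cut the lung volume into small
# blocks, run a segmentation model on each block, and stitch the outputs back
# into a full-volume mask. The region-growing connection step is not shown.
import numpy as np

def segment_in_blocks(volume: np.ndarray, resunet, block: int = 64) -> np.ndarray:
    out = np.zeros(volume.shape, dtype=np.uint8)
    zs, ys, xs = volume.shape
    for z in range(0, zs, block):
        for y in range(0, ys, block):
            for x in range(0, xs, block):
                patch = volume[z:z + block, y:y + block, x:x + block]
                out[z:z + block, y:y + block, x:x + block] = resunet(patch)
    return out
```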
In addition, the complete bronchial segmentation result can be utilized to divide the bronchus into 18 segments and 42 sub-segments by combining the topology structure of the bronchus.
In one embodiment, the images corresponding to the lung segmentation result are cut into small blocks and segmented with ResUNet, in which all blocks adopt a bottleneck structure; the lengths of the input data and output results are increased during training, and an LSTM module is added to smooth the interlobar fissures, yielding a lung fissure segmentation result for each small block. Then, the fissure segmentation results of all the small blocks are connected by region growing to form three complete lung fissure segmentation results.
In addition, the lung was segmented into 5 lobes by combining the lung segmentation result and the fissure segmentation result. Meanwhile, the lung is divided into 18 lung segments by using the results of 18 segments of the bronchus and combining the characteristics of the drainage basin.
In one embodiment, the segmented bronchi are dilated, the bronchial loss is propagated at the first level of the bronchial tree, and semi-supervised learning is performed at the intersection of the second-level bronchi and the vessel mask. Structural integrity is ensured by adopting a VAE scheme. The inputs to the model are the arterial and venous results (which contain errors), and the output of the model includes three branches: an anomaly detection branch, an artery modification branch and a vein modification branch; the pulmonary vessel segmentation result is output through this model.
In an embodiment, a traditional model based on thresholding or on rules combined with the watershed method, or a deep-learning-based network model, can be adopted to obtain the segmentation result of the epidermis.
In one embodiment, the UNet model is used to segment the bones, which are then classified into 6 kinds (ribs/vertebrae/scapula/clavicle/thoracic vertebrae/pelvis), and the ribs among them are further classified into 24 individual ribs.
Based on the above, after the segmentation results of the tissues and organs in the intraoperative image are obtained, the Marching Cubes algorithm is used to perform three-dimensional reconstruction on all of the segmentation results to obtain a three-dimensional modeling result of the tissues and organs. The three-dimensional modeling result is sent to the front-end device so that the position of the simulated puncture needle can be located within it.
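The application names the Marching Cubes algorithm but no particular implementation; the following is a minimal illustrative sketch assuming a binary organ mask and the scikit-image implementation:

```python
# Hedged sketch: reconstruct a surface mesh from a binary organ mask with the
# Marching Cubes algorithm, one organ per call.
import numpy as np
from skimage import measure

def reconstruct_mesh(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing)
    return verts, faces, normals   # vertices lie in the image coordinate system
```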
In another embodiment of the present application, as shown in fig. 7, step S220 shown in fig. 2 includes the following.
S710: converting the coordinate system of the navigation device and the coordinate system of the marker on the preoperative image to obtain a first conversion matrix.
In one embodiment, the marker is a navigation pad that can be connected to a navigation device. The marker is arranged on a manikin, which is arranged in a space with the navigation device.
In one embodiment, a preoperative image with markers can be formed by imaging a tissue organ of a human body. The coordinate system of the marker on the preoperative image refers to the coordinate system of the marker on the image, and the coordinate system of the marker on the preoperative image is the same as the coordinate system of the marker on the human body model.
In one embodiment, when the marker (locator card) is within the magnetic field of the navigation device, the navigation device can return the midpoint of the marker and a quaternion describing its attitude, and the first conversion matrix between the coordinate system of the navigation device and the coordinate system of the marker on the preoperative image can be obtained from this midpoint and quaternion.
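The exact assembly of this matrix is not spelled out in the application; the sketch below shows one standard construction from a midpoint and an attitude quaternion. The quaternion ordering and the mapping direction are assumptions that depend on the concrete navigation device:

```python
# Hedged sketch: assemble the first conversion matrix from the marker midpoint
# and attitude quaternion returned by the navigation device.
import numpy as np
from scipy.spatial.transform import Rotation

def first_conversion_matrix(midpoint, quat_wxyz):
    w, x, y, z = quat_wxyz                                    # assumed (w, x, y, z) ordering
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat([x, y, z, w]).as_matrix()  # scipy expects (x, y, z, w)
    T[:3, 3] = midpoint   # translation: the marker midpoint in navigation coordinates
    return T              # maps marker-frame points into the navigation frame; invert for the reverse
```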
S720: and converting the coordinate system of the marker on the preoperative image and the coordinate system of the preoperative image to obtain a second conversion matrix.
In one embodiment, the key points of the marker on the preoperative image and the corresponding key points of the standard segmentation are first translated so that the two sets of key points coincide. The registration model then applies a rotation transformation between the X, Y and Z axes of the coordinate system of the marker on the preoperative image and the X, Y and Z axes of the coordinate system of the standard segmentation, so as to determine the rotation angles of the X, Y and Z axes of the marker coordinate system relative to the X, Y and Z axes of the coordinate system of the preoperative image, thereby obtaining the second conversion matrix.
The registration model can be obtained by training a Spatial Transformer Network (STN), and the Spatial transformation Network can perform Spatial transformation on the segmentation result of the marker on the preoperative image by taking the standard segmentation as a reference, and output a new image, thereby realizing the registration of the segmentation result of the marker and the standard segmentation.
In one embodiment, the coordinate system of the preoperative image is the same as the coordinate system of the standard segmentation, i.e., X, Y and the Z-axis in the coordinate system of the preoperative image are in the same direction as X, Y and the Z-axis in the coordinate system of the standard segmentation.
S730: transforming the puncture needle into a coordinate system of the preoperative image according to the first transformation matrix and the second transformation matrix to generate the simulated puncture needle on the preoperative image.
As shown in fig. 8, the navigation coordinate system and the marker coordinate system can be converted using the first conversion matrix, and the marker coordinate system and the preoperative image coordinate system can be converted using the second conversion matrix. Therefore, in order to link the three-dimensional modeling result of the tissue and organ with the navigation device, the coordinate system of the navigation device is associated with the coordinate system of the preoperative image through an intermediate medium, namely the marker coordinate system in the navigation device.
In one embodiment, the location of the needle in the coordinate system of the marker may be determined based on the first transformation matrix, and the needle may be transformed to the coordinate system of the preoperative image based on the second transformation matrix and the location of the needle in the coordinate system of the marker to generate a simulated needle on the preoperative image.
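As a purely illustrative sketch of this composition (representing the needle by its tip and tail points, and all names, are assumptions):

```python
# Hedged sketch: compose the two conversion matrices and map the physical
# needle (tip and tail in navigation coordinates) into the preoperative-image
# coordinate system to generate the simulated puncture needle.
import numpy as np

def simulate_needle(tip_nav, tail_nav, T_nav_to_marker, T_marker_to_image):
    T_nav_to_image = T_marker_to_image @ T_nav_to_marker
    def to_image(p):
        p_h = np.append(np.asarray(p, dtype=float), 1.0)   # homogeneous coordinates
        return (T_nav_to_image @ p_h)[:3]
    return to_image(tip_nav), to_image(tail_nav)            # simulated needle endpoints
```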
Therefore, the three-dimensional modeling result of the tissue and the organ after the three-dimensional modeling of the image in the operation and the navigation device where the puncture needle is located can be accurately and quickly registered through the registration model.
In another embodiment of the present application, step S720 in the method shown in fig. 7 includes: translating the key points of the marker on the preoperative image and the corresponding key points of the standard segmentation; obtaining, according to a registration model, a rotation transformation matrix between the segmentation result of the marker and the standard segmentation; determining, according to the rotation transformation matrix, the corresponding coordinate values, in the coordinate system of the marker on the preoperative image, of preset points in the coordinate system of the preoperative image; determining the directions of the X, Y and Z axes of the coordinate system of the preoperative image relative to the X, Y and Z axes of the coordinate system of the marker on the preoperative image according to those coordinate values; and determining the second conversion matrix according to the directions.
Spatial transformation networks in the prior art perform full three-dimensional affine transformations, i.e., translation, scaling and rotation. In the present application, in order to improve the registration efficiency of the registration model, the translation and scaling transformations are performed before the segmentation result of the marker is input into the registration model, so that the registration model only needs to learn the rotation transformation; that is, the output of the localization network in the spatial transformation network is only the rotation angle.
In an embodiment, the coordinate values of the preset points in the coordinate system of the preoperative image are multiplied by the rotation transformation matrix, so as to obtain the corresponding coordinate values of the preset points in the coordinate system of the preoperative image in the coordinate system of the marker on the preoperative image.
In an embodiment, these corresponding coordinate values are the new coordinate values of the preset points; that is, through the rotation conversion, the preset points in the coordinate system of the preoperative image are transformed into a new coordinate system, namely the coordinate system of the marker on the preoperative image.
However, it should be noted that the embodiment of the present application does not specifically limit the type and number of the preset points, and the preset points may include a midpoint, an edge point and/or an axial point of the marker.
In one embodiment, the midpoint and the edge point define the direction of the X axis, the midpoint and the axis point define the direction of the Y axis, and the Y axis is cross-multiplied with the X axis to obtain the Z axis. Therefore, from the new coordinate values of the midpoint, edge point and/or axis point, the directions of the X, Y and Z axes of the coordinate system of the preoperative image relative to the X, Y and Z axes of the coordinate system of the marker on the preoperative image can be obtained.
In one embodiment, the second conversion matrix is composed of the directions of the X, Y and Z axes of the coordinate system of the preoperative image relative to the X, Y and Z axes of the coordinate system of the marker on the preoperative image. Alternatively, the second conversion matrix is composed of these axis directions together with the midpoint of the marker on the preoperative image. This is not particularly limited in the embodiments of the present application.
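Following the axis construction described above, and purely as an illustrative sketch, the second conversion matrix could be assembled from the rotated preset points as follows; the axis conventions are assumptions, and the inclusion of the midpoint follows the optional variant mentioned above:

```python
# Hedged sketch: build the second conversion matrix from the rotated midpoint,
# edge point and axis point of the marker.
import numpy as np

def second_conversion_matrix(midpoint, edge_point, axis_point):
    midpoint, edge_point, axis_point = (
        np.asarray(p, dtype=float) for p in (midpoint, edge_point, axis_point))
    x_axis = edge_point - midpoint                 # midpoint and edge point give X
    x_axis = x_axis / np.linalg.norm(x_axis)
    y_axis = axis_point - midpoint                 # midpoint and axis point give Y
    y_axis = y_axis / np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)              # Z from the cross product of X and Y
    z_axis = z_axis / np.linalg.norm(z_axis)

    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2] = x_axis, y_axis, z_axis
    T[:3, 3] = midpoint                            # variant that also includes the marker midpoint
    return T
```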
In another embodiment of the present application, the registration model includes a plurality of cascaded spatial transformation networks, the rotation transformation matrix includes a plurality of rotation transformation matrices, and one spatial transformation network corresponds to one rotation transformation matrix. Determining, according to the rotation transformation matrices, the corresponding coordinate values, in the coordinate system of the marker on the preoperative image, of preset points in the coordinate system of the preoperative image includes: calculating an input-to-output intersection ratio for each of the plurality of spatial transformation networks; and multiplying the rotation transformation matrices corresponding to the spatial transformation networks whose intersection ratios are smaller than the preset threshold by the coordinate values of the preset points in the coordinate system of the preoperative image, to obtain the corresponding coordinate values of the preset points in the coordinate system of the marker on the preoperative image.
In one embodiment, if the number of the plurality of spatial transform networks is N, one spatial transform network corresponds to one input and one output, and one rotation transform matrix can be generated for each spatial transform network.
In one embodiment, by calculating the input-to-output cross-over ratios of the N spatial transform networks, N results may be obtained.
In one embodiment, the N results are compared with the preset threshold in order. When the intersection ratio of a certain spatial transformation network (for example, the 10th spatial transformation network) is greater than or equal to the preset threshold, the comparison stops, and the rotation transformation matrices corresponding to the 1st through 10th spatial transformation networks (A1 to A10) are sequentially multiplied by the coordinate value (x, y, z, 1) of the preset point in the coordinate system of the preoperative image, i.e., (x, y, z, 1) × A1 × A2 × … × A10, to obtain the corresponding coordinate value of the preset point in the coordinate system of the marker on the preoperative image.
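As an illustrative sketch of the sequential variant just described (the threshold value and the 4x4 matrix representation are assumptions):

```python
# Hedged sketch: accumulate the rotation matrices A1, A2, ... of the cascaded
# spatial transformation networks until one network's input/output
# intersection-over-union reaches the preset threshold.
import numpy as np

def transform_preset_point(point_xyz, rotation_matrices, ious, threshold=0.95):
    p = np.append(np.asarray(point_xyz, dtype=float), 1.0)    # (x, y, z, 1)
    for A, iou in zip(rotation_matrices, ious):                # A: 4x4 rotation transformation matrix
        p = p @ A                                              # matches (x, y, z, 1) x A1 x A2 x ...
        if iou >= threshold:
            break                                              # stop comparing once the threshold is reached
    return p[:3]   # coordinates of the preset point in the marker coordinate system
```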
In an embodiment, the N results are respectively compared with a preset threshold, it is determined which spatial transformation networks of the N spatial transformation networks correspond to intersection ratios greater than or equal to the preset threshold, and the rotational transformation matrices corresponding to the spatial transformation networks whose intersection ratios are greater than or equal to the preset threshold are sequentially multiplied by coordinate values of preset points in the coordinate system of the preoperative image, so as to obtain coordinate values corresponding to the preset points in the coordinate system of the preoperative image in the coordinate system of the marker.
However, it should be noted that, in the embodiment of the present application, the manner of selecting the spatial transformation networks whose intersection ratios are smaller than the preset threshold is not specifically limited, and those skilled in the art may make different selections according to actual requirements.
Exemplary devices
The embodiment of the device can be used for executing the embodiment of the method. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 9 is a block diagram of a puncture positioning device according to an embodiment of the present application. As shown in fig. 9, the apparatus 900 includes:
an obtaining module 910 configured to obtain a preoperative image according to an intraoperative image, wherein the intraoperative image is composed of a portion of the preoperative image;
a generating module 920 configured to generate a simulated puncture needle in a coordinate system of the preoperative image based on the puncture needle in the coordinate system of the navigation device, wherein the coordinate system of the preoperative image is the same as the coordinate system of the intraoperative image;
a first sending module 930 configured to send the simulated puncture needle located in the coordinate system of the intra-operative image to a front-end device so as to position the simulated puncture needle at the front-end device.
In another embodiment of the present application, the apparatus further comprises: and the information pulling module is configured to receive the intraoperative image sent by the image archiving and communication system.
In another embodiment of the present application, the obtaining module 910 is further configured to: analyzing the intraoperative image to obtain patient information of the intraoperative image; sending the patient information to the image archiving and communication system, and inquiring whether the preoperative image exists; when the pre-operative image is present, extracting the pre-operative image in the image archiving and communication system.
In another embodiment of the present application, the obtaining module 910, when analyzing the intra-operative image to obtain the patient information of the intra-operative image, is further configured to: performing data screening on the intraoperative image according to a preset configuration rule; and carrying out patient information identification on the screened intraoperative image to obtain the patient information of the intraoperative image.
In another embodiment of the present application, the patient information includes patient name, patient number, patient gender, and/or date of birth.
In another embodiment of the present application, the apparatus further comprises: the segmentation module is configured to obtain a segmentation result of a tissue organ in the intraoperative image through a network model according to the intraoperative image; a reconstruction module configured to perform three-dimensional reconstruction on the segmentation result to obtain a three-dimensional modeling result of the tissue organ; a second sending module configured to send the three-dimensional modeling result to the front-end device to locate the position of the simulated puncture needle in the three-dimensional modeling result.
In another embodiment of the present application, the generation module 920 is further configured to: converting a coordinate system of the navigation equipment and a coordinate system of a marker on the preoperative image to obtain a first conversion matrix; converting a coordinate system of a marker on the preoperative image and a coordinate system of the preoperative image to obtain a second conversion matrix; transforming the puncture needle into a coordinate system of the preoperative image according to the first transformation matrix and the second transformation matrix to generate the simulated puncture needle on the preoperative image.
In another embodiment of the present application, the generating module 920, when converting the coordinate system of the marker on the preoperative image and the coordinate system of the preoperative image to obtain the second conversion matrix, is further configured to: translate the key points of the marker on the preoperative image and the corresponding key points of the standard segmentation; obtain, according to a registration model, a rotation transformation matrix between the segmentation result of the marker and the standard segmentation; determine, according to the rotation transformation matrix, the corresponding coordinate values, in the coordinate system of the marker on the preoperative image, of preset points in the coordinate system of the preoperative image; determine the directions of the X, Y and Z axes of the coordinate system of the preoperative image relative to the X, Y and Z axes of the coordinate system of the marker on the preoperative image according to those coordinate values; and determine the second conversion matrix according to the directions.
In another embodiment of the present application, the registration model includes a plurality of cascaded spatial transformation networks, the rotation transformation matrix includes a plurality of rotation transformation matrices, and one spatial transformation network corresponds to one rotation transformation matrix; the generating module 920, when determining, according to the rotation transformation matrices, the corresponding coordinate values, in the coordinate system of the marker on the preoperative image, of a preset point in the coordinate system of the preoperative image, is further configured to: calculate an input-to-output intersection ratio for each of the plurality of spatial transformation networks; and multiply the rotation transformation matrices corresponding to the spatial transformation networks whose intersection ratios are smaller than the preset threshold by the coordinate values of the preset point in the coordinate system of the preoperative image, to obtain the corresponding coordinate values of the preset point in the coordinate system of the marker on the preoperative image.
In another embodiment of the present application, the preset points comprise midpoint, edge points and/or axis points of the marker.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present application is described with reference to fig. 10. FIG. 10 illustrates a block diagram of an electronic device in accordance with an embodiment of the present application.
As shown in fig. 10, the electronic device 1000 includes one or more processors 1010 and memory 1020.
The processor 1010 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 1000 to perform desired functions.
Memory 1020 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by processor 1010 to implement the puncture location methods of the various embodiments of the present application described above and/or other desired functions. Various contents such as the first conversion matrix and the second conversion matrix may also be stored in the computer-readable storage medium.
In one example, the electronic device 1000 may further include: an input device 1030 and an output device 1040, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 1030 may be, for example, a microphone or a microphone array as described above for capturing an input signal of a sound source. When the electronic device is a stand-alone device, the input device 1030 may be a communication network connector.
The input device 1030 may also include, for example, a keyboard, a mouse, and the like.
The output device 1040 may output various information including the determined rotation transformation matrix to the outside. The output devices 1040 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device 1000 relevant to the present application are shown in fig. 10, and components such as buses, input/output interfaces, and the like are omitted. In addition, the electronic device 1000 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present application may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the puncture location method according to various embodiments of the present application described in the "exemplary methods" section above of this specification.
The computer program product may be written with program code for performing the operations of embodiments of the present application in any combination of one or more programming languages, including an object-oriented programming language such as Java or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present application may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform the steps of the puncture positioning method according to various embodiments of the present application described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description has been presented for purposes of illustration and description. Furthermore, the description is not intended to limit embodiments of the application to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.
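For orientation only, the overall transformation chain described above — mapping a tracked puncture-needle point from the coordinate system of the navigation device, through the coordinate system of the marker, into the coordinate system of the preoperative image — might be composed as in the sketch below. The direction of each conversion matrix and the 4x4 homogeneous convention are assumptions of this sketch rather than requirements of the present application.

```python
import numpy as np

def to_homogeneous(point):
    """Append the homogeneous coordinate 1 to a 3D point."""
    return np.append(np.asarray(point, dtype=float), 1.0)

def simulate_needle_point(point_navigation, first_conversion, second_conversion):
    """Map one point of the tracked puncture needle from the navigation-device
    coordinate system into the preoperative-image coordinate system, assuming
    `first_conversion` maps navigation coordinates to marker coordinates and
    `second_conversion` maps marker coordinates to preoperative-image coordinates
    (both 4x4 homogeneous matrices)."""
    point_image_h = second_conversion @ first_conversion @ to_homogeneous(point_navigation)
    return point_image_h[:3] / point_image_h[3]

# Quick sanity check with placeholder identity transforms: the point is unchanged.
if __name__ == "__main__":
    print(simulate_needle_point([10.0, 5.0, 0.0], np.eye(4), np.eye(4)))  # [10.  5.  0.]
```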

Claims (12)

1. A puncture positioning method, comprising:
acquiring a preoperative image according to an intraoperative image, wherein the intraoperative image is composed of a part of the preoperative image;
generating a simulated puncture needle in a coordinate system of the preoperative image based on the puncture needle in the coordinate system of the navigation device, wherein the coordinate system of the preoperative image is the same as the coordinate system of the intraoperative image;
and sending the simulated puncture needle positioned in the coordinate system of the intraoperative image to a front-end device so as to position the simulated puncture needle at the front-end device.
2. The method of claim 1, further comprising:
receiving the intraoperative image transmitted by an image archiving and communication system,
wherein acquiring the preoperative image according to the intraoperative image comprises:
analyzing the intraoperative image to obtain patient information of the intraoperative image;
sending the patient information to the image archiving and communication system, and inquiring whether the preoperative image exists;
when the preoperative image exists, extracting the preoperative image from the image archiving and communication system.
3. The method of claim 2, wherein analyzing the intra-operative image to obtain patient information for the intra-operative image comprises:
performing data screening on the intraoperative image according to a preset configuration rule;
and carrying out patient information identification on the screened intraoperative image to obtain the patient information of the intraoperative image.
4. The method of claim 2, wherein the patient information comprises patient name, patient number, patient gender, and/or date of birth.
5. The method of claim 1, further comprising:
obtaining a segmentation result of a tissue and organ in the intraoperative image through a network model according to the intraoperative image;
performing three-dimensional reconstruction on the segmentation result to obtain a three-dimensional modeling result of the tissue and organ;
and sending the three-dimensional modeling result to the front-end device so as to locate the position of the simulated puncture needle in the three-dimensional modeling result.
6. The method of any one of claims 1 to 5, wherein generating a simulated needle in the coordinate system of the preoperative image based on a needle in the coordinate system of a navigation device comprises:
converting a coordinate system of the navigation device and a coordinate system of a marker on the preoperative image to obtain a first conversion matrix;
converting a coordinate system of a marker on the preoperative image and a coordinate system of the preoperative image to obtain a second conversion matrix;
and transforming the puncture needle into the coordinate system of the preoperative image according to the first conversion matrix and the second conversion matrix to generate the simulated puncture needle on the preoperative image.
7. The method of claim 6, wherein converting the coordinate system of the marker on the preoperative image and the coordinate system of the preoperative image to obtain the second conversion matrix comprises:
translating the key points of the marker on the preoperative image to the corresponding key points of a standard segmentation;
according to a registration model, obtaining a rotation transformation matrix between the segmentation result of the marker and the standard segmentation;
determining, according to the rotation transformation matrix, the corresponding coordinate values, in the coordinate system of the marker on the preoperative image, of the preset points in the coordinate system of the preoperative image;
determining directions of the X, Y, and Z axes of the coordinate system of the preoperative image relative to directions of the X, Y, and Z axes of the coordinate system of the marker on the preoperative image, according to the corresponding coordinate value, in the coordinate system of the marker on the preoperative image, of a preset point in the coordinate system of the preoperative image;
and determining the second conversion matrix according to the directions.
8. The method of claim 7, wherein the registration model comprises a plurality of cascaded spatial transformation networks, the rotation transformation matrix comprises a plurality of rotation transformation matrices, and one spatial transformation network corresponds to one rotation transformation matrix,
and wherein determining, according to the rotation transformation matrices, the coordinate value, in the coordinate system of the marker on the preoperative image, corresponding to a preset point in the coordinate system of the preoperative image comprises:
calculating an intersection-over-union ratio between the input and the output of each of the plurality of spatial transformation networks;
and multiplying the rotation transformation matrices corresponding to the spatial transformation networks whose intersection-over-union ratio is smaller than a preset threshold by the coordinate value of the preset point in the coordinate system of the preoperative image, to obtain the corresponding coordinate value of the preset point in the coordinate system of the marker on the preoperative image.
9. The method of claim 7, wherein the preset points comprise the midpoint, edge points, and/or axis points of the marker.
10. A puncture positioning device, comprising:
an acquisition module configured to acquire a preoperative image based on an intraoperative image, wherein the intraoperative image is composed of a portion of the preoperative image;
a generation module configured to generate a simulated puncture needle in a coordinate system of the preoperative image based on the puncture needle in the coordinate system of a navigation device, wherein the coordinate system of the preoperative image is the same as the coordinate system of the intraoperative image;
the first sending module is configured to send the simulated puncture needle located in the coordinate system of the intra-operative image to a front-end device so as to position the simulated puncture needle at the front-end device.
11. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to perform the method of any one of claims 1 to 9.
12. A computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, cause the processor to perform the method of any of claims 1 to 9.
CN202110932297.5A 2021-08-13 2021-08-13 Puncture positioning method and device, electronic device and storage medium Pending CN113610826A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110932297.5A CN113610826A (en) 2021-08-13 2021-08-13 Puncture positioning method and device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN113610826A true CN113610826A (en) 2021-11-05

Family

ID=78340690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110932297.5A Pending CN113610826A (en) 2021-08-13 2021-08-13 Puncture positioning method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113610826A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103356284A (en) * 2012-04-01 2013-10-23 中国科学院深圳先进技术研究院 Surgical navigation method and system
CN102999902A (en) * 2012-11-13 2013-03-27 上海交通大学医学院附属瑞金医院 Optical navigation positioning system based on CT (computed tomography) registration results and navigation method thereby
US20200337777A1 (en) * 2018-01-11 2020-10-29 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for surgical route planning
CN110025379A (en) * 2019-05-07 2019-07-19 新博医疗技术有限公司 A kind of ultrasound image and CT image co-registration real-time navigation system and method
CN110264504A (en) * 2019-06-28 2019-09-20 北京国润健康医学投资有限公司 A kind of three-dimensional registration method and system for augmented reality
CN110464459A (en) * 2019-07-10 2019-11-19 丽水市中心医院 Intervention plan navigation system and its air navigation aid based on CT-MRI fusion
CN111281540A (en) * 2020-03-09 2020-06-16 北京航空航天大学 Real-time visual navigation system based on virtual-actual fusion in minimally invasive surgery of orthopedics department
CN111870344A (en) * 2020-05-29 2020-11-03 中山大学肿瘤防治中心(中山大学附属肿瘤医院、中山大学肿瘤研究所) Preoperative navigation method, system and terminal equipment

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114129240A (en) * 2021-12-02 2022-03-04 推想医疗科技股份有限公司 Method, system and device for generating guide information and electronic equipment
CN116433874A (en) * 2021-12-31 2023-07-14 杭州堃博生物科技有限公司 Bronchoscope navigation method, device, equipment and storage medium
CN114831731A (en) * 2022-05-17 2022-08-02 真健康(北京)医疗科技有限公司 Method, device and system suitable for lung focus positioning in operating room
CN114831731B (en) * 2022-05-17 2022-09-02 真健康(北京)医疗科技有限公司 Operation navigation equipment and system suitable for lung focus positioning in operating room

Similar Documents

Publication Publication Date Title
JP6595193B2 (en) Interpretation report creation device and interpretation report creation system
CN113610826A (en) Puncture positioning method and device, electronic device and storage medium
US8744149B2 (en) Medical image processing apparatus and method and computer-readable recording medium for image data from multiple viewpoints
JP6490985B2 (en) Medical image processing apparatus and medical image processing system
US8334878B2 (en) Medical image processing apparatus and medical image processing program
US8953856B2 (en) Method and system for registering a medical image
US8903147B2 (en) Medical report generation apparatus, method and program
WO2021082416A1 (en) Network model training method and device, and focus area determination method and device
US20120183188A1 (en) Medical image display apparatus, method, and program
US20170221204A1 (en) Overlay Of Findings On Image Data
JP2008259682A (en) Section recognition result correcting device, method and program
JP2010075403A (en) Information processing device and method of controlling the same, data processing system
CN109887077B (en) Method and apparatus for generating three-dimensional model
JP2019169049A (en) Medical image specification device, method, and program
CN115005981A (en) Surgical path planning method, system, equipment, medium and surgical operation system
US11327773B2 (en) Anatomy-aware adaptation of graphical user interface
CN111583249B (en) Medical image quality monitoring system and method
CN113610824A (en) Puncture path planning method and device, electronic device and storage medium
JP2023175011A (en) Document creation assistance device, method, and program
JP5539478B2 (en) Information processing apparatus and information processing method
CN113487656A (en) Image registration method and device, training method and device, control method and device
JP2024009342A (en) Document preparation supporting device, method, and program
CN109350059A (en) For ancon self-aligning combined steering engine and boundary mark engine
JP5252263B2 (en) Medical image analysis system interconnecting three-dimensional image display devices with pre-processing devices based on analysis protocols
JP6734111B2 (en) Finding information creation device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211216
Address after: 430030 unit 12, floor 40, Yuexiu wealth center, No. 1, Zhongshan Avenue, Qiaokou District, Wuhan, Hubei Province (Building 6, plot a of Qiaokou Golden Triangle)
Applicant after: Wuhan Fanxiang Medical Technology Co.,Ltd.
Applicant after: Tuxiang Medical Technology Co., Ltd
Address before: Room B401, 4 / F, building 1, No. 12, shangdixin Road, Haidian District, Beijing 100085
Applicant before: Tuxiang Medical Technology Co., Ltd