CN111466952B - Real-time conversion method and system for ultrasonic endoscope and CT three-dimensional image


Info

Publication number
CN111466952B
CN111466952B
Authority
CN
China
Legal status
Active
Application number
CN202010338096.8A
Other languages
Chinese (zh)
Other versions
CN111466952A (en)
Inventor
刘晓
刘振
Current Assignee
Beijing Chaoyang Hospital
Original Assignee
Beijing Chaoyang Hospital
Priority date
2020-04-26
Filing date
2020-04-26
Publication date
2023-03-31
Application filed by Beijing Chaoyang Hospital
Priority to CN202010338096.8A
Publication of CN111466952A
Application granted
Publication of CN111466952B
Status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B 8/5238 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
    • A61B 8/5261 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, combining images from different diagnostic modalities, e.g. ultrasound and X-ray
    • A61B 6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B 6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03 Computed tomography [CT]
    • A61B 6/032 Transmission computed tomography [CT]
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B 6/5229 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, e.g. combining a functional image with an anatomical image
    • A61B 6/5247 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data combining image data of a patient, combining images from an ionising-radiation diagnostic technique and a non-ionising radiation diagnostic technique, e.g. X-ray and ultrasound

Abstract

The invention relates to a method and system for real-time conversion between ultrasonic endoscope images and CT three-dimensional images, comprising the following steps: acquiring a two-dimensional ultrasonic endoscope image and marking the zero scale in the two-dimensional image; acquiring a CT three-dimensional reconstructed image and marking the zero scale in the CT three-dimensional reconstructed image; matching the zero scale of the two-dimensional ultrasonic endoscope image with the zero scale of the CT three-dimensional reconstructed image, and establishing a conversion model between the two zero scales; and converting the zero scale of the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the conversion model, then converting the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the mapping relationship between the two-dimensional and three-dimensional images, thereby generating a real-time CT three-dimensional image for the ultrasonic endoscope. The method combines two-dimensional ultrasonic endoscope images with CT three-dimensional reconstruction through deep learning, enabling real-time imaging and navigation with the CT three-dimensional structure during mediastinal endoscopic ultrasonography.

Description

Real-time conversion method and system for ultrasonic endoscope and CT three-dimensional image
Technical Field
The invention relates to a method and system for real-time conversion between ultrasonic endoscope images and CT three-dimensional images, and belongs to the technical field of medical devices.
Background
With the advent and adoption of ultrasonic endoscopes, it has become possible to observe lesions within the wall of the digestive tract directly and, by bringing the probe as close as possible to the esophageal wall, to visualize structures outside the esophageal lumen. Because most ultrasonic endoscopes use oblique-viewing or side-viewing optics, they are harder to operate and their images are harder to interpret than those of a conventional endoscope. For lesions outside the digestive tract lumen, the operator must master the anatomy of the extraluminal organs and their adjacency relationships, and must trace the course of blood vessels and organ structures, before auxiliary diagnosis and treatment of the lesion can be achieved. This places extremely high demands on the operator's endoscopic skill and clinical experience.
In addition, in the spatial dimension, a two-dimensional ultrasonic image can only display the structures in a single plane. During operation of the ultrasonic endoscope, slightly rotating the scope changes the direction of the probe, so the landmark vessels or anatomical structures being traced may shift, which makes it harder to locate a lesion and to judge its relationship to the surrounding anatomy accurately.
During dynamic endoscopic ultrasound examination, the images vary with the patient's respiration, the operator's skill and experience, and differences in image echogenicity. At present, three-dimensional reconstruction of ultrasonic endoscope images has not been achieved clinically.
At present, the clinical diagnosis of mediastinal lesions depends on chest CT and endoscopic ultrasonography. After three-dimensional reconstruction of chest CT, the adjacency between a lesion and the surrounding organs can be observed clearly, and CT displays anatomical position with a decisive advantage; however, it cannot provide real-time images and, crucially, cannot obtain biopsy tissue for pathological diagnosis. The ultrasonic endoscope can obtain a lesion image intuitively and in real time, but an ultrasonic image can only display the structures in a single plane and varies during operation, so it is currently difficult to acquire an accurate, clear three-dimensional picture of a lesion's position and its surrounding anatomy.
Disclosure of Invention
In view of the deficiencies of the prior art, the object of the invention is to provide a method and system for real-time conversion between ultrasonic endoscope images and CT three-dimensional images, which use deep-learning-based image segmentation, multi-modal fusion matching and related techniques to map the two-dimensional ultrasonic endoscope image onto the CT three-dimensional image, convert the real-time two-dimensional ultrasonic endoscope image into a CT three-dimensional reconstructed image, and thereby provide efficient navigation for the diagnosis of mediastinal disease.
To this end, the invention provides a method for real-time conversion between an ultrasonic endoscope image and a CT three-dimensional image, comprising the following steps: S1, acquiring a two-dimensional ultrasonic endoscope image and marking the zero scale in the two-dimensional image; S2, acquiring a CT three-dimensional reconstructed image and marking the zero scale in the CT three-dimensional reconstructed image; S3, matching the zero scale of the two-dimensional ultrasonic endoscope image with the zero scale of the CT three-dimensional reconstructed image, and establishing a conversion model between the two zero scales; and S4, converting the zero scale of the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the conversion model, and converting the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the mapping relationship between the two-dimensional and three-dimensional images.
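For concreteness, steps S1 to S4 can be organized as the following minimal Python skeleton. The function names and signatures are illustrative assumptions by the editor, not the patent's reference implementation; each body is a stub to be filled by the techniques described below.

```python
import numpy as np

def mark_zero_scale_2d(eus_image: np.ndarray) -> np.ndarray:
    """S1: locate the zero-scale landmark in a 2D EUS frame (stub)."""
    raise NotImplementedError

def mark_zero_scale_3d(ct_volume: np.ndarray) -> np.ndarray:
    """S2: locate the zero-scale landmark in the CT reconstruction (stub)."""
    raise NotImplementedError

def fit_conversion_model(zero_scale_2d: np.ndarray, zero_scale_3d: np.ndarray):
    """S3: match the two zero scales and fit the 2D-to-3D conversion model (stub)."""
    raise NotImplementedError

def convert_frame(eus_image: np.ndarray, model) -> np.ndarray:
    """S4: map the EUS frame into the CT volume via the fitted model (stub)."""
    raise NotImplementedError
```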
Further, after the two-dimensional ultrasonic endoscope image is obtained in step S1, the zero-scale structure is marked manually, and deep-learning training is carried out with a deep residual neural network (ResNet) model.
Further, the transfer-learning training comprises the following steps: S1.1, acquiring two-dimensional ultrasonic endoscope images; S1.2, dividing the images into a training set, a validation set and a test set; S1.3, training the convolutional neural network to be trained on the training set to obtain the trained network; S1.4, testing the trained network on the test set to obtain its accuracy, stopping training when the accuracy reaches a preset requirement and otherwise returning to step S1.3 to continue training; S1.5, verifying the network separately on the validation set to ensure its generalization.
Further, before the transfer-learning training the images must be preprocessed as follows: applying image normalization and/or data augmentation to the two-dimensional ultrasonic endoscope images to be processed.
Further, the matching of the zero scales in step S3 comprises the following steps: S3.1, feeding the zero-scale-marked two-dimensional ultrasonic endoscope image and the CT three-dimensional reconstructed image into a Mask R-CNN model, and extracting the zero-scale image corresponding to each; S3.2, feeding the zero-scale image of the two-dimensional ultrasonic endoscope image and the zero-scale image of the CT three-dimensional reconstructed image into a joint-feature probability model or a conditional-random-field-based multi-scale hierarchical matching optimization model for zero-scale matching; and S3.3, establishing the conversion model between the two zero scales.
Further, after step S3 the conversion model must be verified as follows: acquire a new two-dimensional ultrasonic endoscope image, mark its zero scale, feed the zero scale into the conversion model to obtain the zero scale of the CT three-dimensional reconstructed image, and compare the obtained zero scale with the zero scale of the actual CT three-dimensional reconstructed image to verify the conversion model.
The invention also discloses a system for real-time conversion between ultrasonic endoscope images and CT three-dimensional images, comprising: an ultrasonic endoscope marking module for acquiring a two-dimensional ultrasonic endoscope image and marking the zero scale in it; a CT marking module for acquiring a CT three-dimensional reconstructed image and marking the zero scale in it; a matching module for matching the zero scale of the two-dimensional ultrasonic endoscope image with the zero scale of the CT three-dimensional reconstructed image and establishing a conversion model between the two zero scales; and a conversion module for converting the zero scale of the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the conversion model and converting the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the mapping relationship between the two-dimensional and three-dimensional images.
Further, the ultrasonic endoscope marking module comprises: an image acquisition submodule for acquiring a two-dimensional ultrasonic endoscope image; an image classification submodule for dividing the two-dimensional images into a training set, a validation set and a test set; a training submodule for training the convolutional neural network to be trained on the training set to obtain the trained network; a judging module for testing the trained network on the test set to obtain its accuracy, stopping training when the accuracy reaches a preset requirement and otherwise returning to the training submodule to continue training; and an independent verification module for verifying the network separately on the validation set to ensure its generalization.
Further, the matching module comprises: a zero-scale image generation submodule for feeding the zero-scale-marked two-dimensional ultrasonic endoscope image and the CT three-dimensional reconstructed image into a Mask R-CNN model and extracting the zero-scale image corresponding to each; a zero-scale matching submodule for feeding the two zero-scale images into a joint-feature probability model or a conditional-random-field-based multi-scale hierarchical matching optimization model for zero-scale matching; and a conversion model generation module for establishing the conversion model between the two zero scales.
Further, the matching module also comprises a verification module for verifying the conversion model.
Owing to the above technical scheme, the invention has the following advantages:
1. Through deep learning, two-dimensional ultrasonic endoscope images are combined with CT three-dimensional reconstruction, so that the CT three-dimensional structure provides real-time imaging and navigation during mediastinal endoscopic ultrasonography;
2. Clearer and more accurate imaging of the lesion and its surrounding anatomical structures can be obtained, so that the operator can make better judgments;
3. The convolutional neural network obtained by transfer-learning training detects abnormal lesion regions in the ultrasonic endoscope images to be processed, which effectively improves detection accuracy, reduces the physician's workload, raises the abnormality detection rate and lowers the miss rate.
Drawings
FIG. 1 is a flow chart of a method for real-time conversion between an ultrasonic endoscope image and a CT three-dimensional image according to an embodiment of the invention;
FIG. 2 is a structural diagram of a system for real-time conversion between an ultrasonic endoscope image and a CT three-dimensional image according to an embodiment of the invention.
Detailed Description
The present invention is described in detail below by way of specific embodiments so that those skilled in the art can better understand its technical approach. It should be understood, however, that the detailed description is provided only for better understanding of the invention and should not be taken as limiting it. In describing the invention, the terminology used is for description only and is not to be interpreted as indicating or implying relative importance.
Example one
The invention provides a method for real-time conversion between an ultrasonic endoscope image and a CT three-dimensional image, comprising the following steps:
S1, acquire a two-dimensional ultrasonic endoscope image and mark the zero scale in the two-dimensional image.
Two-dimensional endoscope images to be processed are acquired and jointly reviewed by senior physicians, who screen out the ultrasonic endoscope images in which the aortic arch continuing from the ascending aorta is clearly visible. After screening, the same two physicians mark the aortic arch and the descending portion of the aortic arch as 0 and 1. The zero scale of the two-dimensional image is marked so that, when the two-dimensional image is converted into the three-dimensional image, its zero scale can be placed in correspondence with the zero scale of the three-dimensional image, yielding the mapping relationship between the two. In this embodiment, tissue images of sites such as the digestive tract are preferably acquired with an ultrasonic endoscope as the ultrasonic endoscope images to be processed.
The specific acquisition procedure is as follows. The zero-scale-marked endoscope images to be processed are resized to a uniform size; in this embodiment the images are preferably resized to 64 × 64. Image normalization and/or data augmentation is then applied. Image normalization unifies the data format and image format of the ultrasonic endoscope images to be processed. Data augmentation (data amplification) processes the images with operations such as flipping, cropping, rotation and zooming to generate additional ultrasonic endoscope images; the purpose of providing several augmented versions of each image is to improve the detection accuracy of the convolutional neural network in the subsequent detection step. In this embodiment, to improve the detection accuracy of the network, both normalization and augmentation are applied to the endoscope images.
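As an illustration of this preprocessing step, the following sketch resizes a frame to 64 × 64, rescales intensities, and applies a simple flip/rotation augmentation. It assumes 8-bit grayscale frames stored as NumPy arrays; the specific augmentation choices are editor assumptions standing in for the flip, crop, rotate and zoom operations named above.

```python
import numpy as np
from PIL import Image

def preprocess(frame: np.ndarray, size: int = 64) -> np.ndarray:
    # Resize an 8-bit grayscale frame to size x size, then rescale to [0, 1].
    img = Image.fromarray(frame).resize((size, size), Image.BILINEAR)
    arr = np.asarray(img, dtype=np.float32)
    return (arr - arr.min()) / (arr.max() - arr.min() + 1e-8)

def augment(frame: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    # Random horizontal flip and random 90-degree rotation; a minimal stand-in
    # for the flip / crop / rotate / zoom augmentation of the embodiment.
    if rng.random() < 0.5:
        frame = np.fliplr(frame)
    return np.rot90(frame, k=int(rng.integers(0, 4))).copy()
```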
The normalized and augmented two-dimensional ultrasonic images are then input into the convolutional neural network obtained by transfer-learning training.
The two-dimensional ultrasonic endoscope images are divided into a training set, a validation set and a test set according to the aorta and aortic arch visible in each image. 70% of the images are used as the training set and 15% as the validation set. The convolutional neural network to be trained is optimized on the training set while performance indicators of the algorithm such as convergence rate, recognition accuracy, recall and precision are tracked, and hyperparameters of the deep residual network such as the number of epochs, the learning rate and the batch size are continuously adjusted until every performance indicator of the trained network meets expectations. The remaining 15% of the images are used as the test set, on which the trained network is tested to obtain its accuracy; training stops when the accuracy reaches the preset requirement, otherwise it returns to the previous step and continues. The validation set is verified separately: these image samples take no part in the training and optimization of the algorithm and are tested independently by a medical team to ensure good generalization. To eliminate performance differences that random sample selection might introduce, the random selection is repeated ten times and the mean and standard deviation are computed, ensuring the accuracy of the experiment to the greatest extent. A generative adversarial network (GAN) is used to expand the sample size and to establish a learning library of data samples with complete clinical and pathological data, yielding the final convolutional neural network. Candidate networks to be trained include U-Net, ResNet, GAN and the like.
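A hedged sketch of the 70/15/15 split and residual-network training loop described above, using PyTorch. The choice of ResNet-18, the Adam optimizer, the batch size and the stopping threshold are illustrative assumptions; the embodiment only specifies that epochs, learning rate and batch size are tuned until the test accuracy meets the preset requirement.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
import torchvision

def build_model(num_classes: int = 2) -> nn.Module:
    # ResNet-18 backbone pretrained on ImageNet; the final layer is replaced
    # for the two landmark labels (0: aortic arch, 1: descending portion).
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def train(dataset, target_acc: float = 0.95, max_epochs: int = 50):
    # 70% training / 15% validation / 15% test, as in the embodiment.
    n = len(dataset)
    n_train, n_val = int(0.7 * n), int(0.15 * n)
    train_set, val_set, test_set = random_split(
        dataset, [n_train, n_val, n - n_train - n_val])
    model = build_model()
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=32)
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:      # x: (B, 3, 64, 64) tensors, y: 0/1 labels
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        # Test-set accuracy decides whether training stops, per step S1.4.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for x, y in test_loader:
                correct += (model(x).argmax(1) == y).sum().item()
                total += y.numel()
        if correct / total >= target_acc:
            break
    return model, val_set  # val_set is held out for the independent verification
```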
S2, acquire a CT three-dimensional reconstructed image and mark the zero scale in it. Since CT three-dimensional reconstruction is a mature technique in the art, and marking the zero scale in the reconstructed image is a routine operation, the procedure is not described in detail here.
S3, match the zero scale of the two-dimensional ultrasonic endoscope image with the zero scale of the CT three-dimensional reconstructed image, and establish a conversion model between the two zero scales.
Step S3 proceeds as follows:
S3.1, feed the zero-scale-marked two-dimensional ultrasonic endoscope image and the CT three-dimensional reconstructed image into a Mask R-CNN model, and extract the zero-scale image corresponding to each;
S3.2, feed the zero-scale image of the two-dimensional ultrasonic endoscope image and the zero-scale image of the CT three-dimensional reconstructed image into a joint-feature probability model or a conditional-random-field-based multi-scale hierarchical matching optimization model for zero-scale matching;
S3.3, establish the conversion model between the zero scale of the two-dimensional ultrasonic endoscope image and the zero scale of the CT three-dimensional reconstructed image.
After step S3, the conversion model must be verified. A new two-dimensional ultrasonic endoscope image is acquired; it likewise undergoes data normalization, data augmentation and convolutional-network processing, which are described in detail in step S1 and not repeated here. The zero scale is marked in the new image and fed into the conversion model of step S3.3 to obtain the zero scale of the CT three-dimensional reconstructed image, which is then compared with the zero scale of the actual CT three-dimensional reconstructed image to verify the conversion model.
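A minimal sketch of this verification step, assuming matched landmark arrays and an affine conversion model T as in the previous sketch; the acceptance threshold in voxel units is an assumption, since the embodiment does not state a tolerance.

```python
import numpy as np

def verify_model(T: np.ndarray, new_pts_2d: np.ndarray,
                 actual_pts_3d: np.ndarray, tol_voxels: float = 3.0) -> bool:
    # Map the new image's zero-scale landmarks into CT space...
    A = np.hstack([new_pts_2d, np.ones((len(new_pts_2d), 1))])
    predicted = (T @ A.T).T
    # ...and compare against the actual CT zero-scale positions.
    err = np.linalg.norm(predicted - actual_pts_3d, axis=1)
    print(f"mean landmark error: {err.mean():.2f} voxels")
    return float(err.mean()) <= tol_voxels
```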
S4, convert the zero scale of the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the conversion model, and convert the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the mapping relationship between the two-dimensional and three-dimensional images, generating a real-time CT three-dimensional image for the ultrasonic endoscope.
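Step S4 can then be illustrated as follows: with the zero scales registered, every pixel of the EUS frame is carried into CT space through the same map. Resampling the CT volume at the mapped coordinates (for example with trilinear interpolation) is omitted for brevity, and the function name is an assumption.

```python
import numpy as np

def eus_frame_to_ct_coords(T: np.ndarray, h: int, w: int) -> np.ndarray:
    """Map every (u, v) pixel of an h x w EUS frame to CT (x, y, z) coords."""
    v, u = np.mgrid[0:h, 0:w]
    pts = np.stack([u.ravel(), v.ravel(), np.ones(h * w)], axis=1)
    return (T @ pts.T).T.reshape(h, w, 3)   # per-pixel CT coordinates
```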
In this embodiment, two-dimensional ultrasonic endoscope images are combined with CT three-dimensional reconstruction through deep learning, so that the CT three-dimensional structure provides real-time imaging and navigation during mediastinal endoscopic ultrasonography, and clearer, more accurate imaging of the lesion and its surrounding anatomical structures is obtained, allowing the operator to make better judgments.
Example two
Based on the same inventive concept, this embodiment discloses a system for real-time conversion between ultrasonic endoscope images and CT three-dimensional images, comprising:
an ultrasonic endoscope marking module for acquiring a two-dimensional ultrasonic endoscope image and marking the zero scale in the two-dimensional image;
a CT marking module for acquiring a CT three-dimensional reconstructed image and marking the zero scale in the CT three-dimensional reconstructed image;
a matching module for matching the zero scale of the two-dimensional ultrasonic endoscope image with the zero scale of the CT three-dimensional reconstructed image and establishing a conversion model between the two zero scales;
and a conversion module for converting the zero scale of the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the conversion model and converting the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the mapping relationship between the two-dimensional and three-dimensional images.
The ultrasonic endoscope marking module comprises: an image acquisition submodule for acquiring a two-dimensional ultrasonic endoscope image; an image classification submodule for dividing the two-dimensional images into a training set, a validation set and a test set; a training submodule for training the convolutional neural network to be trained on the training set to obtain the trained network; a judging module for testing the trained network on the test set to obtain its accuracy, stopping training when the accuracy reaches a preset requirement and otherwise returning to the training submodule to continue training; and an independent verification module for verifying the network separately on the validation set to ensure its generalization.
The matching module comprises: a zero-scale image generation submodule for feeding the zero-scale-marked two-dimensional ultrasonic endoscope image and the CT three-dimensional reconstructed image into a Mask R-CNN model and extracting the zero-scale image corresponding to each; a zero-scale matching submodule for feeding the two zero-scale images into a joint-feature probability model or a conditional-random-field-based multi-scale hierarchical matching optimization model for zero-scale matching; and a conversion model generation module for establishing the conversion model between the two zero scales.
The matching module also comprises a verification module for verifying the conversion model.
The above description covers only specific embodiments of the present application, but the scope of the application is not limited to them; any change or substitution that a person skilled in the art could readily conceive within the technical scope disclosed herein shall fall within the scope of the application. The protection scope of the application is therefore defined by the claims.

Claims (8)

1. A method for real-time conversion between an ultrasonic endoscope image and a CT three-dimensional image, characterized by comprising the following steps:
S1, acquiring a two-dimensional ultrasonic endoscope image and marking the zero scale in the two-dimensional image;
S2, acquiring a CT three-dimensional reconstructed image and marking the zero scale in the CT three-dimensional reconstructed image;
S3, matching the zero scale of the two-dimensional ultrasonic endoscope image with the zero scale of the CT three-dimensional reconstructed image, and establishing a conversion model between the two zero scales;
S4, converting the zero scale of the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the conversion model, and converting the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the mapping relationship between the two-dimensional image and the three-dimensional image;
wherein matching the zero scales in step S3 comprises:
S3.1, feeding the zero-scale-marked two-dimensional ultrasonic endoscope image and the CT three-dimensional reconstructed image into a Mask R-CNN model, and extracting the zero-scale image corresponding to each;
S3.2, feeding the zero-scale image of the two-dimensional ultrasonic endoscope image and the zero-scale image of the CT three-dimensional reconstructed image into a joint-feature probability model or a conditional-random-field-based multi-scale hierarchical matching optimization model for zero-scale matching;
S3.3, establishing the conversion model between the zero scale of the two-dimensional ultrasonic endoscope image and the zero scale of the CT three-dimensional reconstructed image.
2. The method for real-time conversion between an ultrasonic endoscope image and a CT three-dimensional image according to claim 1, characterized in that after the two-dimensional ultrasonic endoscope image is obtained in step S1, the zero-scale position is marked on the two-dimensional ultrasonic endoscope image, and the zero-scale-marked two-dimensional image is input into a convolutional neural network obtained by transfer-learning training for training.
3. The method for real-time conversion between an ultrasonic endoscope image and a CT three-dimensional image according to claim 2, characterized in that the transfer-learning training comprises the following steps:
S1.1, acquiring two-dimensional ultrasonic endoscope images;
S1.2, dividing the two-dimensional ultrasonic endoscope images into a training set, a validation set and a test set;
S1.3, training the convolutional neural network to be trained on the training set to obtain the trained convolutional neural network;
S1.4, testing the trained convolutional neural network on the test set to obtain the network accuracy, stopping training when the accuracy reaches a preset requirement, and otherwise returning to step S1.3 to continue training;
S1.5, verifying the network separately on the validation set to ensure its generalization.
4. The method for real-time conversion between an ultrasonic endoscope image and a CT three-dimensional image according to claim 2, characterized in that before the transfer-learning training the images are preprocessed as follows: applying image normalization and/or data augmentation to the two-dimensional ultrasonic endoscope images to be processed.
5. The method for real-time conversion between an ultrasonic endoscope image and a CT three-dimensional image according to claim 1, characterized in that after step S3 the conversion model is verified as follows: a new two-dimensional ultrasonic endoscope image is acquired, its zero scale is marked and fed into the conversion model to obtain the zero scale of the CT three-dimensional reconstructed image, and the obtained zero scale is compared with the zero scale of the actual CT three-dimensional reconstructed image to verify the conversion model.
6. A system for real-time conversion between ultrasonic endoscope images and CT three-dimensional images, characterized by comprising:
an ultrasonic endoscope marking module for acquiring a two-dimensional ultrasonic endoscope image and marking the zero scale in the two-dimensional image;
a CT marking module for acquiring a CT three-dimensional reconstructed image and marking the zero scale in the CT three-dimensional reconstructed image;
a matching module for matching the zero scale of the two-dimensional ultrasonic endoscope image with the zero scale of the CT three-dimensional reconstructed image and establishing a conversion model between the two zero scales;
a conversion module for converting the zero scale of the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the conversion model and converting the two-dimensional ultrasonic endoscope image into the CT three-dimensional reconstructed image through the mapping relationship between the two-dimensional image and the three-dimensional image;
wherein the matching module comprises:
a zero-scale image generation submodule for feeding the zero-scale-marked two-dimensional ultrasonic endoscope image and the CT three-dimensional reconstructed image into a Mask R-CNN model and extracting the zero-scale image corresponding to each;
a zero-scale matching submodule for feeding the zero-scale image of the two-dimensional ultrasonic endoscope image and the zero-scale image of the CT three-dimensional reconstructed image into a joint-feature probability model or a conditional-random-field-based multi-scale hierarchical matching optimization model for zero-scale matching;
and a conversion model generation module for establishing the conversion model between the zero scale of the two-dimensional ultrasonic endoscope image and the zero scale of the CT three-dimensional reconstructed image.
7. The system for real-time conversion between ultrasonic endoscope images and CT three-dimensional images according to claim 6, characterized in that the ultrasonic endoscope marking module comprises:
an image acquisition submodule for acquiring a two-dimensional ultrasonic endoscope image;
an image classification submodule for dividing the two-dimensional ultrasonic endoscope images into a training set, a validation set and a test set;
a training submodule for training the convolutional neural network to be trained on the training set to obtain the trained convolutional neural network;
a judging module for testing the trained convolutional neural network on the test set to obtain the network accuracy, stopping training when the accuracy reaches a preset requirement and otherwise returning to the training submodule to continue training;
and an independent verification module for verifying the network separately on the validation set to ensure its generalization.
8. The system for real-time conversion between ultrasonic endoscope images and CT three-dimensional images according to claim 6, characterized in that the matching module further comprises: a verification module for verifying the conversion model.
CN202010338096.8A, filed 2020-04-26 (priority 2020-04-26): Real-time conversion method and system for ultrasonic endoscope and CT three-dimensional image. Status: Active. Granted as CN111466952B (en).

Priority Applications (1)

Application Number: CN202010338096.8A
Priority Date: 2020-04-26; Filing Date: 2020-04-26
Title: Real-time conversion method and system for ultrasonic endoscope and CT three-dimensional image

Publications (2)

CN111466952A (en), published 2020-07-31
CN111466952B (en), published 2023-03-31

Family

ID=71756133

Family Applications (1)

Application Number: CN202010338096.8A (Active, granted as CN111466952B)
Priority Date: 2020-04-26; Filing Date: 2020-04-26
Title: Real-time conversion method and system for ultrasonic endoscope and CT three-dimensional image

Country Status (1)

CN: CN111466952B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4179954B1 (en) * 2020-10-27 2024-08-14 RIVERFIELD Inc. Surgery assisting device
CN113648060B (en) * 2021-05-14 2024-02-27 上海交通大学 Ultrasonic guided soft tissue deformation tracking method, device, storage medium and system


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120000729A (en) * 2010-06-28 2012-01-04 주식회사 사이버메드 Position tracking method for vascular treatment micro robot through registration between x ray image and ct 3d model
CN108492340A (en) * 2018-01-31 2018-09-04 倪昕晔 Method based on ultrasound image acquisition puppet CT

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party

Title
Jacob J. Peoples et al., "Deformable multimodal registration for navigation in beating-heart cardiac surgery", International Journal of Computer Assisted Radiology and Surgery, vol. 14, 2019-03-19, pp. 955-966 *
Jui-Ying Chiao et al., "Detection and classification the breast tumors using mask R-CNN on sonograms", Medicine, vol. 98, no. 19, 2019-05-31, pp. 1-5 *
Fang Chen et al., "Registration of CT and Ultrasound Images of the Spine with Neural Network and Orientation Code Mutual Information", International Conference on Medical Imaging and Augmented Reality, 2016-08-14, pp. 292-301 *
Joao Gomes-Fonseca et al., "Surface-based registration between CT and US for image-guided percutaneous renal access - A feasibility study", Medical Physics, 2018-12-28, pp. 1115-1126 *

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant