CN111507886A - Trachea model reconstruction method and system by utilizing ultrasonic wave and deep learning technology - Google Patents

Trachea model reconstruction method and system by utilizing ultrasonic wave and deep learning technology

Info

Publication number
CN111507886A
Authority
CN
China
Prior art keywords
image
ultrasonic
module
space
trachea
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910332801.0A
Other languages
Chinese (zh)
Inventor
卢昭全
王友光
陈威廷
许斐凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of CN111507886A publication Critical patent/CN111507886A/en
Legal status: Pending (current)

Links

Images

Classifications

    • G06T3/08
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06T5/70
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders

Abstract

The invention relates to a trachea model reconstruction method using ultrasound and deep learning technology, comprising the steps of acquiring images and position information of the tracheal wall, image space positioning, image processing, image feature acquisition and deep learning image identification, 6DoF space positioning, image space correction, image space conversion, and forming a three-dimensional trachea model; a trachea model reconstruction method capable of correctly and quickly reconstructing and recording a three-dimensional trachea model is thereby provided.

Description

Trachea model reconstruction method and system by utilizing ultrasonic wave and deep learning technology
Technical Field
The invention relates to a trachea model reconstruction method and system using ultrasound and deep learning technology, and in particular to a trachea model reconstruction method and system capable of accurately and quickly reconstructing and recording a three-dimensional trachea model.
Background
When a patient is under general anesthesia, undergoing cardiopulmonary resuscitation, or unable to breathe spontaneously during surgery, intubation is required to insert an artificial airway into the trachea and ensure that medical gas is delivered smoothly into the patient's trachea. During intubation, medical staff cannot directly see and adjust the artificial airway and must rely on touch and past experience; this risks puncturing the patient's trachea, may require repeated attempts, and delays establishment of a patent airway. Rapidly and accurately building a three-dimensional tracheal model to assist medical staff with intubation is therefore a problem that needs to be solved.
Disclosure of Invention
The present invention is directed to overcoming the above drawbacks of the prior art by providing a method and a system for reconstructing a trachea model using ultrasound and deep learning that can accurately and rapidly reconstruct and record a three-dimensional trachea model.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a trachea model reconstruction method using ultrasonic and deep learning techniques includes the following steps:
acquiring images and position information of the tracheal tube wall: scanning the oral cavity to the trachea by using a positionable ultrasonic scanner to obtain an ultrasonic image, and synchronously obtaining position information of the ultrasonic image according to the scanning position;
image space positioning: performing spatial positioning processing on the ultrasonic image and obtaining spatial positioning information of the ultrasonic image;
image processing: denoising and cropping the ultrasonic image, and performing image enhancement to bring out details of the ultrasonic image, so as to obtain a clear ultrasonic image;
image feature acquisition and deep learning image identification: capturing and extracting features from the clear ultrasonic image, storing the various image features and the continuous tracheal wall images, and then using a trained deep learning model to assist in identifying the image features and tracheal wall images and to locate the shape, curvature and position of the tracheal wall;
6DoF space positioning: positioning the ultrasonic image and the spatial position information in the image space positioning step to obtain the spatial positioning data of the ultrasonic image;
image space correction: correcting the ultrasonic image spatial positioning data processed in the 6DoF space positioning step to obtain the actual size and actual projection position of the ultrasonic image in three-dimensional space, converting them into the actual three-dimensional spatial position, and correcting the output size of the ultrasonic image;
image space conversion: projecting the ultrasonic image in the image space correction step to a three-dimensional space to obtain three-dimensional space data and image information of the trachea model;
forming a three-dimensional trachea model: connecting the ultrasonic image obtained in the image feature acquisition and deep learning image identification step with the three-dimensional space data and image information of the trachea model obtained in the image space conversion step, and splicing, reconstructing and recording them into an actual three-dimensional trachea model;
therefore, the trachea model reconstruction method can be used for accurately and quickly reconstructing and recording the three-dimensional trachea model for subsequent medical research or use.
The invention has the beneficial effect that the three-dimensional trachea model can be correctly and quickly reconstructed and recorded.
Drawings
The invention is further illustrated with reference to the following figures and examples.
FIG. 1 is a flow chart of the steps of the present invention.
Fig. 2 is a system block diagram of the present invention.
FIG. 3 is a block diagram of a system incorporating a positionable ultrasound scanner in accordance with the present invention.
The reference numbers in the figures illustrate:
10 image data loading module
20 image processing module
30 image feature capturing module
40 deep learning image identification module
50 six-degree-of-freedom space positioning module
60 image space correction algorithm module
70 image space conversion algorithm module
80 three-dimensional model reconstruction module
90 positionable ultrasonic scanner
Detailed Description
The technical means employed to achieve the object of the present invention, and their structure, are described in detail below with reference to the embodiments shown in figs. 1 to 3:
referring to the flow chart of FIG. 1, the detailed steps are as follows:
acquiring images and position information of the tracheal tube wall: the ultrasonic image is obtained by scanning the oral cavity to the trachea by using the positionable ultrasonic scanner, and the position information of the ultrasonic image is synchronously obtained according to the scanning position.
Image space positioning: spatial positioning processing is performed on the ultrasonic image, and spatial positioning information of the ultrasonic image is obtained.
Image processing: the ultrasonic image is processed by denoising, denoising and cropping, and image enhancement is performed to enhance the details of the ultrasonic image, so as to obtain a clear ultrasonic image.
Image feature acquisition and deep learning image identification: features are captured and extracted from the clear ultrasonic image, the various image features and the continuous tracheal wall images are stored, and a trained deep learning model is then used to assist in identifying the image features and tracheal wall images and to locate the shape, curvature and position of the tracheal wall.
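As one possible, non-limiting realization of the deep learning identification described above, a small convolutional network could output a per-pixel probability that a pixel belongs to the tracheal wall; the architecture below is an assumed example and is not the model disclosed in this application.

```python
# Sketch of a small segmentation network for locating the tracheal wall.
# Layer sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class TrachealWallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 1), nn.Sigmoid(),   # per-pixel wall probability
        )

    def forward(self, x):                  # x: (N, 1, H, W) preprocessed frames
        return self.head(self.encoder(x))  # (N, 1, H, W) probability map
```

The resulting probability map can then be thresholded to extract wall contours whose shape, curvature and position are measured.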
6DoF space positioning: and positioning the ultrasonic image and the spatial position information in the image space positioning step to obtain the spatial positioning data of the ultrasonic image.
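For illustration, each ultrasound frame can be paired with the 6DoF pose reading captured closest in time by the positionable scanner; the data layout below (timestamps plus position/quaternion pairs) is an assumption, not a format defined by the disclosure.

```python
# Sketch of pairing a frame with the nearest-in-time 6DoF pose reading.
import numpy as np

def match_frame_to_pose(frame_time, pose_times, poses):
    """pose_times: (K,) array of timestamps; poses: list of (position, quaternion)."""
    idx = int(np.argmin(np.abs(np.asarray(pose_times) - frame_time)))
    return poses[idx]
```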
Image space correction: for warp 6DOAnd F, correcting the positioning data of the ultrasonic image space after the positioning processing in the space positioning step to obtain the actual size and the actual projection position of the ultrasonic image in the three-dimensional space so as to convert the actual size and the actual projection position into the actual three-dimensional space position, and correcting the correct size of the ultrasonic image output.
Image space conversion: and projecting the ultrasonic image in the image space correction step to a three-dimensional space to obtain three-dimensional space data and image information of the trachea model.
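The projection into three-dimensional space can be expressed, under the assumption that the 6DoF pose is given as a position vector and an orientation quaternion, as a rigid transform applied to each corrected in-plane point; the sketch below uses SciPy's rotation utilities.

```python
# Sketch of projecting a corrected probe-plane point into the fixed 3D space.
import numpy as np
from scipy.spatial.transform import Rotation

def project_to_world(point_mm, position, quat_xyzw):
    """Apply the scanner pose (rotation then translation) to a probe-plane point."""
    R = Rotation.from_quat(quat_xyzw).as_matrix()
    return R @ np.asarray(point_mm) + np.asarray(position)
```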
Forming a three-dimensional trachea model: the ultrasonic image obtained in the image feature acquisition and deep learning image identification step is connected with the three-dimensional space data and image information of the trachea model obtained in the image space conversion step, and they are spliced, reconstructed and recorded into an actual three-dimensional trachea model.
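Finally, a simplified sketch of the reconstruction step, assuming each processed frame yields a set of projected tracheal wall points: the per-frame points are accumulated into one point set representing the model (surface meshing and smoothing, which a practical system would add, are omitted).

```python
# Sketch of stitching per-frame 3D wall points into one tracheal point cloud.
import numpy as np

def stitch_frames(frame_points):
    """frame_points: list of (Ni, 3) arrays of projected wall points per frame."""
    return np.vstack(frame_points)
```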
For further explanation of the present invention, it is noted that the following detailed description is made with reference to the system configuration diagrams shown in fig. 2 and fig. 3: as shown in fig. 2, the trachea model reconstruction system using ultrasound and deep learning of the present invention includes an image data loading module 10, an image processing module 20, an image feature capturing module 30, a deep learning image recognition module 40, a six-degree-of-freedom spatial orientation module 50, an image spatial correction algorithm module 60, an image spatial transformation algorithm module 70, and a three-dimensional model reconstruction module 80; wherein:
the image loading module 10 (please refer to fig. 2) is connected to the positionable ultrasound scanner 90, and is used for loading the ultrasound image and the position information obtained by the ultrasound scanner 90 and performing image processing in cooperation with spatial positioning.
The image processing module 20 (please refer to fig. 2) is connected to the image data loading module 10, and is configured to receive the ultrasonic image loaded by the image data loading module 10, to denoise and crop the ultrasonic image, and to perform image enhancement to bring out details of the ultrasonic image, so as to obtain a clear ultrasonic image.
The image feature capturing module 30 (also shown in fig. 2) is connected to the image processing module 20 for capturing, extracting and storing various image features of the clear ultrasound image and the continuous tracheal wall image.
The deep learning image recognition module 40 (please refer to fig. 2) is connected to the image feature capturing module 30, and trains the deep learning model, according to the various image features and the continuous tracheal wall images stored by the image feature capturing module 30, to assist in recognizing the tracheal wall in the ultrasound image and to locate the shape, curvature and position information of the part of the tracheal wall visible in the planar clear ultrasound image.
In view of the above, in the preferred embodiment, the deep learning model of the deep learning image recognition module 40 can be designed to be controlled manually, automatically or semi-automatically to locate the shape, curvature and position information of the tracheal wall.
The six-degree-of-freedom spatial positioning module 50 (please refer to fig. 2) is connected to the image data loading module 10 and the positionable ultrasonic scanner 90, and is configured to receive the spatial position information obtained from the positionable ultrasonic scanner 90 and to perform spatial positioning processing on the ultrasonic image loaded by the image data loading module 10, so as to obtain the ultrasonic image together with its spatial positioning data.
The image space calibration algorithm module 60 (please refer to fig. 2) is connected to the six-degree-of-freedom spatial positioning module 50, and is configured to receive and calibrate the actual size and the actual projection position of the spatial positioning data processed by the six-degree-of-freedom spatial positioning module 50 in the three-dimensional space, so as to convert the actual size and the actual projection position into the actual three-dimensional space position, and calibrate the correct size of the ultrasonic image output.
The image space conversion algorithm module 70 (please refer to fig. 2) is connected to the image space correction algorithm module 60, and is configured to receive the ultrasonic image processed by the image space correction algorithm module 60 and project the ultrasonic image to a three-dimensional space, so as to obtain three-dimensional space data and image information of the trachea model.
The three-dimensional model reconstruction module 80 (please refer to fig. 2) is connected to the deep learning image recognition module 40 and the image space conversion algorithm module 70, and is configured to receive the clear ultrasound image information of the deep learning image recognition module 40, and continuously splice the clear ultrasound images of the continuous tracheal wall according to the three-dimensional space data and the image information of the tracheal model obtained by the image space conversion algorithm module 70, so as to reconstruct and record a complete three-dimensional tracheal model.
In addition, in the image feature capturing and deep learning image identification step, the deep learning image identification module 40 captures tracheal image data from a plurality of patients to obtain image features, and inputs the image features and ultrasonic images into a deep learning model. The deep learning model may be of the supervised learning, unsupervised learning, semi-supervised learning or reinforcement learning type (for example a neural network, random forest, support vector machine (SVM), decision tree or clustering method), and the features, shape, curvature and position of the tracheal wall are identified through the deep learning model.
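As a non-limiting illustration of the supervised option listed above, a random forest or SVM could be trained on the extracted image features; the feature matrix and labels below are assumed to come from the feature capturing step and are not data defined by the disclosure.

```python
# Sketch of training a classical supervised model on extracted image features.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def train_wall_classifier(X, y, use_svm=False):
    """X: (n_samples, n_features) image features; y: tracheal-wall labels."""
    model = SVC(kernel="rbf", probability=True) if use_svm else \
            RandomForestClassifier(n_estimators=200)
    model.fit(X, y)
    return model
```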
Therefore, after the continuous ultrasonic images and the corresponding position information are obtained by the positionable ultrasonic scanner, clear ultrasonic images and image features are formed through image processing and feature capture, the shape, curvature and position of the tracheal wall are identified with the assistance of deep learning, and after the corresponding spatial information of the ultrasonic images is obtained by 6DoF space positioning, image space correction and image space conversion, a three-dimensional tracheal model can be correctly and quickly reconstructed for intubation assistance and subsequent medical research or use.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent variations and modifications made to the above embodiment according to the technical spirit of the present invention still fall within the scope of the technical solution of the present invention.

Claims (2)

1. A trachea model reconstruction method using ultrasonic wave and deep learning technology is characterized by comprising the following steps:
acquiring images and position information of the tracheal tube wall: scanning the oral cavity to the trachea by using a positionable ultrasonic scanner to obtain an ultrasonic image, and synchronously obtaining position information of the ultrasonic image according to the scanning position;
image space positioning: performing spatial positioning processing on the ultrasonic image and obtaining spatial positioning information of the ultrasonic image;
image processing: denoising and cropping the ultrasonic image, and performing image enhancement to bring out details of the ultrasonic image, so as to obtain a clear ultrasonic image;
image feature acquisition and deep learning image identification: capturing and extracting features from the clear ultrasonic image, storing the various image features and the continuous tracheal wall images, and then using a trained deep learning model to assist in identifying the image features and tracheal wall images and to locate the shape and position of the tracheal wall;
6DoF space positioning: positioning the ultrasonic image and the spatial position information in the image space positioning step to obtain the spatial positioning data of the ultrasonic image;
image space correction: correcting the ultrasonic image spatial positioning data processed in the 6DoF space positioning step to obtain the actual size and actual projection position of the ultrasonic image in three-dimensional space, converting them into the actual three-dimensional spatial position, and correcting the output size of the ultrasonic image;
image space conversion: projecting the ultrasonic image in the image space correction step to a three-dimensional space to obtain three-dimensional space data and image information of the trachea model;
forming a three-dimensional trachea model: connecting the ultrasonic image obtained in the image feature acquisition and deep learning image identification step with the three-dimensional space data and image information of the trachea model obtained in the image space conversion step, and splicing, reconstructing and recording them into an actual three-dimensional trachea model.
2. A trachea model reconstruction system using ultrasound and deep learning technology, applying the trachea model reconstruction method of claim 1, characterized by comprising an image data loading module, an image processing module, an image feature capturing module, a deep learning image identification module, a six-degree-of-freedom spatial positioning module, an image space correction algorithm module, an image space conversion algorithm module, and a three-dimensional model reconstruction module; wherein:
the image data loading module is connected with the positionable ultrasonic scanner and used for loading the ultrasonic image and the position information obtained by the ultrasonic scanner and carrying out image processing by cooperation with spatial positioning;
the image processing module is connected with the image data loading module and is used for receiving the ultrasonic image loaded by the image data loading module, denoising and cropping the ultrasonic image, and simultaneously performing image enhancement processing to bring out details of the ultrasonic image, so as to obtain a clear ultrasonic image;
the image feature capturing module is connected with the image processing module and is used for capturing, extracting and storing the various image features of the clear ultrasonic image and the continuous tracheal wall images;
the deep learning image identification module is connected with the image feature capturing module and is used for training the deep learning model, according to the image features and the continuous tracheal wall images stored by the image feature capturing module, to assist in identifying the tracheal wall in the ultrasonic image and to locate the shape and position information of the part of the tracheal wall visible in the planar clear ultrasonic image;
the six-degree-of-freedom spatial positioning module is connected with the image data loading module and the positionable ultrasonic scanner, and is used for receiving the spatial position information obtained from the positionable ultrasonic scanner and performing spatial positioning processing on the ultrasonic image loaded by the image data loading module, so as to obtain the ultrasonic image together with its spatial positioning data;
the image space correction algorithm module is connected with the six-degree-of-freedom space positioning module and used for receiving and correcting the actual size and the actual projection position of the space positioning data processed by the six-degree-of-freedom space positioning module in a three-dimensional space so as to convert the actual size and the actual projection position into an actual three-dimensional space position and correct the output correct size of the ultrasonic image;
the image space conversion algorithm module is connected with the image space correction algorithm module and used for receiving the ultrasonic image processed by the image space correction algorithm module and projecting the ultrasonic image to a three-dimensional space so as to obtain three-dimensional space data and image information of the trachea model;
and the three-dimensional model reconstruction module is connected with the deep learning image identification module and the image space conversion algorithm module and used for receiving the clear ultrasonic image information of the deep learning image identification module and continuously splicing the clear ultrasonic images of the continuous tracheal walls according to the three-dimensional space data and the image information of the tracheal model obtained by the image space conversion algorithm module so as to reconstruct and record a complete three-dimensional tracheal model.
CN201910332801.0A 2019-01-31 2019-04-24 Trachea model reconstruction method and system by utilizing ultrasonic wave and deep learning technology Pending CN111507886A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW108201581 2019-01-31
TW108201581U TWM584009U (en) 2019-01-31 2019-01-31 Trachea model reconstruction system utilizing ultrasound and deep learning technology

Publications (1)

Publication Number Publication Date
CN111507886A true CN111507886A (en) 2020-08-07

Family

ID=68620115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910332801.0A Pending CN111507886A (en) 2019-01-31 2019-04-24 Trachea model reconstruction method and system by utilizing ultrasonic wave and deep learning technology

Country Status (2)

Country Link
CN (1) CN111507886A (en)
TW (1) TWM584009U (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100998511A (en) * 2006-01-11 2007-07-18 中国科学院自动化研究所 Real-time, freedom-arm, three-D ultrasonic imaging system and method therewith
CN101283929A (en) * 2008-06-05 2008-10-15 华北电力大学 Rebuilding method of blood vessel three-dimensional model
CN102319117A (en) * 2011-06-16 2012-01-18 上海交通大学医学院附属瑞金医院 Arterial intervention implant implanting system capable of fusing real-time ultrasonic information based on magnetic navigation
CN103654856A (en) * 2013-12-23 2014-03-26 中国科学院苏州生物医学工程技术研究所 Small real-time free-arm three-dimensional ultrasound imaging system
US20180018757A1 (en) * 2016-07-13 2018-01-18 Kenji Suzuki Transforming projection data in tomography by means of machine learning
CN107019513A (en) * 2017-05-18 2017-08-08 山东大学齐鲁医院 Intravascular virtual endoscope imaging system and its method of work based on electromagnetic location composite conduit

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Yao Rao: "Real-time Image Processing and Navigation Positioning Technology for Ultrasound-Guided Robotic Systems", 31 October 2013, National Defense Industry Press *
Li Yibing et al.: ""Three-dimensional Fusion" Composite Three-dimensional Ultrasound Imaging", Information of Medical Equipment *
Tao Pan et al.: "Research on Computer-Aided Detection Methods in Medicine Based on Deep Learning", Journal of Biomedical Engineering *

Also Published As

Publication number Publication date
TWM584009U (en) 2019-09-21

Similar Documents

Publication Publication Date Title
CN110097557B (en) Medical image automatic segmentation method and system based on 3D-UNet
KR20140096919A (en) Method and Apparatus for medical image registration
CN109785311B (en) Disease diagnosis device, electronic equipment and storage medium
CN108229584A (en) A kind of Multimodal medical image recognition methods and device based on deep learning
CN115564712B (en) Capsule endoscope video image redundant frame removing method based on twin network
CN112925235A (en) Sound source localization method, apparatus and computer-readable storage medium at the time of interaction
Raeesy et al. Automatic segmentation of vocal tract MR images
US20200305847A1 (en) Method and system thereof for reconstructing trachea model using computer-vision and deep-learning techniques
CN114452508A (en) Catheter motion control method, interventional operation system, electronic device, and storage medium
CN113920187A (en) Catheter positioning method, interventional operation system, electronic device, and storage medium
CN111507886A (en) Trachea model reconstruction method and system by utilizing ultrasonic wave and deep learning technology
CN111863230B (en) Infant sucking remote assessment and breast feeding guidance method
Douros et al. Towards a method of dynamic vocal tract shapes generation by combining static 3D and dynamic 2D MRI speech data
CN112089438B (en) Four-dimensional reconstruction method and device based on two-dimensional ultrasonic image
CN111599004B (en) 3D blood vessel imaging system, method and device
Zhang et al. Masseter muscle segmentation from cone-beam ct images using generative adversarial network
KR101635188B1 (en) Unborn child sculpture printing service system and a method thereof
CN108109682A (en) A kind of medical image identifying system and its method
CN114202516A (en) Foreign matter detection method and device, electronic equipment and storage medium
CN111508057A (en) Trachea model reconstruction method and system by using computer vision and deep learning technology
CN113298773A (en) Heart view identification and left ventricle detection device and system based on deep learning
Cao et al. Venibot: Towards autonomous venipuncture with automatic puncture area and angle regression from nir images
WO2020114332A1 (en) Segmentation-network-based ct lung tumor segmentation method, apparatus and device, and medium
CN114820730B (en) CT and CBCT registration method based on pseudo CT
KR102639803B1 (en) Method for detecting pleurl effusion and the apparatus for therof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200807)