CN111815764A - Ultrasonic three-dimensional reconstruction method based on self-supervision 3D full convolution neural network - Google Patents
- Publication number: CN111815764A (application CN202010707414.3A)
- Authority: CN (China)
- Prior art keywords: network, dimensional, human tissue, neural network, ultrasonic image
- Legal status: Granted
Classifications
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects (G—PHYSICS; G06—COMPUTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL)
- G06N3/045 — Combinations of networks (G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS; G06N3/02—Neural networks; G06N3/04—Architecture, e.g. interconnection topology)
- G06N3/084 — Backpropagation, e.g. using gradient descent (G06N3/00—Computing arrangements based on biological models; G06N3/08—Learning methods)
Abstract
The invention provides an ultrasound three-dimensional reconstruction method based on a self-supervised 3D fully convolutional neural network. First, a three-dimensional ultrasound image of human tissue is acquired by scanning. Next, 3D convolution kernels of different scales are trained separately and then connected in series to form a cascaded 3D fully convolutional neural network. Finally, the cascaded network is used to reconstruct the three-dimensional ultrasound image. Because kernels of different scales jointly represent each reconstructed voxel, the method takes into account voxel information from neighborhoods of several scales around the voxel to be reconstructed, and therefore achieves higher reconstruction accuracy than traditional methods with a fixed receptive field.
Description
Technical Field
The invention belongs to the technical field of intelligent image processing, and in particular relates to an ultrasound three-dimensional reconstruction method based on a self-supervised 3D fully convolutional neural network.
Background
Ultrasound three-dimensional reconstruction methods fall into two main categories. The first is reconstruction based on voxel interpolation, such as the voxel nearest-neighbor method described in "Rohling R, Gee A, Berman L. A comparison of freehand three-dimensional ultrasound reconstruction techniques [J]. Medical Image Analysis, 1999, 3(4): 339" and the method of R. W. Prager et al. in "Prager R W, Gee A, Berman L. Stradx: real-time acquisition and visualization of freehand three-dimensional ultrasound [J]. Medical Image Analysis, 1999, 3(2): 129-140". These methods are easy to implement, but their reconstruction accuracy is limited. The second is reconstruction based on fitted functions, such as the Bézier-interpolation method proposed by Huang et al. in "Huang Q, Huang Y, Hu W, et al. Bezier Interpolation for 3-D Freehand Ultrasound [J]. IEEE Transactions on Human-Machine Systems, 2015, 45(3): 385-392". However, these conventional methods consider only the voxel information in a fixed neighborhood of the voxel to be reconstructed; because their receptive field is fixed, their reconstruction accuracy has a clear ceiling. If the reconstruction process could draw on a large receptive field while also taking the information within a small receptive field into account, reconstruction accuracy could be improved.
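For concreteness, the voxel nearest-neighbor family of methods can be sketched as follows (a minimal illustration only, not the cited authors' implementation; the Chebyshev distance and the `max_radius` cutoff are assumptions of this sketch):

```python
import numpy as np

def vnn_fill(volume, max_radius=2):
    """Fill empty voxels (intensity 0) with the intensity of the
    nearest scanned voxel, up to a cutoff distance (a hypothetical
    parameter of this sketch)."""
    filled = volume.copy()
    scanned = np.argwhere(volume > 0)
    if len(scanned) == 0:
        return filled
    for idx in np.argwhere(volume == 0):
        # Chebyshev distance from this empty voxel to every scanned voxel
        d = np.abs(scanned - idx).max(axis=1)
        j = d.argmin()
        if d[j] <= max_radius:
            filled[tuple(idx)] = volume[tuple(scanned[j])]
    return filled
```

Such interpolation draws on only the single nearest sample, which is exactly the small, fixed receptive field that limits this class of methods.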
Disclosure of Invention
To improve reconstruction accuracy, the invention provides an ultrasound three-dimensional reconstruction method based on a self-supervised 3D fully convolutional neural network. A 3D convolution kernel can estimate the voxel value at its own center from the surrounding voxel values in space, which is precisely the reconstruction operation. The method adopts a simple-to-complex architecture: 3D convolution kernels of different scales are trained separately and then connected in series for reconstruction of voxels. Because kernels of different scales jointly represent the voxel to be reconstructed, voxel information from neighborhoods of several scales around that voxel is taken into account, so the method is more accurate than traditional methods with a fixed receptive field.
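The role of a 3D kernel as a center-voxel estimator can be illustrated with a hand-written averaging kernel (an illustration only; the invention's kernels are learned by training, not fixed averages):

```python
import numpy as np

# A 3x3x3 kernel that averages the 26 neighbors and ignores the center:
# sliding it over a volume replaces each voxel with its neighborhood mean,
# i.e. the kernel "reconstructs" the value at its own center.
kernel = np.ones((3, 3, 3)) / 26.0
kernel[1, 1, 1] = 0.0

def estimate_center(patch, kernel):
    """Estimate the voxel at the patch center from its neighbors."""
    return (patch * kernel).sum()
```

A trained kernel plays the same role, but with weights adjusted so the estimate matches the observed center voxels.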
An ultrasound three-dimensional reconstruction method based on a self-supervised 3D fully convolutional neural network, characterized by comprising the following steps:
Step 1: scan a human tissue region with a three-dimensional ultrasound automatic scanning robot to obtain a three-dimensional ultrasound image of the tissue; each volume element (voxel) of the image is a four-dimensional vector comprising the voxel's x coordinate, y coordinate, z coordinate, and ultrasound intensity value;
Step 2: construct three single-layer 3D convolutional neural networks, one for each of the convolution kernel sizes 3 × 3, 5 × 5, and 10 × 10;
Step 3: for each single-layer 3D convolutional neural network, take the three-dimensional ultrasound image obtained in Step 1 as input, randomly initialize the network's convolution-kernel weights, and iteratively update the weights by back-propagating the network loss; stop after one hundred iterations to obtain a trained network. The loss function of the network is defined as:
Loss = crossentropy(Y, Y′ * mask) (1)
where Loss denotes the loss function, crossentropy denotes the binary cross-entropy loss, Y is the input three-dimensional ultrasound image of human tissue, and Y′ is the network output after convolution; mask is the mask of Y: if a voxel of Y has ultrasound intensity greater than zero, the voxel at the same position in mask is set to 1, and otherwise to 0;
Step 4: connect the three trained single-layer 3D convolutional neural networks in series, in order of decreasing kernel size, to obtain a cascaded 3D fully convolutional neural network;
Step 5: input the three-dimensional ultrasound image of human tissue into the cascaded 3D fully convolutional neural network; the output is the reconstructed three-dimensional ultrasound image.
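The masked loss of equation (1) can be sketched in NumPy as follows (a minimal sketch assuming intensities are normalized to [0, 1]; `Y_pred` is a stand-in for the convolution output Y′):

```python
import numpy as np

def masked_bce(Y, Y_pred, eps=1e-7):
    """Binary cross entropy between the input volume Y and the network
    output, evaluated only where Y was actually scanned: the mask zeroes
    the prediction at unscanned voxels, so no manual labels are needed."""
    mask = (Y > 0).astype(Y.dtype)          # 1 where a voxel was scanned
    Yp = np.clip(Y_pred * mask, eps, 1 - eps)
    bce = -(Y * np.log(Yp) + (1 - Y) * np.log(1 - Yp))
    return bce.mean()
```

Because the target Y is the scan itself, this is the self-supervised part of the method: the prediction at unscanned voxels cannot affect the loss, so training uses only the data the robot acquired.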
The invention has the following beneficial effects: because several convolution kernels are used, each voxel is reconstructed from the voxel information of neighborhood spaces of different scales around it, which effectively overcomes the fixed receptive field of traditional methods and improves reconstruction accuracy.
Drawings
Fig. 1 is a flow chart of the ultrasound three-dimensional reconstruction method based on a self-supervised 3D fully convolutional neural network according to the present invention.
FIG. 2 is a schematic diagram of the single layer convolutional network training of the present invention.
Fig. 3 is a schematic diagram of a cascaded network model of the present invention.
Detailed Description
The present invention will be further described with reference to the drawings and examples, which include, but are not limited to, the following embodiment.
As shown in fig. 1, the present invention provides an ultrasound three-dimensional reconstruction method based on a self-supervised 3D fully convolutional neural network, realized as follows:
1. Collect data. Scan different tissues of different people with a three-dimensional ultrasound automatic scanning robot to acquire scans of multiple human tissue regions and build a database of ultrasound 3D models of human tissue. The database contains the scanned three-dimensional ultrasound images; each voxel of an image is a four-dimensional vector comprising the voxel's x coordinate, y coordinate, z coordinate, and ultrasound intensity value.
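The scan output just described — voxels given as (x, y, z, intensity) vectors — can be placed into a dense grid before being fed to the networks (a minimal sketch; the grid shape and the integer binning are assumptions of this illustration):

```python
import numpy as np

def samples_to_volume(samples, shape):
    """Place (x, y, z, intensity) samples into a dense voxel grid.
    Voxels never hit by the scan stay 0 and are the ones the
    cascaded network later reconstructs."""
    vol = np.zeros(shape)
    for x, y, z, v in samples:
        vol[int(x), int(y), int(z)] = v
    return vol
```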
2. Construct several single-layer 3D convolutional neural networks, one per convolution-kernel scale, as the simple networks.
3. Train the simple networks separately, as shown in fig. 2, with the loss function:
Loss = crossentropy(Y, Y′ * mask) (2)
where Loss denotes the loss function, crossentropy denotes the binary cross-entropy loss, Y is the network input, i.e. the original spatial information to be reconstructed as acquired by the robot, and Y′ is the network output, i.e. the reconstructed spatial information after convolution; mask is the mask of Y, obtained by setting the voxels of Y whose ultrasound intensity is greater than zero to 1 and all other voxels to 0. The * operation denotes voxel-wise multiplication. As the loss function shows, the method requires no manual annotation and is therefore self-supervised.
4. Connect the three trained single-layer 3D convolutional neural networks in series, in order of decreasing kernel size, to obtain the cascaded 3D fully convolutional neural network, as shown in fig. 3.
5. Input the three-dimensional ultrasound image of human tissue obtained in step 1 into the cascaded 3D fully convolutional neural network; the output is the reconstructed three-dimensional ultrasound image.
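Steps 4-5 — applying the trained single-layer networks in series, largest kernel first — can be sketched as follows (a minimal illustration; the kernels passed in would be the trained weights of step 3, and the naive loops stand in for an optimized 3D convolution):

```python
import numpy as np

def conv3d_same(vol, kernel):
    """Naive 'same'-padded 3D convolution (cross-correlation)."""
    k = kernel.shape[0]
    p = k // 2
    padded = np.pad(vol, p, mode="constant")
    out = np.empty_like(vol, dtype=float)
    for x in range(vol.shape[0]):
        for y in range(vol.shape[1]):
            for z in range(vol.shape[2]):
                out[x, y, z] = (padded[x:x+k, y:y+k, z:z+k] * kernel).sum()
    return out

def cascade(vol, kernels):
    """Apply the single-layer networks in series, largest kernel first,
    mirroring the large-to-small ordering of step 4."""
    for kern in sorted(kernels, key=lambda k: -k.shape[0]):
        vol = conv3d_same(vol, kern)
    return vol
```

The large-to-small ordering means the first stage aggregates coarse, wide-neighborhood context, and each later stage refines it with a smaller receptive field.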
Claims (1)
1. An ultrasound three-dimensional reconstruction method based on a self-supervised 3D fully convolutional neural network, characterized by comprising the following steps:
Step 1: scan a human tissue region with a three-dimensional ultrasound automatic scanning robot to obtain a three-dimensional ultrasound image of the tissue; each volume element (voxel) of the image is a four-dimensional vector comprising the voxel's x coordinate, y coordinate, z coordinate, and ultrasound intensity value;
Step 2: construct three single-layer 3D convolutional neural networks, one for each of the convolution kernel sizes 3 × 3, 5 × 5, and 10 × 10;
Step 3: for each single-layer 3D convolutional neural network, take the three-dimensional ultrasound image obtained in Step 1 as input, randomly initialize the network's convolution-kernel weights, and iteratively update the weights by back-propagating the network loss; stop after one hundred iterations to obtain a trained network. The loss function of the network is defined as:
Loss = crossentropy(Y, Y′ * mask) (1)
where Loss denotes the loss function, crossentropy denotes the binary cross-entropy loss, Y is the input three-dimensional ultrasound image of human tissue, and Y′ is the network output after convolution; mask is the mask of Y: if a voxel of Y has ultrasound intensity greater than zero, the voxel at the same position in mask is set to 1, and otherwise to 0;
Step 4: connect the three trained single-layer 3D convolutional neural networks in series, in order of decreasing kernel size, to obtain a cascaded 3D fully convolutional neural network;
Step 5: input the three-dimensional ultrasound image of human tissue into the cascaded 3D fully convolutional neural network; the output is the reconstructed three-dimensional ultrasound image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010707414.3A CN111815764B (en) | 2020-07-21 | 2020-07-21 | Ultrasonic three-dimensional reconstruction method based on self-supervision 3D full convolution neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111815764A true CN111815764A (en) | 2020-10-23 |
CN111815764B CN111815764B (en) | 2022-07-05 |
Family
ID=72861581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010707414.3A Active CN111815764B (en) | 2020-07-21 | 2020-07-21 | Ultrasonic three-dimensional reconstruction method based on self-supervision 3D full convolution neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111815764B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109598727A (en) * | 2018-11-28 | 2019-04-09 | 北京工业大学 | A kind of CT image pulmonary parenchyma three-dimensional semantic segmentation method based on deep neural network |
CN109671086A (en) * | 2018-12-19 | 2019-04-23 | 深圳大学 | A kind of fetus head full-automatic partition method based on three-D ultrasonic |
CN109829855A (en) * | 2019-01-23 | 2019-05-31 | 南京航空航天大学 | A kind of super resolution ratio reconstruction method based on fusion multi-level features figure |
CN110136157A (en) * | 2019-04-09 | 2019-08-16 | 华中科技大学 | A kind of three-dimensional carotid ultrasound image vascular wall dividing method based on deep learning |
US20190378311A1 (en) * | 2018-06-12 | 2019-12-12 | Siemens Healthcare Gmbh | Machine-Learned Network for Fourier Transform in Reconstruction for Medical Imaging |
CN110807829A (en) * | 2019-11-05 | 2020-02-18 | 张东海 | Method for constructing three-dimensional heart model based on ultrasonic imaging |
CN111091616A (en) * | 2019-11-25 | 2020-05-01 | 艾瑞迈迪科技石家庄有限公司 | Method and device for reconstructing three-dimensional ultrasonic image |
CN111192269A (en) * | 2020-01-02 | 2020-05-22 | 腾讯科技(深圳)有限公司 | Model training and medical image segmentation method and device |
US20200211160A1 (en) * | 2018-12-26 | 2020-07-02 | Shanghai United Imaging Intelligence Co., Ltd. | Systems and methods for image reconstruction |
Non-Patent Citations (3)
Title |
---|
ARUN ASOKAN NAIR et al.: "A Fully Convolutional Neural Network for Beamforming Ultrasound Images", 《IEEE》 *
FENGXIN PAN et al.: "Classification of liver tumors with CEUS based on 3D-CNN", 《IEEE》 *
德爱玲 (De Ailing) et al.: "基于矢量量化的三维图像自适应分割方法及其应用" [Adaptive three-dimensional image segmentation based on vector quantization and its application], 《光学学报》 (Acta Optica Sinica) *
Also Published As
Publication number | Publication date |
---|---|
CN111815764B (en) | 2022-07-05 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |