WO2019100718A1 - Method for optimizing ultrasound imaging system parameters based on deep learning - Google Patents

Method for optimizing ultrasound imaging system parameters based on deep learning (基于深度学习的优化超声成像系统参数的方法)

Info

Publication number
WO2019100718A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
image
dbm
input
ultrasound
Prior art date
Application number
PCT/CN2018/093561
Other languages
English (en)
French (fr)
Inventor
张智伟
赵明昌
陆坚
Original Assignee
无锡祥生医疗科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 无锡祥生医疗科技股份有限公司 filed Critical 无锡祥生医疗科技股份有限公司
Priority to US16/766,643 priority Critical patent/US11564661B2/en
Priority to EP18880759.8A priority patent/EP3716000A4/en
Publication of WO2019100718A1 publication Critical patent/WO2019100718A1/zh

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/52Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/5215Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B8/5223Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/54Control of the diagnostic device
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B8/00Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B8/58Testing, adjusting or calibrating the diagnostic device
    • A61B8/585Automatic set-up of the device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10132Ultrasound image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Definitions

  • the present invention relates to medical ultrasound imaging systems, and more particularly to a method of optimizing parameters of an ultrasound imaging system.
  • Ultrasound images are widely used in clinical practice because they are noninvasive, inexpensive, and fast to acquire.
  • however, because of the physical characteristics of ultrasound imaging, many parameters affect the image; to obtain a desired ultrasound image, numerous parameters must be adjusted, which is rather cumbersome for the user.
  • at present, most ultrasonic diagnostic devices provide presets for configuring the imaging parameters.
  • a preset is a collection containing all controllable imaging parameters.
  • Commonly used imaging parameters can be roughly divided into three categories: image acquisition parameters, display parameters, and signal processing parameters.
  • the image acquisition parameters mainly control front-end modules such as the transmitting circuit, the receiving circuit, the transducer, and the beamformer. These parameters govern image brightness, contrast, resolution, and penetration; for example, when the image is too dark, the gain parameter can be increased appropriately to brighten the whole image, and if the brightness of particular depth intervals must be controlled precisely, multiple time gain compensation values can be adjusted to control the brightness in different intervals.
  • the display parameters mainly control back-end modules such as the image processor and the display. These parameters chiefly affect the brightness, contrast, zoom factor, and pseudo-color rendering of the final displayed image.
  • the signal processing parameters mainly control the signal processing and image processor modules. The beamformed signal is subjected to various filtering operations, and the values of these parameters have a comparatively large influence on the image.
  • the object of the present invention is to overcome the deficiencies of the prior art and to provide a method for optimizing ultrasound imaging system parameters based on deep learning, usable for ultrasound imaging; by training an artificial neural network, the present invention establishes a mapping between image quality and ultrasound imaging system parameters, so that the quality of the ultrasound image is improved by optimizing the parameters of the ultrasound imaging system.
  • the technical solution adopted by the invention is:
  • a method for optimizing parameters of an ultrasound imaging system based on deep learning includes the following steps:
  • Step 1: collect samples for training a neural network, the samples comprising ultrasound image samples I and the corresponding ultrasound imaging system parameter vector samples P used by the ultrasound imaging system when acquiring those image samples;
  • Step 2: build a neural network model and train the neural network to convergence using the samples collected in step 1, obtaining a trained neural network system onn;
  • Step 3: input the original ultrasound imaging system parameter vector p or an original ultrasound image into the trained network onn; the parameters obtained at the output of onn are the optimized ultrasound imaging system parameter vector ep = onn(p);
  • the invention has the advantage that it optimizes the ultrasound imaging system parameters by means of deep learning, establishing a mapping between image quality and the ultrasound imaging system parameters, and thereby improving image quality by optimizing those parameters.
  • Figure 1 is a block diagram of the system of the present invention.
  • Figure 2 is a flow chart of the overall implementation of the present invention.
  • FIG. 3 is a schematic diagram of a training DCGAN network according to Embodiment 1 of the present invention.
  • FIG. 4 is a schematic diagram of training a multimode DBM according to Embodiment 1 of the present invention.
  • FIG. 5 is a schematic diagram of a neural network system in an application phase according to Embodiment 1 of the present invention.
  • FIG. 6 is a schematic diagram of training a multimode DBM according to Embodiment 2 of the present invention.
  • FIG. 7 is a schematic diagram of obtaining an optimized preset parameter vector EP in Embodiment 2 of the present invention.
  • FIG. 8 is a schematic diagram of a training fully connected neural network in Embodiment 2 of the present invention.
  • FIG. 9 is a schematic diagram of training a fully connected autoencoder according to Embodiment 3 of the present invention.
  • FIG. 10 is a schematic diagram of training a convolutional autoencoder according to Embodiment 3 of the present invention.
  • FIG. 11 is a schematic diagram of a training fully connected neural network DNN-T according to Embodiment 3 of the present invention.
  • FIG. 12 is a schematic diagram of a set of preset parameter samples EP obtained by a neural network system according to Embodiment 3 of the present invention.
  • FIG. 13 is a schematic diagram of training a fully connected neural network DNN according to Embodiment 3 of the present invention.
  • FIG. 14 is a schematic diagram of training a DCGAN network in Embodiment 4 of the present invention.
  • Figure 15 is a schematic diagram of training a fully connected autoencoder in Embodiment 4 of the present invention.
  • Figure 16 is a schematic diagram of training a convolutional autoencoder in Embodiment 4 of the present invention.
  • FIG. 17 is a schematic diagram of a training fully connected neural network DNN-T according to Embodiment 4 of the present invention.
  • FIG. 18 is a schematic diagram of a combined neural network system in Embodiment 4 of the present invention.
  • FIG. 19 is a schematic diagram of training the DCGAN generator network in Embodiment 5 of the present invention.
  • FIG. 20 is a schematic diagram of training a multimode DBM in Embodiment 5 of the present invention.
  • Figure 21 is a schematic diagram of a combined neural network system in Embodiment 5 of the present invention.
  • FIG. 22 is a schematic diagram of acquiring matrices OIO, EIO, MO, and ME in Embodiment 6 of the present invention.
  • Figure 23 is a schematic diagram of a training deconvolution network in Embodiment 6 of the present invention.
  • Figure 24 is a schematic diagram of an optimized input through a deconvolution network in Embodiment 6 of the present invention.
  • FIG. 25 is a schematic diagram of obtaining training data by LeNet in Embodiment 7 of the present invention.
  • Figure 26 is a schematic diagram of a training deconvolution network in Embodiment 7 of the present invention.
  • FIG. 27 is a schematic diagram of preset parameters of an input optimized by a deconvolution network according to Embodiment 7 of the present invention.
  • FIG. 28 is a schematic diagram of parameters related to preset values according to an embodiment of the present invention.
  • The system block diagram of the technical solution of the present invention is shown in FIG. 1, and FIG. 2 shows a flowchart of its implementation.
  • a method for optimizing parameters of an ultrasound imaging system based on deep learning includes the following steps:
  • Step 1: collect N groups of samples for training a neural network; the samples include, but are not limited to, ultrasound image samples I and the corresponding ultrasound imaging system parameter vector samples P used by the ultrasound imaging system when acquiring those image samples;
  • Step 2: build a neural network model and train the neural network to convergence using the samples collected in step 1, obtaining a trained neural network system onn;
  • Step 3: input the original ultrasound imaging system parameter vector p or an original ultrasound image into the trained network onn; the parameters obtained at the output of onn are the optimized ultrasound imaging system parameter vector ep = onn(p);
  • Embodiment 1 The technical solution is specifically divided into a training phase and an application phase, and the steps in each phase are as follows:
  • Step 101: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 1000 images) and a preset parameter sample set OP (containing 1000 sets of preset parameters);
  • Step 102 Acquire an optimized image sample set EI corresponding to the OI (including 1000 images that are consistent with each image content in the OI but have better quality);
  • Step 103: use OI and EI to train the DCGAN (deep convolutional generative adversarial network) shown in FIG. 3 until the generator network of the DCGAN can output the corresponding optimized image given an original ultrasound image, thereby obtaining a trained DCGAN;
  • the DCGAN includes a generator network and a discriminator network; the OI samples are input to the generator network, which generates an image corresponding to each OI sample, and the discriminator network then compares the image generated by the generator network against the corresponding EI sample for consistency;
  • Step 104: use OP and OI to train the multimodal DBM (deep Boltzmann machine) shown in FIG. 4 to convergence, thereby obtaining a trained multimodal DBM;
  • the multimodal DBM includes a convolutional DBM, an ordinary DBM, and a shared hidden layer linking the convolutional DBM and the ordinary DBM; OI is input to the convolutional DBM, OP to the ordinary DBM, and the shared hidden layer establishes the relationship between the OI and OP information;
  • Step a101: take the generator network of the DCGAN trained in step 103 of the training phase and the multimodal DBM trained in step 104 to form the artificial neural network system shown in FIG. 5; the output of the generator network is connected to the input of the convolutional DBM;
  • Step a102: input the original ultrasound image to the input of the generator network in FIG. 5; the parameter vector end of the ordinary DBM outputs the optimized ultrasound imaging system parameter vector (i.e., the optimized preset parameter vector).
  • the set of ultrasound imaging system parameters in step 101 is as follows: transmit power p1, transmit frequency p2, receive frequency p3, beam density p4, penetration depth p5, total gain p6, time gain compensation p7, focus position p8, pulse repetition frequency p9, dynamic range p10, image resolution p11, edge enhancement p12, etc.; p1-p12 represent the values of the respective parameters, and an ultrasound image is acquired under the preset defined by these values.
  • Embodiment 2 The technical solution is specifically divided into a training phase and an application phase, and the steps in each phase are as follows:
  • Step 201: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 2000 images) and a preset parameter sample set OP (containing 2000 sets of preset parameters);
  • Step 202 Acquire an optimized image sample set EI corresponding to the OI (including 2000 images that are consistent with each image content in the OI but have better quality);
  • Step 203: use the OI sample set and the OP sample set to train the multimodal DBM (deep Boltzmann machine) shown in FIG. 6 to convergence, thereby obtaining a trained multimodal DBM;
  • the multimodal DBM includes a convolutional DBM, an ordinary DBM, and a shared hidden layer linking the convolutional DBM and the ordinary DBM; OI is input to the convolutional DBM, OP to the ordinary DBM, and the shared hidden layer establishes the relationship between the OI and OP information;
  • Step 204: input the sample set EI to the convolutional DBM input of the multimodal DBM trained in step 203; the result output by the multimodal DBM at the parameter vector end of the ordinary DBM is the corresponding optimized preset parameter vector EP; the process is shown in FIG. 7;
  • Step 205: train the fully connected neural network DNN shown in FIG. 8 using OP as input and EP as output;
  • Step a201: input a preset parameter vector to the input of the trained fully connected neural network DNN obtained in step 205 of the training phase; the vector obtained at the output is the optimized ultrasound imaging system parameter vector (i.e., the optimized preset parameter vector).
  • Embodiment 3 The technical solution is specifically divided into a training phase and an application phase, and the steps in each phase are as follows:
  • Step 301: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 2000 images) and a preset parameter sample set OP (containing 2000 sets of preset parameters);
  • Step 302 Acquire an optimized image sample set EI corresponding to the OI (including 2000 images that are consistent with each image content in the OI but have better quality);
  • Step 303: train a fully connected autoencoder DNN-AutoEncoder using the preset parameter sample set OP, as shown in FIG. 9;
  • the fully connected autoencoder includes a cascaded fully connected encoder and fully connected decoder; the fully connected encoder compresses high-dimensional input information into a low-dimensional space, and the fully connected decoder converts the compressed low-dimensional information back to the original high-dimensional space;
  • Step 304: train a convolutional autoencoder CNN-AutoEncoder using the image sample set OI, as shown in FIG. 10;
  • the convolutional autoencoder includes a cascaded convolutional encoder and convolutional decoder; the convolutional encoder compresses high-dimensional input information into a low-dimensional space, and the convolutional decoder converts the compressed low-dimensional information back to the original high-dimensional space;
  • Step 305: input OI to the convolutional encoder of the CNN-AutoEncoder trained in step 304 to obtain its output MI, and input OP to the fully connected encoder of the DNN-AutoEncoder of step 303 to obtain its output MP; train the fully connected neural network DNN-T using MI as input and MP as output, as shown in FIG. 11;
  • Step 306: the convolutional encoder part of the CNN-AutoEncoder, the fully connected decoder part of the DNN-AutoEncoder, and DNN-T constitute the neural network system shown in FIG. 12;
  • the convolutional encoder part of the CNN-AutoEncoder is connected to DNN-T, and DNN-T is connected to the fully connected decoder part of the DNN-AutoEncoder;
  • the EI sample set is input to the convolutional encoder end of the CNN-AutoEncoder of this system, and the optimized preset parameter sample set EP is obtained at the output of the fully connected decoder of the DNN-AutoEncoder;
  • Step 307: use the preset parameter sample set OP obtained in step 301 and the optimized preset parameter sample set EP obtained in step 306 to train the fully connected neural network DNN shown in FIG. 13 until the network converges;
  • Step a301: input a preset parameter vector to the DNN obtained in step 307 of the training phase; the optimized ultrasound imaging system parameter vector is obtained at the output.
  • Embodiment 4 The technical solution is specifically divided into a training phase and an application phase, and the steps in each phase are as follows:
  • Step 401: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 2000 images) and a preset parameter sample set OP (containing 2000 sets of preset parameters);
  • Step 402 Acquire an optimized image sample set EI corresponding to the OI (including 2000 images that are consistent with each image content in the OI but have better quality);
  • Step 403: use the sample sets OI and EI to train the DCGAN shown in FIG. 14 until the generator network of the DCGAN can output the corresponding optimized image given an original ultrasound image, thereby obtaining a trained DCGAN;
  • the DCGAN includes a generator network and a discriminator network; the OI samples are input to the generator network, which generates an image corresponding to each OI sample, and the discriminator network then compares the generated image against the corresponding EI sample for consistency;
  • Step 404: train a fully connected autoencoder DNN-AutoEncoder using the preset parameter sample set OP, as shown in FIG. 15;
  • the fully connected autoencoder includes a cascaded fully connected encoder and fully connected decoder; the fully connected encoder compresses high-dimensional input information into a low-dimensional space, and the fully connected decoder converts the compressed low-dimensional information back to the original high-dimensional space;
  • Step 405: train a convolutional autoencoder CNN-AutoEncoder using the image sample set OI, as shown in FIG. 16;
  • the convolutional autoencoder includes a cascaded convolutional encoder and convolutional decoder; the convolutional encoder compresses high-dimensional input information into a low-dimensional space, and the convolutional decoder converts the compressed low-dimensional information back to the original high-dimensional space;
  • Step 406: input OI to the convolutional encoder of the CNN-AutoEncoder trained in step 405 to obtain its output MI, and input OP to the fully connected encoder of the DNN-AutoEncoder of step 404 to obtain its output MP; train the fully connected neural network DNN-T using MI as input and MP as output, as shown in FIG. 17;
  • Step a401: the generator network of the DCGAN trained in step 403, the fully connected decoder of the DNN-AutoEncoder trained in step 404, and the convolutional encoder of the CNN-AutoEncoder trained in step 405 form, together with DNN-T, the neural network system shown in FIG. 18;
  • the generator network of the DCGAN is connected to the convolutional encoder of the CNN-AutoEncoder, the convolutional encoder of the CNN-AutoEncoder is connected to DNN-T, and DNN-T is connected to the fully connected decoder of the DNN-AutoEncoder;
  • Step a402: input the original ultrasound image to the neural network system shown in FIG. 18; the output of the neural network system is then the optimized ultrasound imaging system parameter vector (i.e., the optimized preset parameter vector).
  • Embodiment 5 The technical solution is specifically divided into a training phase and an application phase, and the steps in each phase are as follows:
  • Step 501: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 2000 images) and a preset parameter sample set OP (containing 2000 sets of preset parameters);
  • Step 502 Acquire an optimized image sample set EI corresponding to the OI (including 2000 images that are consistent with each image content in the OI but have better quality);
  • Step 503: take the generator network part of a DCGAN (deep convolutional generative adversarial network);
  • Step 504: the DCGAN generator network contains N convolutional layers; input the OI samples and the EI samples to the generator network in turn, and take the outputs of the n-th layer (n ∈ [1, N]) as OIO and EIO respectively;
  • Step 505: let OIO and EIO obtained in step 504 each consist of m matrices; vectorize all m matrices and merge them into the matrices MO (corresponding to OIO) and ME (corresponding to EIO) respectively;
  • Step 506: train this DCGAN generator network with loss = 1/m(OIO-EIO)^2+(MO^2-ME^2)^2 as the optimization objective until convergence, as shown in FIG. 19; loss is the loss function;
  • Step 507: use OP and OI to train the multimodal DBM (deep Boltzmann machine) shown in FIG. 20 to convergence, thereby obtaining a trained multimodal DBM;
  • the multi-mode DBM includes a convolution DBM, a common DBM, and a shared hidden layer of a convolutional DBM and a common DBM; an OI input convolution DBM, an OP inputting a common DBM, and a shared hidden layer establishing a relationship between OI and OP;
  • Step a501: construct the neural network system shown in FIG. 21 from the DCGAN generator network trained in step 506 of the training phase and the multimodal DBM trained in step 507;
  • the DCGAN generator network is connected to the convolutional DBM of the multimodal DBM;
  • Step a502: input an ultrasound image to the DCGAN generator network of the neural network system shown in FIG. 21; the vector obtained at the ordinary DBM output of the system is the optimized ultrasound imaging system parameter vector (i.e., the optimized preset parameter vector).
  • Embodiment 6 The technical solution is specifically divided into a training phase and an application phase, and the steps in each phase are as follows:
  • Step 601: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 2000 images) and a preset parameter sample set OP (containing 2000 sets of preset parameters);
  • Step 602 Acquire an optimized image sample set EI corresponding to the OI (including 2000 images that are consistent with each image content in the OI but have better quality);
  • Step 603: take the convolutional network part of a VGG network; the VGG network is one of the network structures of deep learning;
  • Step 604: the VGG convolutional network contains N layers; input the OI samples and the EI samples to the convolutional network in turn, and take the outputs of the n-th layer (n ∈ [1, N]) as OIO and EIO respectively;
  • Step 605: let OIO and EIO obtained in step 604 each consist of m matrices; vectorize all m matrices and arrange each vectorized matrix as a row, merging them into the matrices MO (corresponding to OIO) and ME (corresponding to EIO) respectively, as shown in FIG. 22;
  • Step 606: train a deconvolution network with OP as input and (OIO-EIO)^2 and (MO^2-ME^2)^2 as outputs, as shown in FIG. 23;
  • Step a601: input the ultrasound system preset parameter vector into the deconvolution network trained in step 606 of the training phase; with the network weights fixed, optimize the preset parameter values at the network input with the objective that the sum of the two outputs of the deconvolution network be 0, until the network converges; the modified preset parameter vector at the network input upon convergence is the optimized ultrasound imaging system parameter vector (i.e., the optimized preset parameter vector); the process is shown in FIG. 24.
  • Embodiment 7 The technical solution is specifically divided into a training phase and an application phase, and the steps in each phase are as follows:
  • Step 701: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 2000 images) and a preset parameter sample set OP (containing 2000 sets of preset parameters);
  • Step 702 Acquire an optimized image sample set EI corresponding to the OI (including 2000 images that are consistent with each image content in the OI but have better quality);
  • Step 703: take the convolutional network part of a LeNet network; the LeNet network is one of the network structures of deep learning;
  • Step 704: input the OI samples and the EI samples to the LeNet convolutional network in turn, and take the outputs of the last layer as OIO and EIO respectively; the OIO and EIO obtained each consist of m matrices, as shown in FIG. 25;
  • Step 705: train a deconvolution network with OP as input and the corresponding res = OIO - EIO as output, as shown in FIG. 26; res is the difference between the matrices OIO and EIO;
  • Application phase: input the ultrasound system preset parameter vector into the deconvolution network trained in step 705 of the training phase; with the network weights fixed, optimize the preset parameter values at the network input with the objective that the deconvolution network output be 0, until the network converges; the modified parameter vector at the network input upon convergence is the optimized ultrasound imaging system parameter vector (i.e., the optimized preset parameter vector); the process is shown in FIG. 27.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Quality & Reliability (AREA)
  • Physiology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

A method for optimizing ultrasound imaging system parameters based on deep learning, comprising the following steps. Step 1: collect samples for training a neural network; the samples include ultrasound image samples I and the corresponding ultrasound imaging system parameter vector samples P used by the ultrasound imaging system when acquiring those image samples. Step 2: build a neural network model and train it to convergence using the samples collected in step 1, obtaining a trained neural network system onn. Step 3: input the original ultrasound imaging system parameter vector p or an original ultrasound image into the neural network system onn trained in step 2; the parameters obtained at the output of onn are the optimized ultrasound imaging system parameter vector ep = onn(p). The method improves ultrasound image quality by optimizing the parameters of the ultrasound imaging system.

Description

Method for optimizing ultrasound imaging system parameters based on deep learning

TECHNICAL FIELD
The present invention relates to medical ultrasound imaging systems, and more particularly to a method for optimizing the parameters of an ultrasound imaging system.

BACKGROUND
Ultrasound images are widely used in clinical practice because they are noninvasive, inexpensive, and fast to acquire. However, because of the physical characteristics of ultrasound imaging, many parameters affect the image; to obtain a desired ultrasound image, numerous parameters must be adjusted, which is rather cumbersome for the user.
At present, most ultrasonic diagnostic devices provide presets for configuring the imaging parameters. A preset is a collection containing all controllable imaging parameters. Commonly used imaging parameters fall roughly into three categories: image acquisition parameters, display parameters, and signal processing parameters. Image acquisition parameters mainly control front-end modules such as the transmitting circuit, receiving circuit, transducer, and beamformer; they govern image brightness, contrast, resolution, and penetration. For example, when the image is too dark, the gain parameter can be increased appropriately to brighten the whole image; to control the brightness of particular depth intervals precisely, multiple time gain compensation values can be adjusted. Display parameters mainly control back-end modules such as the image processor and the display, and chiefly affect the brightness, contrast, zoom factor, and pseudo-color rendering of the final displayed image. Signal processing parameters mainly control the signal processing and image processor modules, applying various filtering operations to the beamformed signal; the values of these parameters have a comparatively large influence on the image.
With the development of technology, deep learning has made progress in other fields, but in medical ultrasound, owing to the complexity of ultrasound systems, deep learning techniques still have shortcomings: they cannot yet adjust the current presets quickly, accurately, and intelligently according to the current image.
SUMMARY OF THE INVENTION
The object of the present invention is to overcome the deficiencies of the prior art and to provide a method for optimizing ultrasound imaging system parameters based on deep learning, usable for ultrasound imaging; by training an artificial neural network, the present invention establishes a mapping between image quality and the ultrasound imaging system parameters, so that the quality of ultrasound images is improved by optimizing the parameters of the ultrasound imaging system. The technical solution adopted by the invention is:
A method for optimizing ultrasound imaging system parameters based on deep learning, comprising the following steps:
Step 1: collect samples for training a neural network, the samples comprising ultrasound image samples I and the corresponding ultrasound imaging system parameter vector samples P used by the ultrasound imaging system when acquiring those image samples;
Step 2: build a neural network model and train the neural network to convergence using the samples collected in step 1, obtaining a trained neural network system onn;
Step 3: input the original ultrasound imaging system parameter vector p or an original ultrasound image into the neural network system onn trained in step 2; the parameters obtained at the output of onn are the optimized ultrasound imaging system parameter vector ep = onn(p).
The advantage of the present invention is that it optimizes the ultrasound imaging system parameters by means of deep learning, establishing a mapping between image quality and the system parameters, and thereby improving image quality by optimizing those parameters.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of the system of the present invention.
FIG. 2 is a flowchart of the overall implementation of the present invention.
FIG. 3 is a schematic diagram of training the DCGAN network in Embodiment 1 of the present invention.
FIG. 4 is a schematic diagram of training the multimodal DBM in Embodiment 1 of the present invention.
FIG. 5 is a schematic diagram of the neural network system in the application phase in Embodiment 1 of the present invention.
FIG. 6 is a schematic diagram of training the multimodal DBM in Embodiment 2 of the present invention.
FIG. 7 is a schematic diagram of obtaining the optimized preset parameter vector EP in Embodiment 2 of the present invention.
FIG. 8 is a schematic diagram of training the fully connected neural network in Embodiment 2 of the present invention.
FIG. 9 is a schematic diagram of training the fully connected autoencoder in Embodiment 3 of the present invention.
FIG. 10 is a schematic diagram of training the convolutional autoencoder in Embodiment 3 of the present invention.
FIG. 11 is a schematic diagram of training the fully connected neural network DNN-T in Embodiment 3 of the present invention.
FIG. 12 is a schematic diagram of obtaining the optimized preset parameter sample set EP through the neural network system in Embodiment 3 of the present invention.
FIG. 13 is a schematic diagram of training the fully connected neural network DNN in Embodiment 3 of the present invention.
FIG. 14 is a schematic diagram of training the DCGAN network in Embodiment 4 of the present invention.
FIG. 15 is a schematic diagram of training the fully connected autoencoder in Embodiment 4 of the present invention.
FIG. 16 is a schematic diagram of training the convolutional autoencoder in Embodiment 4 of the present invention.
FIG. 17 is a schematic diagram of training the fully connected neural network DNN-T in Embodiment 4 of the present invention.
FIG. 18 is a schematic diagram of the combined neural network system in Embodiment 4 of the present invention.
FIG. 19 is a schematic diagram of training the DCGAN generator network in Embodiment 5 of the present invention.
FIG. 20 is a schematic diagram of training the multimodal DBM in Embodiment 5 of the present invention.
FIG. 21 is a schematic diagram of the combined neural network system in Embodiment 5 of the present invention.
FIG. 22 is a schematic diagram of obtaining the matrices OIO, EIO, MO, and ME in Embodiment 6 of the present invention.
FIG. 23 is a schematic diagram of training the deconvolution network in Embodiment 6 of the present invention.
FIG. 24 is a schematic diagram of optimizing the input through the deconvolution network in Embodiment 6 of the present invention.
FIG. 25 is a schematic diagram of obtaining training data through LeNet in Embodiment 7 of the present invention.
FIG. 26 is a schematic diagram of training the deconvolution network in Embodiment 7 of the present invention.
FIG. 27 is a schematic diagram of optimizing the input preset parameters through the deconvolution network in Embodiment 7 of the present invention.
FIG. 28 is a schematic diagram of the parameters related to presets in an embodiment of the present invention.
DETAILED DESCRIPTION
The present invention is further described below with reference to the accompanying drawings and embodiments.
The system block diagram of the technical solution of the present invention is shown in FIG. 1, and FIG. 2 shows a flowchart of its implementation.
A method for optimizing ultrasound imaging system parameters based on deep learning comprises the following steps:
Step 1: collect N groups of samples for training a neural network; the samples include, but are not limited to, ultrasound image samples I and the corresponding ultrasound imaging system parameter vector samples P used by the ultrasound imaging system when acquiring those image samples;
Step 2: build a neural network model and train the neural network to convergence using the samples collected in step 1, obtaining a trained neural network system onn;
Step 3: input the original ultrasound imaging system parameter vector p or an original ultrasound image into the neural network system onn trained in step 2; the parameters obtained at the output of onn are the optimized ultrasound imaging system parameter vector ep = onn(p). A minimal code sketch of this three-step flow is given below.
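To make the three-step flow concrete, here is a minimal, hedged sketch in Python/PyTorch. The tiny fully connected network, the random placeholder data, and the names ParamNet and N_PARAMS are illustrative assumptions standing in for the trained system onn; the patent's embodiments use DCGANs, multimodal DBMs, autoencoders, and deconvolution networks instead.

```python
# Minimal sketch of the three-step flow (PyTorch). The small fully connected
# network below is only a stand-in for the trained neural network system onn.
import torch
import torch.nn as nn

N_PARAMS = 12  # e.g. p1..p12: transmit power, frequencies, gains, ...

class ParamNet(nn.Module):  # illustrative stand-in for onn
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_PARAMS, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_PARAMS))
    def forward(self, p):
        return self.net(p)

# Step 1: collect samples (P = presets used at acquisition, EP = optimized).
P  = torch.rand(1000, N_PARAMS)   # placeholder training data
EP = torch.rand(1000, N_PARAMS)

# Step 2: train the model to convergence on the collected samples.
onn = ParamNet()
opt = torch.optim.Adam(onn.parameters(), lr=1e-3)
for epoch in range(200):
    loss = nn.functional.mse_loss(onn(P), EP)
    opt.zero_grad(); loss.backward(); opt.step()

# Step 3: apply the trained system -- ep = onn(p).
p = torch.rand(1, N_PARAMS)       # original preset vector
ep = onn(p)                       # optimized preset vector
```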
Embodiment 1: the technical solution is divided into a training phase and an application phase, executed in order; the steps of each phase are as follows:
Training phase:
Step 101: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 1000 images) and a preset parameter sample set OP (containing 1000 sets of preset parameters);
Step 102: obtain the optimized image sample set EI corresponding to OI (containing 1000 images consistent in content with each image in OI but of better quality);
Step 103: use OI and EI to train the DCGAN (deep convolutional generative adversarial network) shown in FIG. 3 until the generator network of the DCGAN can output the corresponding optimized image given an original ultrasound image, thereby obtaining a trained DCGAN;
the DCGAN comprises a generator network and a discriminator network; the OI samples are input to the generator network, which generates an image corresponding to each OI sample, and the discriminator network then compares the generated image against the corresponding EI sample for consistency (a training sketch follows below);
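As an illustration of step 103, the following is a hedged PyTorch sketch of DCGAN-style image-to-image training: the generator maps an OI image toward its EI counterpart, the discriminator judges generated images against EI samples, and an L1 term stands in for the consistency comparison mentioned above. All layer sizes, the 64x64 image size, and the loss weighting are assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(  # downsample the original image
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2))
        self.dec = nn.Sequential(  # upsample back to an "optimized" image
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh())
    def forward(self, x):
        return self.dec(self.enc(x))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

oi = torch.rand(8, 1, 64, 64)  # batch of original images OI (placeholder)
ei = torch.rand(8, 1, 64, 64)  # corresponding optimized images EI

for step in range(1000):
    # Discriminator: EI samples count as "real", generated images as "fake".
    fake = G(oi).detach()
    loss_d = bce(D(ei), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool D, plus an L1 consistency term against the EI sample.
    fake = G(oi)
    loss_g = bce(D(fake), torch.ones(8, 1)) + 10.0 * (fake - ei).abs().mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```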
Step 104: use OP and OI to train the multimodal DBM (deep Boltzmann machine) shown in FIG. 4 to convergence, thereby obtaining a trained multimodal DBM;
the multimodal DBM comprises a convolutional DBM, an ordinary DBM, and a shared hidden layer linking the convolutional DBM and the ordinary DBM; OI is input to the convolutional DBM, OP to the ordinary DBM, and the shared hidden layer establishes the relationship between the OI and OP information (a simplified sketch follows);
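Training a full multimodal DBM (convolutional DBM, ordinary DBM, and shared hidden layer, trained with layer-wise pretraining) goes beyond a short sketch. As a loose, simplified stand-in, the code below trains a single Bernoulli RBM with one step of contrastive divergence (CD-1) over the concatenation of a flattened, binarized image code and the preset vector, so that its hidden layer plays the role of the shared hidden layer linking the two modalities. This is an approximation under stated assumptions, not the patent's exact model.

```python
import torch

class RBM:
    """Bernoulli RBM trained with CD-1: a drastically simplified stand-in
    for the shared hidden layer of the multimodal DBM."""
    def __init__(self, n_vis, n_hid, lr=0.01):
        self.W = torch.zeros(n_vis, n_hid)
        self.b = torch.zeros(n_vis)   # visible bias
        self.c = torch.zeros(n_hid)   # hidden bias
        self.lr = lr

    def h_prob(self, v):
        return torch.sigmoid(v @ self.W + self.c)

    def v_prob(self, h):
        return torch.sigmoid(h @ self.W.t() + self.b)

    def cd1(self, v0):
        # One Gibbs step: v0 -> h0 -> v1 -> h1, then a weight update.
        h0 = self.h_prob(v0)
        v1 = self.v_prob(torch.bernoulli(h0))
        h1 = self.h_prob(v1)
        self.W += self.lr * (v0.t() @ h0 - v1.t() @ h1) / v0.shape[0]
        self.b += self.lr * (v0 - v1).mean(0)
        self.c += self.lr * (h0 - h1).mean(0)

# Joint visible layer = [flattened image code, binarized preset vector].
img_code = (torch.rand(1000, 256) > 0.5).float()  # placeholder image features
presets  = (torch.rand(1000, 12) > 0.5).float()   # placeholder binarized OP
v = torch.cat([img_code, presets], dim=1)

rbm = RBM(n_vis=268, n_hid=128)
for epoch in range(50):
    rbm.cd1(v)

# Conditional use: clamp the image part, reconstruct the preset part.
h = rbm.h_prob(torch.cat([img_code[:1], torch.zeros(1, 12)], dim=1))
recon_presets = rbm.v_prob(h)[:, 256:]  # reconstructed preset probabilities
```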
Application phase:
Step a101: take the generator network of the DCGAN trained in step 103 of the training phase and the multimodal DBM trained in step 104 to form the artificial neural network system shown in FIG. 5; the output of the generator network is connected to the input of the convolutional DBM;
Step a102: input an original ultrasound image to the generator network input (the image end) in FIG. 5; the parameter vector end of the ordinary DBM outputs the optimized ultrasound imaging system parameter vector (i.e., the optimized preset parameter vector).
The set of ultrasound imaging system parameters in step 101 is as follows: transmit power p1, transmit frequency p2, receive frequency p3, beam density p4, penetration depth p5, total gain p6, time gain compensation p7, focus position p8, pulse repetition frequency p9, dynamic range p10, image resolution p11, edge enhancement p12, etc.; p1-p12 represent the values of the respective parameters, and an ultrasound image is acquired under the preset defined by this set of values. A data-structure sketch follows.
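One possible way to hold such a preset in code is a flat record with one field per parameter; the field names follow the list above, while the use of plain floats (rather than device-specific ranges or units) is an assumption for illustration only.

```python
# A hypothetical in-code representation of the step 101 preset: one field per
# parameter p1..p12, plus a helper producing the vector fed to the networks.
from dataclasses import dataclass, astuple
import torch

@dataclass
class Preset:
    transmit_power: float         # p1
    transmit_freq: float          # p2
    receive_freq: float           # p3
    beam_density: float           # p4
    penetration_depth: float      # p5
    total_gain: float             # p6
    time_gain_comp: float         # p7
    focus_position: float         # p8
    pulse_repetition_freq: float  # p9
    dynamic_range: float          # p10
    image_resolution: float       # p11
    edge_enhancement: float       # p12

    def to_vector(self) -> torch.Tensor:
        return torch.tensor(astuple(self), dtype=torch.float32)
```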
Embodiment 2: the technical solution is divided into a training phase and an application phase, executed in order; the steps of each phase are as follows:
Training phase:
Step 201: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 2000 images) and a preset parameter sample set OP (containing 2000 sets of preset parameters);
Step 202: obtain the optimized image sample set EI corresponding to OI (containing 2000 images consistent in content with each image in OI but of better quality);
Step 203: use the OI sample set and the OP sample set to train the multimodal DBM (deep Boltzmann machine) shown in FIG. 6 to convergence, thereby obtaining a trained multimodal DBM;
the multimodal DBM comprises a convolutional DBM, an ordinary DBM, and a shared hidden layer linking the convolutional DBM and the ordinary DBM; OI is input to the convolutional DBM, OP to the ordinary DBM, and the shared hidden layer establishes the relationship between the OI and OP information;
Step 204: input the sample set EI to the convolutional DBM input of the multimodal DBM trained in step 203; the result output by the multimodal DBM at the preset parameter end (the parameter vector end of the ordinary DBM) is the corresponding optimized preset parameter vector EP; the process is shown in FIG. 7;
Step 205: train the fully connected neural network DNN shown in FIG. 8 using OP as input and EP as output;
Application phase:
Step a201: input a preset parameter vector to the input of the trained fully connected neural network DNN obtained in step 205 of the training phase; the vector obtained at the output is the optimized ultrasound imaging system parameter vector (i.e., the optimized preset parameter vector).
Embodiment 3: the technical solution is divided into a training phase and an application phase, executed in order; the steps of each phase are as follows:
Training phase:
Step 301: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 2000 images) and a preset parameter sample set OP (containing 2000 sets of preset parameters);
Step 302: obtain the optimized image sample set EI corresponding to OI (containing 2000 images consistent in content with each image in OI but of better quality);
Step 303: train a fully connected autoencoder DNN-AutoEncoder using the preset parameter sample set OP, as shown in FIG. 9;
the fully connected autoencoder comprises a cascaded fully connected encoder and fully connected decoder; the fully connected encoder compresses high-dimensional input information into a low-dimensional space, and the fully connected decoder converts the compressed low-dimensional information back to the original high-dimensional space;
Step 304: train a convolutional autoencoder CNN-AutoEncoder using the image sample set OI, as shown in FIG. 10;
the convolutional autoencoder comprises a cascaded convolutional encoder and convolutional decoder; the convolutional encoder compresses high-dimensional input information into a low-dimensional space, and the convolutional decoder converts the compressed low-dimensional information back to the original high-dimensional space (a sketch of both autoencoders follows);
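The following hedged sketch shows one way steps 303 and 304 could look in PyTorch: a small fully connected autoencoder for the preset vectors and a small convolutional autoencoder for the images, each trained on reconstruction. Layer sizes, latent dimensions, and the placeholder data are assumptions.

```python
import torch
import torch.nn as nn

class DNNAutoEncoder(nn.Module):
    """Fully connected autoencoder for preset parameter vectors."""
    def __init__(self, n_params=12, n_latent=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_params, 8), nn.ReLU(),
                                     nn.Linear(8, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 8), nn.ReLU(),
                                     nn.Linear(8, n_params))
    def forward(self, p):
        return self.decoder(self.encoder(p))

class CNNAutoEncoder(nn.Module):
    """Convolutional autoencoder for ultrasound images."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, 2, 1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Sigmoid())
    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_reconstruction(model, data, epochs=100):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(data), data)
        opt.zero_grad(); loss.backward(); opt.step()

OP = torch.rand(2000, 12)       # preset parameter samples (placeholder)
OI = torch.rand(64, 1, 64, 64)  # image samples (placeholder subset)
dnn_ae, cnn_ae = DNNAutoEncoder(), CNNAutoEncoder()
train_reconstruction(dnn_ae, OP)
train_reconstruction(cnn_ae, OI)
```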
Step 305: input OI to the convolutional encoder of the CNN-AutoEncoder trained in step 304 to obtain its output MI, and input OP to the fully connected encoder of the DNN-AutoEncoder of step 303 to obtain its output MP; train the fully connected neural network DNN-T using MI as input and MP as output, as shown in FIG. 11;
Step 306: the convolutional encoder part of the CNN-AutoEncoder, the fully connected decoder part of the DNN-AutoEncoder, and DNN-T constitute the neural network system shown in FIG. 12;
the convolutional encoder part of the CNN-AutoEncoder is connected to DNN-T, and DNN-T is connected to the fully connected decoder part of the DNN-AutoEncoder;
input the EI sample set to the convolutional encoder end of the CNN-AutoEncoder of this neural network system, and obtain the optimized preset parameter sample set EP at the output of the fully connected decoder of the DNN-AutoEncoder; a sketch of this composition follows below;
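Continuing the autoencoder sketch above (and reusing its cnn_ae, dnn_ae, OI, and OP), steps 305 and 306 could be sketched as follows: DNN-T learns to map image codes MI to parameter codes MP, and the composed chain conv encoder -> DNN-T -> fully connected decoder turns optimized images EI into optimized presets EP. The latent sizes and the way samples are aligned are assumptions.

```python
import torch
import torch.nn as nn

with torch.no_grad():
    MI = cnn_ae.encoder(OI).flatten(1)      # image codes for each OI sample
    MP = dnn_ae.encoder(OP[:OI.shape[0]])   # parameter codes (matched subset)

# DNN-T: fully connected mapping from image codes to parameter codes.
dnn_t = nn.Sequential(nn.Linear(MI.shape[1], 64), nn.ReLU(),
                      nn.Linear(64, MP.shape[1]))
opt = torch.optim.Adam(dnn_t.parameters(), lr=1e-3)
for _ in range(200):
    loss = nn.functional.mse_loss(dnn_t(MI), MP)
    opt.zero_grad(); loss.backward(); opt.step()

# Composed system of FIG. 12: EI -> conv encoder -> DNN-T -> FC decoder -> EP.
EI = torch.rand_like(OI)                    # optimized images (placeholder)
with torch.no_grad():
    EP = dnn_ae.decoder(dnn_t(cnn_ae.encoder(EI).flatten(1)))
```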
Step 307: use the preset parameter sample set OP obtained in step 301 and the optimized preset parameter sample set EP obtained in step 306 to train the fully connected neural network DNN shown in FIG. 13 until the network converges;
Application phase:
Step a301: input a preset parameter vector to the DNN obtained in step 307 of the training phase; the optimized ultrasound imaging system parameter vector is obtained at the output.
Embodiment 4: the technical solution is divided into a training phase and an application phase, executed in order; the steps of each phase are as follows:
Training phase:
Step 401: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 2000 images) and a preset parameter sample set OP (containing 2000 sets of preset parameters);
Step 402: obtain the optimized image sample set EI corresponding to OI (containing 2000 images consistent in content with each image in OI but of better quality);
Step 403: use the sample sets OI and EI to train the DCGAN shown in FIG. 14 until the generator network of the DCGAN can output the corresponding optimized image given an original ultrasound image, thereby obtaining a trained DCGAN;
the DCGAN comprises a generator network and a discriminator network; the OI samples are input to the generator network, which generates an image corresponding to each OI sample, and the discriminator network then compares the generated image against the corresponding EI sample for consistency;
Step 404: train a fully connected autoencoder DNN-AutoEncoder using the preset parameter sample set OP, as shown in FIG. 15;
the fully connected autoencoder comprises a cascaded fully connected encoder and fully connected decoder; the fully connected encoder compresses high-dimensional input information into a low-dimensional space, and the fully connected decoder converts the compressed low-dimensional information back to the original high-dimensional space;
Step 405: train a convolutional autoencoder CNN-AutoEncoder using the image sample set OI, as shown in FIG. 16;
the convolutional autoencoder comprises a cascaded convolutional encoder and convolutional decoder; the convolutional encoder compresses high-dimensional input information into a low-dimensional space, and the convolutional decoder converts the compressed low-dimensional information back to the original high-dimensional space;
Step 406: input OI to the convolutional encoder of the CNN-AutoEncoder trained in step 405 to obtain its output MI, and input OP to the fully connected encoder of the DNN-AutoEncoder of step 404 to obtain its output MP; train the fully connected neural network DNN-T using MI as input and MP as output, as shown in FIG. 17;
Application phase:
Step a401: the generator network of the DCGAN trained in step 403, the fully connected decoder of the DNN-AutoEncoder trained in step 404, and the convolutional encoder of the CNN-AutoEncoder trained in step 405 form, together with DNN-T, the neural network system shown in FIG. 18;
the generator network of the DCGAN is connected to the convolutional encoder of the CNN-AutoEncoder, the convolutional encoder of the CNN-AutoEncoder is connected to DNN-T, and DNN-T is connected to the fully connected decoder of the DNN-AutoEncoder;
Step a402: input an original ultrasound image to the neural network system shown in FIG. 18; the output of the neural network system is then the optimized ultrasound imaging system parameter vector (i.e., the optimized preset parameter vector).
Embodiment 5: the technical solution is divided into a training phase and an application phase, executed in order; the steps of each phase are as follows:
Training phase:
Step 501: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 2000 images) and a preset parameter sample set OP (containing 2000 sets of preset parameters);
Step 502: obtain the optimized image sample set EI corresponding to OI (containing 2000 images consistent in content with each image in OI but of better quality);
Step 503: take the generator network part of a DCGAN (deep convolutional generative adversarial network);
Step 504: the DCGAN generator network contains N convolutional layers; input the OI samples and the EI samples to the generator network in turn, and take the outputs of the n-th layer (n ∈ [1, N]) as OIO and EIO respectively;
Step 505: let OIO and EIO obtained in step 504 each consist of m matrices; vectorize all m matrices and merge them into the matrices MO (corresponding to OIO) and ME (corresponding to EIO) respectively;
Step 506: train this DCGAN generator network with loss = 1/m(OIO-EIO)^2+(MO^2-ME^2)^2 as the optimization objective until convergence, as shown in FIG. 19; loss is the loss function (a code sketch of this objective follows);
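A hedged sketch of how the step 504-506 objective could be implemented in PyTorch: tap the n-th layer of a stand-in generator for both OI and EI, form OIO/EIO and their vectorized counterparts MO/ME, and fine-tune with loss = 1/m(OIO-EIO)^2+(MO^2-ME^2)^2, here reduced to scalars with means. The generator stack, the choice n = 4, and the reductions are assumptions.

```python
import torch
import torch.nn as nn

gen_layers = nn.Sequential(            # stand-in generator conv stack
    nn.Conv2d(1, 16, 3, 1, 1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, 1, 1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, 1, 1))

n = 4                                  # tap the output of the n-th layer
opt = torch.optim.Adam(gen_layers.parameters(), lr=1e-4)
OI = torch.rand(8, 1, 64, 64)          # original images (placeholder)
EI = torch.rand(8, 1, 64, 64)          # optimized images (placeholder)

for step in range(500):
    OIO = gen_layers[:n](OI)           # n-th layer features for OI
    EIO = gen_layers[:n](EI)           # n-th layer features for EI
    m = OIO.shape[1]                   # feature maps per image ("m matrices")
    # Vectorize each feature matrix and stack the vectors as rows.
    MO = OIO.reshape(-1, OIO.shape[2] * OIO.shape[3])
    ME = EIO.reshape(-1, EIO.shape[2] * EIO.shape[3])
    loss = ((OIO - EIO) ** 2).mean() / m + ((MO ** 2 - ME ** 2) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```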
Step 507: use OP and OI to train the multimodal DBM (deep Boltzmann machine) shown in FIG. 20 to convergence, thereby obtaining a trained multimodal DBM;
the multimodal DBM comprises a convolutional DBM, an ordinary DBM, and a shared hidden layer linking the convolutional DBM and the ordinary DBM; OI is input to the convolutional DBM, OP to the ordinary DBM, and the shared hidden layer establishes the relationship between the OI and OP information;
Application phase:
Step a501: construct the neural network system shown in FIG. 21 from the DCGAN generator network trained in step 506 of the training phase and the multimodal DBM trained in step 507;
the DCGAN generator network is connected to the convolutional DBM of the multimodal DBM;
Step a502: input an ultrasound image to the DCGAN generator network of the neural network system shown in FIG. 21; the vector obtained at the ordinary DBM output of the system is the optimized ultrasound imaging system parameter vector (i.e., the optimized preset parameter vector).
Embodiment 6: the technical solution is divided into a training phase and an application phase, executed in order; the steps of each phase are as follows:
Training phase:
Step 601: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 2000 images) and a preset parameter sample set OP (containing 2000 sets of preset parameters);
Step 602: obtain the optimized image sample set EI corresponding to OI (containing 2000 images consistent in content with each image in OI but of better quality);
Step 603: take the convolutional network part of a VGG network; the VGG network is one of the network structures of deep learning;
Step 604: the VGG convolutional network contains N layers; input the OI samples and the EI samples to this convolutional network in turn, and take the outputs of the n-th layer (n ∈ [1, N]) as OIO and EIO respectively;
Step 605: let OIO and EIO obtained in step 604 each consist of m matrices; vectorize all m matrices and arrange each vectorized matrix as a row, merging them into the matrices MO (corresponding to OIO) and ME (corresponding to EIO) respectively, as shown in FIG. 22;
Step 606: train a deconvolution network with OP as input and (OIO-EIO)^2 and (MO^2-ME^2)^2 as outputs, as shown in FIG. 23 (a sketch follows);
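A hedged sketch of the step 606 deconvolution network: a transposed-convolution decoder that maps a preset vector to two heads, one predicting the feature-map target (OIO-EIO)^2 and one predicting the vector target (MO^2-ME^2)^2. The feature-map sizes, the two-head layout, and the placeholder targets are assumptions, not specified in the patent.

```python
import torch
import torch.nn as nn

class DeconvNet(nn.Module):
    def __init__(self, n_params=12, map_hw=16, vec_dim=256):
        super().__init__()
        self.fc = nn.Linear(n_params, 32 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),  # 4x4 -> 8x8
            nn.ConvTranspose2d(16, 8, 4, 2, 1), nn.ReLU())   # 8x8 -> 16x16
        self.head_map = nn.Conv2d(8, 1, 3, 1, 1)  # predicts (OIO-EIO)^2
        self.head_vec = nn.Sequential(            # predicts (MO^2-ME^2)^2
            nn.Flatten(), nn.Linear(8 * map_hw * map_hw, vec_dim))
    def forward(self, p):
        x = self.fc(p).view(-1, 32, 4, 4)
        x = self.deconv(x)
        return self.head_map(x), self.head_vec(x)  # two outputs of FIG. 23

net = DeconvNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
OP = torch.rand(2000, 12)             # preset parameter samples (placeholder)
t_map = torch.rand(2000, 1, 16, 16)   # placeholder (OIO-EIO)^2 targets
t_vec = torch.rand(2000, 256)         # placeholder (MO^2-ME^2)^2 targets

for _ in range(300):
    out_map, out_vec = net(OP)
    loss = nn.functional.mse_loss(out_map, t_map) \
         + nn.functional.mse_loss(out_vec, t_vec)
    opt.zero_grad(); loss.backward(); opt.step()
```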
Application phase:
Step a601: input the ultrasound system preset parameter vector into the deconvolution network trained in step 606 of the training phase; with the network weights fixed, optimize the preset parameter values at the network input with the objective that the sum of the two outputs of the deconvolution network be 0, until the network converges; the modified preset parameter vector at the network input upon convergence is the optimized ultrasound imaging system parameter vector (i.e., the optimized preset parameter vector); the process is shown in FIG. 24 (a sketch of this input optimization follows).
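The application phase inverts the trained network. A sketch (reusing net from the sketch above) could look like this: freeze the weights, make the preset vector a trainable input, and minimize the squared sum of the two outputs so that it is driven toward 0; the modified input is then taken as the optimized preset vector.

```python
import torch

for q in net.parameters():
    q.requires_grad_(False)                # network weights stay fixed

p = torch.rand(1, 12, requires_grad=True)  # preset vector to be optimized
opt = torch.optim.Adam([p], lr=1e-2)

for step in range(500):
    out_map, out_vec = net(p)
    total = out_map.sum() + out_vec.sum()  # sum of the two outputs
    loss = total ** 2                      # drive the sum toward 0
    opt.zero_grad(); loss.backward(); opt.step()

ep = p.detach()  # the modified input is the optimized preset vector
```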
Embodiment 7: the technical solution is divided into a training phase and an application phase, executed in order; the steps of each phase are as follows:
Training phase:
Step 701: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI (containing 2000 images) and a preset parameter sample set OP (containing 2000 sets of preset parameters);
Step 702: obtain the optimized image sample set EI corresponding to OI (containing 2000 images consistent in content with each image in OI but of better quality);
Step 703: take the convolutional network part of a LeNet network; the LeNet network is one of the network structures of deep learning;
Step 704: input the OI samples and the EI samples to the LeNet convolutional network in turn, and take the outputs of the last layer as OIO and EIO respectively; the OIO and EIO obtained each consist of m matrices, as shown in FIG. 25;
Step 705: train a deconvolution network with OP as input and the corresponding res = OIO - EIO as output, as shown in FIG. 26; res is the difference between the matrices OIO and EIO;
Application phase: input the ultrasound system preset parameter vector into the deconvolution network trained in step 705 of the training phase; with the network weights fixed, optimize the preset parameter values at the network input with the objective that the deconvolution network output be 0, until the network converges; the modified parameter vector at the network input upon convergence is the optimized ultrasound imaging system parameter vector (i.e., the optimized preset parameter vector); the process is shown in FIG. 27.
Finally, it should be noted that the above specific embodiments are merely illustrative of the technical solution of the present invention and are not limiting. Although the present invention has been described in detail with reference to examples, those of ordinary skill in the art should understand that modifications or equivalent substitutions may be made to the technical solution of the present invention without departing from its spirit and scope, all of which shall be covered by the scope of the claims of the present invention.

Claims (8)

  1. A method for optimizing ultrasound imaging system parameters based on deep learning, characterized in that it comprises the following steps:
    Step 1: collect samples for training a neural network, the samples comprising ultrasound image samples I and the corresponding ultrasound imaging system parameter vector samples P used by the ultrasound imaging system when acquiring those ultrasound image samples;
    Step 2: build a neural network model and train the neural network to convergence using the samples collected in step 1, obtaining a trained neural network system onn;
    Step 3: input the original ultrasound imaging system parameter vector p or an original ultrasound image into the neural network system onn trained in step 2; the parameters obtained at the output of onn are the optimized ultrasound imaging system parameter vector ep = onn(p).
  2. The method for optimizing ultrasound imaging system parameters based on deep learning according to claim 1, characterized in that the method specifically comprises:
    Training phase:
    Step 101: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the ultrasound image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI and a preset parameter sample set OP;
    Step 102: obtain the optimized image sample set EI corresponding to OI;
    Step 103: use OI and EI to train a DCGAN until the generator network of the DCGAN can output the corresponding optimized image given an original ultrasound image, thereby obtaining a trained DCGAN;
    the DCGAN comprises a generator network and a discriminator network; the OI samples are input to the generator network, which generates an image corresponding to each OI sample, and the discriminator network then compares the image generated by the generator network against the corresponding EI sample for consistency;
    Step 104: use OP and OI to train a multimodal DBM to convergence, thereby obtaining a trained multimodal DBM;
    the multimodal DBM comprises a convolutional DBM, an ordinary DBM, and a shared hidden layer linking the convolutional DBM and the ordinary DBM; OI is input to the convolutional DBM, OP to the ordinary DBM, and the shared hidden layer establishes the relationship between the OI and OP information;
    Application phase:
    Step a101: take the generator network of the DCGAN trained in step 103 of the training phase and the multimodal DBM trained in step 104 of the training phase to form an artificial neural network system; the output of the generator network is connected to the input of the convolutional DBM;
    Step a102: input an original ultrasound image to the generator network input (the image end); the parameter vector end of the ordinary DBM outputs the optimized ultrasound imaging system parameter vector.
  3. The method for optimizing ultrasound imaging system parameters based on deep learning according to claim 1, characterized in that the method specifically comprises:
    Training phase:
    Step 201: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the ultrasound image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI and a preset parameter sample set OP;
    Step 202: obtain the optimized image sample set EI corresponding to OI;
    Step 203: use the OI sample set and the OP sample set to train a multimodal DBM to convergence, thereby obtaining a trained multimodal DBM;
    the multimodal DBM comprises a convolutional DBM, an ordinary DBM, and a shared hidden layer linking the convolutional DBM and the ordinary DBM; OI is input to the convolutional DBM, OP to the ordinary DBM, and the shared hidden layer establishes the relationship between the OI and OP information;
    Step 204: input the sample set EI to the convolutional DBM input of the multimodal DBM trained in step 203; the result output by the multimodal DBM at the preset parameter end (the parameter vector end of the ordinary DBM) is the corresponding optimized preset parameter vector EP;
    Step 205: train a fully connected neural network DNN using OP as input and EP as output;
    Application phase:
    Step a201: input a preset parameter vector to the input of the trained fully connected neural network DNN obtained in step 205 of the training phase; the vector obtained at the output is the optimized ultrasound imaging system parameter vector.
  4. The method for optimizing ultrasound imaging system parameters based on deep learning according to claim 1, characterized in that the method specifically comprises:
    Training phase:
    Step 301: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the ultrasound image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI and a preset parameter sample set OP;
    Step 302: obtain the optimized image sample set EI corresponding to OI;
    Step 303: train a fully connected autoencoder DNN-AutoEncoder using the preset parameter sample set OP;
    the fully connected autoencoder comprises a cascaded fully connected encoder and fully connected decoder; the fully connected encoder compresses high-dimensional input information into a low-dimensional space, and the fully connected decoder converts the compressed low-dimensional information back to the original high-dimensional space;
    Step 304: train a convolutional autoencoder CNN-AutoEncoder using the image sample set OI;
    the convolutional autoencoder comprises a cascaded convolutional encoder and convolutional decoder; the convolutional encoder compresses high-dimensional input information into a low-dimensional space, and the convolutional decoder converts the compressed low-dimensional information back to the original high-dimensional space;
    Step 305: input OI to the convolutional encoder of the CNN-AutoEncoder trained in step 304 to obtain its output MI, and input OP to the fully connected encoder of the DNN-AutoEncoder of step 303 to obtain its output MP; train the fully connected neural network DNN-T using MI as input and MP as output;
    Step 306: the convolutional encoder part of the CNN-AutoEncoder, the fully connected decoder part of the DNN-AutoEncoder, and DNN-T constitute a neural network system,
    wherein the convolutional encoder part of the CNN-AutoEncoder is connected to DNN-T, and DNN-T is connected to the fully connected decoder part of the DNN-AutoEncoder;
    input the EI sample set to the convolutional encoder end of the CNN-AutoEncoder of this neural network system, and obtain the optimized preset parameter sample set EP at the output of the fully connected decoder of the DNN-AutoEncoder;
    Step 307: use the preset parameter sample set OP obtained in step 301 and the optimized preset parameter sample set EP obtained in step 306 to train a fully connected neural network DNN until the network converges;
    Application phase:
    Step a301: input a preset parameter vector to the DNN obtained in step 307 of the training phase; the optimized ultrasound imaging system parameter vector is obtained at the output.
  5. The method for optimizing ultrasound imaging system parameters based on deep learning according to claim 1, characterized in that the method specifically comprises:
    Training phase:
    Step 401: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the ultrasound image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI and a preset parameter sample set OP;
    Step 402: obtain the optimized image sample set EI corresponding to OI;
    Step 403: use the sample sets OI and EI to train a DCGAN until the generator network of the DCGAN can output the corresponding optimized image given an original ultrasound image, thereby obtaining a trained DCGAN;
    the DCGAN comprises a generator network and a discriminator network; the OI samples are input to the generator network, which generates an image corresponding to each OI sample, and the discriminator network then compares the generated image against the corresponding EI sample for consistency;
    Step 404: train a fully connected autoencoder DNN-AutoEncoder using the preset parameter sample set OP;
    the fully connected autoencoder comprises a cascaded fully connected encoder and fully connected decoder; the fully connected encoder compresses high-dimensional input information into a low-dimensional space, and the fully connected decoder converts the compressed low-dimensional information back to the original high-dimensional space;
    Step 405: train a convolutional autoencoder CNN-AutoEncoder using the image sample set OI;
    the convolutional autoencoder comprises a cascaded convolutional encoder and convolutional decoder; the convolutional encoder compresses high-dimensional input information into a low-dimensional space, and the convolutional decoder converts the compressed low-dimensional information back to the original high-dimensional space;
    Step 406: input OI to the convolutional encoder of the CNN-AutoEncoder trained in step 405 to obtain its output MI, and input OP to the fully connected encoder of the DNN-AutoEncoder of step 404 to obtain its output MP; train the fully connected neural network DNN-T using MI as input and MP as output;
    Application phase:
    Step a401: the generator network of the DCGAN trained in step 403, the fully connected decoder of the DNN-AutoEncoder trained in step 404, and the convolutional encoder of the CNN-AutoEncoder trained in step 405 constitute, together with DNN-T, a neural network system;
    in the neural network system, the generator network of the DCGAN is connected to the convolutional encoder of the CNN-AutoEncoder, the convolutional encoder of the CNN-AutoEncoder is connected to DNN-T, and DNN-T is connected to the fully connected decoder of the DNN-AutoEncoder;
    Step a402: input an original ultrasound image to the neural network system formed in step a401; the output of the neural network system is then the optimized ultrasound imaging system parameter vector.
  6. The method for optimizing ultrasound imaging system parameters based on deep learning according to claim 1, characterized in that the method specifically comprises:
    Training phase:
    Step 501: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the ultrasound image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI and a preset parameter sample set OP;
    Step 502: obtain the optimized image sample set EI corresponding to OI;
    Step 503: take the generator network part of a DCGAN;
    Step 504: the generator network of the DCGAN contains N convolutional layers; input the OI samples and the EI samples to the generator network in turn, and take the outputs of the n-th layer (n ∈ [1, N]) as OIO and EIO respectively;
    Step 505: let OIO and EIO obtained in step 504 each consist of m matrices; vectorize all m matrices and merge them into the matrices MO and ME respectively;
    Step 506: train this DCGAN generator network with loss = 1/m(OIO-EIO)^2+(MO^2-ME^2)^2 as the optimization objective until convergence; loss is the loss function;
    Step 507: use OP and OI to train a multimodal DBM to convergence, thereby obtaining a trained multimodal DBM;
    the multimodal DBM comprises a convolutional DBM, an ordinary DBM, and a shared hidden layer linking the convolutional DBM and the ordinary DBM; OI is input to the convolutional DBM, OP to the ordinary DBM, and the shared hidden layer establishes the relationship between the OI and OP information;
    Application phase:
    Step a501: construct a neural network system from the DCGAN generator network trained in step 506 of the training phase and the multimodal DBM trained in step 507;
    in the neural network system, the DCGAN generator network is connected to the convolutional DBM of the multimodal DBM;
    Step a502: input an ultrasound image to the DCGAN generator network of the above neural network system; the vector obtained at the ordinary DBM output of the system is the optimized ultrasound imaging system parameter vector.
  7. The method for optimizing ultrasound imaging system parameters based on deep learning according to claim 1, characterized in that the method specifically comprises:
    Training phase:
    Step 601: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the ultrasound image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI and a preset parameter sample set OP;
    Step 602: obtain the optimized image sample set EI corresponding to OI;
    Step 603: take the convolutional network part of a VGG network;
    Step 604: the VGG convolutional network contains N layers; input the OI samples and the EI samples to this convolutional network in turn, and take the outputs of the n-th layer (n ∈ [1, N]) as OIO and EIO respectively;
    Step 605: let OIO and EIO obtained in step 604 each consist of m matrices; vectorize all m matrices and arrange each vectorized matrix as a row, merging them into the matrices MO and ME respectively;
    Step 606: train a deconvolution network with OP as input and (OIO-EIO)^2 and (MO^2-ME^2)^2 as outputs;
    Application phase:
    Step a601: input the ultrasound system preset parameter vector into the deconvolution network trained in step 606 of the training phase; with the network weights fixed, optimize the preset parameter values at the network input with the objective that the sum of the two outputs of the deconvolution network be 0, until the network converges; the modified preset parameter vector at the network input upon convergence is the optimized ultrasound imaging system parameter vector.
  8. The method for optimizing ultrasound imaging system parameters based on deep learning according to claim 1, characterized in that the method specifically comprises:
    Training phase:
    Step 701: randomly select preset values of the ultrasound imaging system parameters on the ultrasound device; each time a preset is selected, acquire an ultrasound image under that preset, save the ultrasound image, and record the preset parameters used when acquiring it, thereby obtaining an image sample set OI and a preset parameter sample set OP;
    Step 702: obtain the optimized image sample set EI corresponding to OI;
    Step 703: take the convolutional network part of a LeNet network; the LeNet network is one of the network structures of deep learning;
    Step 704: input the OI samples and the EI samples to the LeNet convolutional network in turn, and take the outputs of the last layer as OIO and EIO respectively;
    Step 705: train a deconvolution network with OP as input and the corresponding res = OIO - EIO as output; res is the difference between the matrices OIO and EIO;
    Application phase: input the ultrasound system preset parameter vector into the deconvolution network trained in step 705 of the training phase; with the network weights fixed, optimize the preset parameter values at the network input with the objective that the deconvolution network output be 0, until the network converges; the modified parameter vector at the network input upon convergence is the optimized ultrasound imaging system parameter vector.
PCT/CN2018/093561 2017-11-24 2018-06-29 Method for optimizing ultrasound imaging system parameters based on deep learning WO2019100718A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US16/766,643 US11564661B2 (en) 2017-11-24 2018-06-29 Method for optimizing ultrasonic imaging system parameter based on deep learning
EP18880759.8A EP3716000A4 (en) 2017-11-24 2018-06-29 PROCESS FOR OPTIMIZING THE PARAMETERS OF ULTRASONIC IMAGING SYSTEMS BASED ON DEEP LEARNING

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711224993.0A CN109833061B (zh) 2017-11-24 2017-11-24 基于深度学习的优化超声成像系统参数的方法
CN201711224993.0 2017-11-24

Publications (1)

Publication Number Publication Date
WO2019100718A1 (zh)

Family

ID=66631311

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/093561 WO2019100718A1 (zh) 2017-11-24 2018-06-29 基于深度学习的优化超声成像系统参数的方法

Country Status (4)

Country Link
US (1) US11564661B2 (zh)
EP (1) EP3716000A4 (zh)
CN (1) CN109833061B (zh)
WO (1) WO2019100718A1 (zh)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102656543B1 (ko) * 2018-03-16 2024-04-12 삼성메디슨 주식회사 Medical imaging apparatus, control method therefor, and computer program
EP3840663A1 (en) * 2018-08-23 2021-06-30 Koninklijke Philips N.V. Biometric measurement and quality assessment
CN110974305B (zh) * 2019-12-13 2021-04-27 山东大学齐鲁医院 Remote cardiac three-dimensional ultrasound imaging system and method based on deep learning
CN110464380B (zh) * 2019-09-12 2021-10-29 李肯立 Method for quality control of ultrasound section images of fetuses in mid and late pregnancy
US11977982B2 (en) 2020-07-02 2024-05-07 International Business Machines Corporation Training of oscillatory neural networks
US20220163665A1 (en) * 2020-11-24 2022-05-26 Olympus NDT Canada Inc. Techniques to reconstruct data from acoustically constructed images using machine learning
US11933765B2 (en) * 2021-02-05 2024-03-19 Evident Canada, Inc. Ultrasound inspection techniques for detecting a flaw in a test object
CN112946081B (zh) * 2021-02-09 2023-08-18 武汉大学 Ultrasonic imaging method based on intelligent extraction and fusion of multiple defect features
KR102550262B1 (ko) * 2021-02-22 2023-07-03 광주과학기술원 Ultrasound imaging apparatus using random interference and method therefor
CN113509208B (zh) * 2021-09-14 2021-11-30 西南石油大学 Phase-constraint-based reconstruction method for ultrahigh-speed ultrasound imaging


Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030045797A1 (en) 2001-08-28 2003-03-06 Donald Christopher Automatic optimization of doppler display parameters
US7627386B2 (en) 2004-10-07 2009-12-01 Zonaire Medical Systems, Inc. Ultrasound imaging system parameter optimization via fuzzy logic
US10667790B2 (en) * 2012-03-26 2020-06-02 Teratech Corporation Tablet ultrasound system
US9918701B2 (en) * 2014-09-03 2018-03-20 Contextvision Ab Methods and systems for automatic control of subjective image quality in imaging of objects
CN104572940B (zh) * 2014-12-30 2017-11-21 中国人民解放军海军航空工程学院 Automatic image annotation method based on deep learning and canonical correlation analysis
CN104778659A (zh) * 2015-04-15 2015-07-15 杭州电子科技大学 Single-frame image super-resolution reconstruction method based on deep learning
CN104933446B (zh) * 2015-07-15 2018-09-18 福州大学 Method for validating feature effectiveness in computer-aided diagnosis of breast B-mode ultrasound
CN105574820A (zh) * 2015-12-04 2016-05-11 南京云石医疗科技有限公司 Adaptive ultrasound image enhancement method based on deep learning
WO2017122785A1 (en) * 2016-01-15 2017-07-20 Preferred Networks, Inc. Systems and methods for multimodal generative machine learning
CN105787867B (zh) * 2016-04-21 2019-03-05 华为技术有限公司 Method and apparatus for processing video images based on a neural network algorithm
CN106204449B (zh) * 2016-07-06 2019-09-10 安徽工业大学 Single-image super-resolution reconstruction method based on a symmetric deep network

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100998512 * 2007-01-10 2007-07-18 华中科技大学 Reconstruction method for three-dimensional ultrasound images
CN101664321 * 2009-09-07 2010-03-10 无锡祥生科技有限公司 Ultrasonic diagnostic device with real-time adjustable tissue sound speed and beamforming method thereof
CN101756713 * 2009-09-09 2010-06-30 西安交通大学 Contrast-enhanced ultrasound imaging, perfusion parameter estimation, perfusion parameter functional imaging, and their integration
CN102184330 * 2011-05-09 2011-09-14 周寅 Method for optimizing an intensity-modulated radiotherapy plan based on image features and an intelligent regression model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3716000A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110840482A (zh) * 2019-10-28 2020-02-28 苏州佳世达电通有限公司 Ultrasonic imaging system and method thereof
CN112465786A (zh) * 2020-12-01 2021-03-09 平安科技（深圳）有限公司 Model training method, data processing method, apparatus, client, and storage medium
CN113449737A (zh) * 2021-05-27 2021-09-28 南京大学 Single-probe acoustic imaging method and apparatus based on an autoencoder
CN113449737B (zh) * 2021-05-27 2023-11-17 南京大学 Single-probe acoustic imaging method and apparatus based on an autoencoder

Also Published As

Publication number Publication date
CN109833061A (zh) 2019-06-04
CN109833061B (zh) 2020-08-04
EP3716000A1 (en) 2020-09-30
US11564661B2 (en) 2023-01-31
US20200345330A1 (en) 2020-11-05
EP3716000A4 (en) 2021-09-01

Similar Documents

Publication Publication Date Title
WO2019100718A1 (zh) Method for optimizing ultrasound imaging system parameters based on deep learning
Li et al. 2-channel convolutional 3D deep neural network (2CC3D) for fMRI analysis: ASD classification and feature learning
CN105159111B (zh) Intelligent interactive device control method and system based on artificial intelligence
CN110874842B (zh) Thoracic multi-organ segmentation method based on a cascaded residual fully convolutional network
JP7074460B2 (ja) Image inspection apparatus and method
CN111932461B (zh) Self-learning image super-resolution reconstruction method and system based on a convolutional neural network
CN110458904B (zh) Method and apparatus for generating capsule endoscope images, and computer storage medium
CN112043260B (zh) Electrocardiogram classification method based on local pattern transformation
CN107374657B (zh) Method for correcting CT scan data and CT scanning system
Sengan et al. Images super-resolution by optimal deep AlexNet architecture for medical application: a novel DOCALN
CN116071401A (zh) Method and apparatus for generating virtual CT images based on deep learning
CN114190953A (zh) Training method and system for an EEG signal denoising model for EEG acquisition devices
CN109544488B (zh) Image synthesis method based on a convolutional neural network
CN113989217A (zh) Human eye diopter detection method based on deep learning
CN113838161B (zh) Sparse projection reconstruction method based on graph learning
CN110415182B (zh) Fundus OCT image enhancement method, apparatus, device, and storage medium
CN116452855A (zh) Deep-learning-based wound image classification and laser-assisted therapy method
Huh et al. Unsupervised learning for acoustic shadowing artifact removal in ultrasound imaging
CN112950573A (zh) Medical image detection method and related apparatus, device, and storage medium
Fan et al. Group feature learning and domain adversarial neural network for aMCI diagnosis system based on EEG
CN110729033A (zh) Integrated multifunctional pathological diagnosis system
KR102352859B1 (ko) Apparatus and method for classifying presence or absence of heart disease
CN117556194B (zh) Electroencephalogram artifact detection method based on an improved YOLO network
US11361424B2 (en) Neural network-type image processing device, appearance inspection apparatus and appearance inspection method
CN116245756A (zh) Fast and robust medical ultrasound image enhancement method

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018880759

Country of ref document: EP

Effective date: 20200624