CN110074813B - Ultrasonic image reconstruction method and system - Google Patents

Ultrasonic image reconstruction method and system

Info

Publication number
CN110074813B
Authority
CN
China
Prior art keywords
layer
image
network model
training
data
Prior art date
Legal status
Active
Application number
CN201910342064.2A
Other languages
Chinese (zh)
Other versions
CN110074813A (en)
Inventor
陈昕
赵万明
谢辰熙
邢运成
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201910342064.2A priority Critical patent/CN110074813B/en
Publication of CN110074813A publication Critical patent/CN110074813A/en
Application granted granted Critical
Publication of CN110074813B publication Critical patent/CN110074813B/en

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5215 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data
    • A61B 8/5238 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image
    • A61B 8/5261 - Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of medical diagnostic data for combining image data of patient, e.g. merging several images from different acquisition modes into one image combining images from different diagnostic modalities, e.g. ultrasound and X-ray

Abstract

The invention discloses an ultrasound image reconstruction method and system. The method for building the image reconstruction network model comprises the following steps: acquiring a training sample set, where the training sample set comprises a plurality of sample pairs and each sample pair comprises a set of ultrasound radio frequency (RF) training data derived from a natural image and/or a medical image together with the corresponding training gray-scale image; constructing a generative adversarial network (GAN) model comprising a generator and a discriminator; and training the GAN model with the training sample set until it reaches a Nash equilibrium, taking the generator in the Nash-equilibrium state as the image reconstruction network model for ultrasound image reconstruction. Because natural images and medical images, whose resolution is far higher than that of ultrasound images, serve as the data basis for training the GAN, and the GAN automatically learns and optimizes its parameters to obtain the mapping from raw data to ultrasound images, the imaging quality can be effectively improved.

Description

Ultrasonic image reconstruction method and system
Technical Field
The invention relates to the field of imaging, in particular to an ultrasonic image reconstruction method and system.
Background
Ultrasound imaging is a convenient, real-time, non-destructive imaging method. It forms images from the acoustic-impedance differences between tissues of the examined object that are carried in the ultrasound echo signal. The conventional ultrasound imaging process has two steps: first, an ultrasound radio frequency (RF) signal is acquired from the ultrasound transducer; second, an ultrasound image is generated from the RF signal, i.e. the ultrasound image is reconstructed. The ultrasound RF signal is the raw signal of the imaging process and is easily disturbed, which degrades image quality. Moreover, conventional ultrasound image reconstruction involves multiple steps, each requiring many parameters. In existing methods these parameters are mostly set by experience, the effects of the parameters of different steps interact with each other, and no effective method exists for planning the parameters as a whole, so the final imaging quality is poor.
Disclosure of Invention
The invention aims to provide an ultrasonic image reconstruction method and system, which can effectively improve the imaging quality.
In order to achieve the purpose, the invention provides the following scheme:
a method of ultrasound image reconstruction, the method comprising:
acquiring ultrasonic radio frequency signal data of an imaging target;
inputting the ultrasonic radio frequency signal data into an image reconstruction network model to obtain an ultrasound image of the imaging target; the input of the image reconstruction network model is ultrasonic radio frequency signal data and its output is an ultrasound image; the image reconstruction network model is built on a generative adversarial network (GAN), a deep neural network architecture; the method for building the image reconstruction network model comprises the following steps:
acquiring a training sample set; the training sample set comprises a plurality of sample pairs, and each sample pair comprises a set of ultrasonic radio frequency training data and the training gray-scale image corresponding to that data; the ultrasonic radio frequency training data are ultrasonic radio frequency data obtained from natural images and/or medical images of a sample; the medical image comprises at least one of a CT image and a magnetic resonance image;
constructing a generative adversarial network model, wherein the generative adversarial network model comprises a generator and a discriminator;
training the generative adversarial network model with the training sample set until the model reaches a Nash equilibrium, and taking the generator in the Nash-equilibrium state as the image reconstruction network model.
Optionally, the method for acquiring the ultrasound radio frequency training data includes:
acquiring a natural image and/or a medical image of a sample;
and inputting the natural image and/or the medical image into ultrasonic sound field simulation software to obtain ultrasonic radio frequency training data of the sample.
Optionally, the generator comprises a seven-layer structure: the first layer is a fully connected layer; the second layer is a batch normalization layer; the third layer is a fully connected layer; the fourth layer is a batch normalization layer; the fifth layer is a two-dimensional deconvolution layer with a 4×4 convolution kernel and a stride of 2; the sixth layer is a batch normalization layer; the seventh layer is a deconvolution layer with a 4×4 convolution kernel, a stride of 2 and a sigmoid activation function.
Optionally, the discriminator comprises a five-layer structure: the first and second layers are two-dimensional convolution layers with 4×4 convolution kernels, a stride of 2 and a leaky rectified linear unit (Leaky ReLU) activation function; the third, fourth and fifth layers are fully connected layers.
An ultrasound image reconstruction system, the system comprising:
the radio frequency data acquisition module is used for acquiring ultrasonic radio frequency signal data of an imaging target;
the image reconstruction module is used for inputting the ultrasonic radio frequency signal data into an image reconstruction network model to obtain an ultrasound image of the imaging target; the input of the image reconstruction network model is ultrasonic radio frequency signal data and its output is an ultrasound image; the image reconstruction network model is built on a generative adversarial network (GAN), a deep neural network architecture; the subsystem for building the image reconstruction network model comprises:
the sample set acquisition module, used for acquiring a training sample set; the training sample set comprises a plurality of sample pairs, and each sample pair comprises a set of ultrasonic radio frequency training data and the training gray-scale image corresponding to that data; the ultrasonic radio frequency training data are ultrasonic radio frequency data obtained from natural images and/or medical images of a sample; the medical image comprises at least one of a CT image and a magnetic resonance image;
the system comprises a confrontation network construction module, a judgment module and a data processing module, wherein the confrontation network construction module is used for constructing a generated confrontation network model, and the generated confrontation network model comprises a generator and a discriminator;
and the training module is used for training the generated confrontation network model by using the training sample set, so that the generated confrontation network model reaches Nash balance, and the generator in the Nash balance state is used as an image reconstruction network model.
Optionally, the sample set obtaining module includes:
a sample image acquisition unit for acquiring a natural image and/or a medical image of a sample;
and the training data determining unit is used for inputting the natural image and/or the medical image into ultrasonic sound field simulation software to obtain ultrasonic radio frequency training data of the sample.
Optionally, the generator comprises a seven-layer structure: the first layer is a fully connected layer; the second layer is a batch normalization layer; the third layer is a fully connected layer; the fourth layer is a batch normalization layer; the fifth layer is a two-dimensional deconvolution layer with a 4×4 convolution kernel and a stride of 2; the sixth layer is a batch normalization layer; the seventh layer is a deconvolution layer with a 4×4 convolution kernel, a stride of 2 and a sigmoid activation function.
Optionally, the discriminator comprises a five-layer structure: the first and second layers are two-dimensional convolution layers with 4×4 convolution kernels, a stride of 2 and a leaky rectified linear unit (Leaky ReLU) activation function; the third, fourth and fifth layers are fully connected layers.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects:
In the ultrasound image reconstruction method and system provided by the invention, the method for building the image reconstruction network model comprises: acquiring a training sample set, where the training sample set comprises a plurality of sample pairs and each sample pair comprises a set of ultrasonic radio frequency training data and the corresponding training gray-scale image, the ultrasonic radio frequency training data being ultrasonic radio frequency data obtained from natural images and/or medical images of a sample, and the medical image including at least one of a CT image and a magnetic resonance image; constructing a generative adversarial network model comprising a generator and a discriminator; and training the generative adversarial network model with the training sample set until it reaches a Nash equilibrium, taking the generator in the Nash-equilibrium state as the image reconstruction network model for ultrasound image reconstruction. The method therefore uses natural images and medical images, whose resolution is far higher than that of ultrasound images, as the data basis for training the generative adversarial network; the network then learns and optimizes its parameters automatically to obtain the mapping from raw data to ultrasound images, so the imaging quality can be effectively improved.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of an ultrasound image reconstruction method according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for establishing an image reconstruction network model according to an embodiment of the present invention;
fig. 3 is a block diagram of an ultrasound image reconstruction system according to an embodiment of the present invention;
fig. 4 is a structural block diagram of a subsystem for building an image reconstruction network model according to an embodiment of the present invention;
FIG. 5 is a flow chart of a method for acquiring ultrasound RF training data according to an embodiment of the present invention;
FIG. 6 is a network architecture diagram of a generator provided by an embodiment of the present invention;
FIG. 7 is a network structure diagram of a discriminator provided in an embodiment of the present invention;
fig. 8 is a flowchart of training the generative adversarial network model according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.
The invention aims to provide an ultrasonic image reconstruction method and system, which can effectively improve the imaging quality.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a flowchart of an ultrasound image reconstruction method according to an embodiment of the present invention. As shown in fig. 1, a method for reconstructing an ultrasound image, the method comprising:
step 101: acquiring ultrasonic radio frequency signal data of an imaging target.
Step 102: inputting the ultrasonic radio frequency signal data into an image reconstruction network model to obtain an ultrasound image of the imaging target; the input of the image reconstruction network model is ultrasonic radio frequency signal data and its output is an ultrasound image; the image reconstruction network model is built on a generative adversarial network (GAN), a deep neural network architecture.
Fig. 2 is a flowchart of a method for establishing an image reconstruction network model according to an embodiment of the present invention. As shown in fig. 2, the method for building the image reconstruction network model includes:
step 201: acquiring a training sample set; the training sample set comprises a plurality of sample pairs, and each sample pair comprises a group of ultrasonic radio frequency training data and a training gray scale image corresponding to the ultrasonic radio frequency training data; wherein the ultrasound radio frequency training data is ultrasound radio frequency data obtained from natural images and/or medical images of a sample; the medical image includes at least one of a CT image and a magnetic resonance image.
Step 202: a generative confrontation network model is constructed, which includes a generator and an arbiter. The generator comprises a seven-layer structure; wherein the first layer is a fully connected layer; the second layer is a batch normalization layer; the third layer is a full connection layer; the fourth layer is a batch normalization layer; the fifth layer is a two-dimensional deconvolution layer, the size of a convolution kernel is 4 multiplied by 4, the step length is 2, and no activation function is used; the sixth layer is a batch normalization layer; the seventh layer is an deconvolution layer, the size of a convolution kernel is 4 multiplied by 4, the step size is 2, and the activation function is a sigmoid function. The discriminator comprises a 5-layer structure; the first layer and the second layer are two-dimensional convolution layers, the size of a convolution kernel is 4 multiplied by 4, the step length is 2, and the activation function is a leakage correction linear unit (Leaky ReLU) function; the third layer, the fourth layer and the fifth layer are all connecting layers.
Step 203: training the generated confrontation network model by using the training sample set to enable the generated confrontation network model to reach Nash balance, and taking the generator in a Nash balance state as an image reconstruction network model.
Specifically, the method for acquiring the ultrasonic radio frequency training data comprises the following steps:
acquiring a natural image and/or a medical image of a sample;
and inputting the natural image and/or the medical image into ultrasonic sound field simulation software to obtain ultrasonic radio frequency training data of the sample.
Fig. 3 is a block diagram of an ultrasound image reconstruction system according to an embodiment of the present invention. As shown in fig. 3, an ultrasound image reconstruction system, the system comprising:
a radio frequency data obtaining module 301, configured to obtain ultrasonic radio frequency signal data of an imaging target.
An image reconstruction module 302, configured to input the ultrasonic radio frequency signal data into an image reconstruction network model to obtain an ultrasound image of the imaging target; the input of the image reconstruction network model is ultrasonic radio frequency signal data and its output is an ultrasound image; the image reconstruction network model is built on a generative adversarial network (GAN), a deep neural network architecture.
Fig. 4 is a structural block diagram of a subsystem for building an image reconstruction network model according to an embodiment of the present invention. As shown in fig. 4, the subsystem for building the image reconstruction network model includes:
a sample set obtaining module 401, configured to obtain a training sample set; the training sample set comprises a plurality of sample pairs, and each sample pair comprises a group of ultrasonic radio frequency training data and a training gray scale image corresponding to the ultrasonic radio frequency training data; wherein the ultrasound radio frequency training data is ultrasound radio frequency data obtained from natural images and/or medical images of a sample; the medical image includes at least one of a CT image and a magnetic resonance image.
A confrontation network construction module 402 for constructing a generative confrontation network model, the generative confrontation network model comprising a generator and an arbiter. The generator comprises a seven-layer structure; wherein the first layer is a fully connected layer; the second layer is a batch normalization layer; the third layer is a full connection layer; the fourth layer is a batch normalization layer; the fifth layer is a two-dimensional deconvolution layer, the size of a convolution kernel is 4 multiplied by 4, the step length is 2, and no activation function is used; the sixth layer is a batch normalization layer; the seventh layer is an deconvolution layer, the size of a convolution kernel is 4 multiplied by 4, the step size is 2, and the activation function is a sigmoid function. The discriminator comprises a 5-layer structure; the first layer and the second layer are two-dimensional convolution layers, the size of a convolution kernel is 4 multiplied by 4, the step length is 2, and the activation function is a Leaky ReLU function; the third layer, the fourth layer and the fifth layer are all connecting layers.
A training module 403, configured to train the generated confrontation network model with the training sample set, so that the generated confrontation network model reaches nash balance, and use the generator in nash balance state as an image reconstruction network model.
Specifically, the sample set obtaining module 401 includes:
a sample image acquisition unit for acquiring a natural image and/or a medical image of a sample.
And the training data determining unit is used for inputting the natural image and/or the medical image into ultrasonic sound field simulation software to obtain ultrasonic radio frequency training data of the sample.
The specific implementation process of the invention is as follows:
(1) Acquire ultrasound radio frequency training data using ultrasound field simulation software.
In the network training stage, a large amount of sample data is required, and each sample must contain both RF data and the corresponding high-resolution gray-scale image: the RF data serve as the network input, and the corresponding high-resolution gray-scale image serves as its label. To improve the imaging quality of ultrasound image reconstruction, the invention uses two types of images to generate sample data: high-resolution natural images, and high-resolution medical images such as CT images and/or magnetic resonance images (MRI). From these two types of image data, the corresponding RF data can be generated with ultrasound acoustic field simulation software. The simulation software selected in this embodiment is the acoustic field simulation software Field II.
Fig. 5 is a flowchart of acquiring ultrasound RF training data according to an embodiment of the present invention. As shown in fig. 5, using the Field II simulation toolkit, a natural image, CT image or MRI image is used as the template for setting the phantom parameters: the positions and amplitudes of the scattering points in the phantom are set according to the positions and gray values of the pixels in the image, and the ultrasound RF signal data received by the probe after the ultrasound waves interact with the phantom are then obtained by setting the relevant probe parameters and the scanning scheme. Reconstructing images from the simulated RF data and comparing them with the original images demonstrates the effectiveness of this RF data generation method.
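For illustration, the pixel-to-scatterer mapping described above can be sketched as follows. This is only the data-preparation step, written in Python/NumPy; the phantom extent, the number of scatterers and the amplitude scaling are assumed values, and the resulting scatterer positions and amplitudes would then be handed to the Field II simulation (which itself runs in MATLAB), together with the probe parameters and scanning scheme, to obtain the RF channel data.

```python
import numpy as np

def image_to_scatterers(img, width_m=0.04, depth_m=0.06, n_scatterers=50000, seed=0):
    """Turn a gray-scale template image into phantom scatterers.

    img          : 2D array in [0, 1]; rows map to depth, columns to lateral position.
    width_m      : lateral extent of the phantom in metres (assumed value).
    depth_m      : axial extent of the phantom in metres (assumed value).
    n_scatterers : number of random scatterers placed in the phantom (assumed value).
    Returns (positions, amplitudes) forming the scatterer map of a Field II-style simulation.
    """
    rng = np.random.default_rng(seed)
    rows, cols = img.shape

    # Scatter points uniformly over the phantom (y = 0 plane).
    x = (rng.random(n_scatterers) - 0.5) * width_m   # lateral position
    z = rng.random(n_scatterers) * depth_m           # depth
    y = np.zeros(n_scatterers)

    # Look up the image gray value at each scatterer location ...
    col_idx = np.clip(((x / width_m + 0.5) * cols).astype(int), 0, cols - 1)
    row_idx = np.clip((z / depth_m * rows).astype(int), 0, rows - 1)
    gray = img[row_idx, col_idx]

    # ... and use it to set the scattering amplitude: Gaussian weighting gives speckle,
    # while the exponential emphasises bright structures (assumed scaling).
    amplitudes = rng.standard_normal(n_scatterers) * np.exp(gray * 4.0)

    positions = np.stack([x, y, z], axis=1)
    return positions, amplitudes
```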
(2) Construct the generative adversarial network.
The generative adversarial network is composed of a generator and a discriminator: the generator produces simulated data, and the discriminator judges whether its input is real or simulated. The generator continuously optimizes itself so that the discriminator cannot tell its output from real data, while the discriminator optimizes itself to judge more accurately; the two thus form an adversarial relationship.
The two-dimensional RF data are flattened into a one-dimensional vector z; for example, a 4×4 two-dimensional matrix is flattened into a 1×16 vector, and the generator restores high-dimensional simulated data from these low-dimensional RF data. Because high-dimensional data must be recovered from low-dimensional data, deconvolution is used in the network structure to generate the high-dimensional data. Fig. 6 is a network structure diagram of the generator according to an embodiment of the present invention. As shown in fig. 6, the generator consists of 7 layers. The first 4 layers are two fully connected layers, each followed by a batch normalization layer. Layer 5 is a two-dimensional deconvolution layer with a 4×4 convolution kernel, a stride of 2 and no activation function. Layer 6 is a batch normalization layer. Layer 7 is a deconvolution layer with a 4×4 convolution kernel, a stride of 2 and a sigmoid activation function.
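For illustration, the seven-layer generator described above can be sketched in Python (PyTorch). The patent does not fix the tensor sizes, so the RF vector length, the fully connected width, and the 16×16 feature map from which the two deconvolutions upsample to a 64×64 image are assumed, illustrative values.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Seven layers: FC, BN, FC, BN, deconv (no activation), BN, deconv + sigmoid."""

    def __init__(self, rf_len=4096, hidden=1024, feat_ch=64, feat_hw=16):
        super().__init__()
        self.feat_ch, self.feat_hw = feat_ch, feat_hw
        self.fc1 = nn.Linear(rf_len, hidden)                         # layer 1: fully connected
        self.bn1 = nn.BatchNorm1d(hidden)                            # layer 2: batch normalization
        self.fc2 = nn.Linear(hidden, feat_ch * feat_hw * feat_hw)    # layer 3: fully connected
        self.bn2 = nn.BatchNorm1d(feat_ch * feat_hw * feat_hw)       # layer 4: batch normalization
        self.deconv1 = nn.ConvTranspose2d(feat_ch, feat_ch // 2,
                                          kernel_size=4, stride=2, padding=1)  # layer 5: 4x4 deconv, stride 2
        self.bn3 = nn.BatchNorm2d(feat_ch // 2)                      # layer 6: batch normalization
        self.deconv2 = nn.ConvTranspose2d(feat_ch // 2, 1,
                                          kernel_size=4, stride=2, padding=1)  # layer 7: 4x4 deconv, stride 2

    def forward(self, z):
        # z: flattened RF vector, shape (batch, rf_len)
        h = self.bn1(self.fc1(z))
        h = self.bn2(self.fc2(h))
        h = h.view(-1, self.feat_ch, self.feat_hw, self.feat_hw)     # reshape to a 2D feature map
        h = self.bn3(self.deconv1(h))                                 # layer 5 has no activation
        return torch.sigmoid(self.deconv2(h))                        # gray-scale image in [0, 1]
```

With these assumed sizes a 4096-sample RF vector is mapped to a 64×64 gray-scale image; in practice the dimensions would be chosen to match the simulated RF data and the label images.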
The discriminator determines whether its input is a real sample or simulated data produced by the generator, i.e. it estimates the probability that a sample belongs to a given class. Here a real sample is the gray-scale image corresponding to the RF data in the training sample set. In the network structure, two convolutions are followed by fully connected layers, and the output layer finally produces a judgment between 0 and 1, where 1 denotes a real sample and 0 denotes simulated data: the closer the value is to 1, the higher the probability that the input is a real sample, and the closer it is to 0, the higher the probability that the input is simulated. Fig. 7 is a network structure diagram of the discriminator according to an embodiment of the present invention. As shown in fig. 7, the discriminator consists of 5 layers: the first two layers are two-dimensional convolution layers with 4×4 convolution kernels, a stride of 2 and a Leaky ReLU activation function; the last three layers are fully connected layers, the final one outputting the discrimination result.
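For illustration, the five-layer discriminator can be sketched in the same way. The input image size, the channel counts, and the hidden widths and activations of the fully connected layers are assumptions, and a final sigmoid is included so that the output lies between 0 and 1 as described above.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Five layers: two 4x4 stride-2 convolutions with Leaky ReLU, then three fully connected layers."""

    def __init__(self, img_hw=64, ch=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, ch, kernel_size=4, stride=2, padding=1),       # layer 1
            nn.LeakyReLU(0.2),
            nn.Conv2d(ch, ch * 2, kernel_size=4, stride=2, padding=1),  # layer 2
            nn.LeakyReLU(0.2),
        )
        flat = ch * 2 * (img_hw // 4) * (img_hw // 4)
        self.fc = nn.Sequential(
            nn.Linear(flat, 512),   # layer 3
            nn.LeakyReLU(0.2),
            nn.Linear(512, 128),    # layer 4
            nn.LeakyReLU(0.2),
            nn.Linear(128, 1),      # layer 5: outputs the judgment
            nn.Sigmoid(),           # squashed to (0, 1): 1 ~ real sample, 0 ~ simulated data
        )

    def forward(self, img):
        h = self.conv(img)              # img: (batch, 1, img_hw, img_hw)
        return self.fc(h.flatten(1))    # probability that the input is a real sample
```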
(3) Carry out network training with the sample data to obtain the image reconstruction network model.
Fig. 8 is a flowchart of training the generative adversarial network model according to an embodiment of the present invention. As shown in fig. 8, the flattened one-dimensional RF vector is fed into the generator to produce a simulated sample, and the simulated sample and the real sample (the gray-scale image) are then input to the discriminator separately to produce discrimination results. The goal of the generator is to make the simulated sample approach the real sample, i.e. to drive the discrimination result of the simulated sample towards 1; the goal of the discriminator is to distinguish simulated from real samples accurately, i.e. to drive the discrimination result of the simulated sample towards 0 and that of the real sample towards 1. The invention uses the Adam optimization algorithm for training, with a learning rate of 0.0001 for the discriminator and 0.001 for the generator.
The Adam optimization algorithm is a gradient-descent-type method used in deep learning to iteratively approach the model with minimum error. During training, each forward pass yields a loss between the output and the ground truth; the smaller the loss, the better the model.
The generator and the discriminator are trained simultaneously. Through a large number of training iterations the generator produces samples as close as possible to the original ultrasound image, while the discriminator becomes better at distinguishing real data from generated data, until the whole generative adversarial network reaches a Nash equilibrium, i.e. the discriminator's output for both simulated and real data approaches 0.5. This training process yields a generator with reconstruction capability, which is used as the image reconstruction network model for subsequent image reconstruction.
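The adversarial training procedure can be sketched as follows, using the generator and discriminator sketches above. The binary cross-entropy loss, the batching and the data loader are illustrative assumptions; the Adam optimizers with a learning rate of 0.0001 for the discriminator and 0.001 for the generator follow the values given in this embodiment.

```python
import torch
import torch.nn as nn

def train_gan(generator, discriminator, loader, epochs=100, device="cpu"):
    """loader yields (rf_vec, real_img) pairs: flattened RF data and the label gray-scale image."""
    bce = nn.BCELoss()
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)  # discriminator learning rate 0.0001
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)      # generator learning rate 0.001

    for epoch in range(epochs):
        for rf_vec, real_img in loader:
            rf_vec, real_img = rf_vec.to(device), real_img.to(device)
            ones = torch.ones(real_img.size(0), 1, device=device)    # label for real samples
            zeros = torch.zeros(real_img.size(0), 1, device=device)  # label for simulated samples

            # Discriminator step: push real samples towards 1 and simulated samples towards 0.
            fake_img = generator(rf_vec).detach()
            loss_d = bce(discriminator(real_img), ones) + bce(discriminator(fake_img), zeros)
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()

            # Generator step: fool the discriminator, i.e. push simulated samples towards 1.
            fake_img = generator(rf_vec)
            loss_g = bce(discriminator(fake_img), ones)
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()

    # At Nash equilibrium the discriminator outputs roughly 0.5 for both kinds of input;
    # the trained generator is then kept as the image reconstruction network model.
    return generator
```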
(4) Reconstruct images with the image reconstruction network model.
The ultrasonic radio frequency signal data of the imaging target are input into the image reconstruction network model, and the model outputs the ultrasound image of the imaging target.
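As a usage illustration under the same assumptions as the sketches above, reconstruction then reduces to a single forward pass through the trained generator; how the raw RF frame is flattened into the vector expected by the generator is an assumed preprocessing step.

```python
import torch

def reconstruct(generator, rf_data):
    """rf_data: 2D ultrasound RF array for one frame; returns the reconstructed gray-scale image."""
    generator.eval()
    with torch.no_grad():
        z = torch.as_tensor(rf_data, dtype=torch.float32).flatten().unsqueeze(0)  # shape (1, rf_len)
        img = generator(z)                      # shape (1, 1, H, W), values in [0, 1]
    return img.squeeze().cpu().numpy()
```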
The method thus uses high-resolution natural images and medical images as the data basis for training the generative adversarial network; the network then learns and optimizes its parameters automatically to obtain the mapping from raw data to ultrasound images, so the imaging quality can be effectively improved.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (8)

1. A method for reconstructing an ultrasound image, the method comprising:
acquiring ultrasonic radio frequency signal data of an imaging target;
inputting the ultrasonic radio frequency signal data into an image reconstruction network model to obtain an ultrasound image of the imaging target; the input of the image reconstruction network model is ultrasonic radio frequency signal data and its output is an ultrasound image; the image reconstruction network model is built on a generative adversarial network (GAN), a deep neural network architecture; the method for building the image reconstruction network model comprises the following steps:
acquiring a training sample set; the training sample set comprises a plurality of sample pairs, and each sample pair comprises a set of ultrasonic radio frequency training data and the training gray-scale image corresponding to that data; the ultrasonic radio frequency training data are ultrasonic radio frequency data obtained from natural images and/or medical images of a sample; the medical image comprises at least one of a CT image and a magnetic resonance image;
constructing a generative adversarial network model, wherein the generative adversarial network model comprises a generator and a discriminator;
flattening the two-dimensional RF data into a one-dimensional vector z, the generator restoring high-dimensional simulated data from the low-dimensional RF data;
training the generative adversarial network model with the training sample set until the model reaches a Nash equilibrium, and taking the generator in the Nash-equilibrium state as the image reconstruction network model.
2. The method of claim 1, wherein the method of acquiring the ultrasound radio frequency training data comprises:
acquiring a natural image and/or a medical image of a sample;
and inputting the natural image and/or the medical image into ultrasonic sound field simulation software to obtain ultrasonic radio frequency training data of the sample.
3. The ultrasound image reconstruction method of claim 1, wherein the generator comprises a seven-layer structure: the first layer is a fully connected layer; the second layer is a batch normalization layer; the third layer is a fully connected layer; the fourth layer is a batch normalization layer; the fifth layer is a two-dimensional deconvolution layer with a 4×4 convolution kernel and a stride of 2; the sixth layer is a batch normalization layer; the seventh layer is a deconvolution layer with a 4×4 convolution kernel, a stride of 2 and a sigmoid activation function.
4. The ultrasound image reconstruction method of claim 1, wherein the discriminator comprises a five-layer structure: the first and second layers are two-dimensional convolution layers with 4×4 convolution kernels, a stride of 2 and a leaky rectified linear unit (Leaky ReLU) activation function; the third, fourth and fifth layers are fully connected layers.
5. An ultrasound image reconstruction system, the system comprising:
the radio frequency data acquisition module is used for acquiring ultrasonic radio frequency signal data of an imaging target;
the image reconstruction module is used for inputting the ultrasonic radio frequency signal data into an image reconstruction network model to obtain an ultrasound image of the imaging target; the input of the image reconstruction network model is ultrasonic radio frequency signal data and its output is an ultrasound image; the image reconstruction network model is built on a generative adversarial network (GAN), a deep neural network architecture; the subsystem for building the image reconstruction network model comprises:
the sample set acquisition module, used for acquiring a training sample set; the training sample set comprises a plurality of sample pairs, and each sample pair comprises a set of ultrasonic radio frequency training data and the training gray-scale image corresponding to that data; the ultrasonic radio frequency training data are ultrasonic radio frequency data obtained from natural images and/or medical images of a sample; the medical image comprises at least one of a CT image and a magnetic resonance image;
the adversarial network construction module, used for constructing a generative adversarial network model, the generative adversarial network model comprising a generator and a discriminator;
wherein the two-dimensional RF data are flattened into a one-dimensional vector z and the generator restores high-dimensional simulated data from the low-dimensional RF data;
and the training module, used for training the generative adversarial network model with the training sample set until the model reaches a Nash equilibrium, and taking the generator in the Nash-equilibrium state as the image reconstruction network model.
6. The ultrasound image reconstruction system of claim 5, wherein the sample set acquisition module comprises:
a sample image acquisition unit for acquiring a natural image and/or a medical image of a sample;
and the training data determining unit is used for inputting the natural image and/or the medical image into ultrasonic sound field simulation software to obtain ultrasonic radio frequency training data of the sample.
7. The ultrasound image reconstruction system of claim 5, wherein the generator comprises a seven-layer structure: the first layer is a fully connected layer; the second layer is a batch normalization layer; the third layer is a fully connected layer; the fourth layer is a batch normalization layer; the fifth layer is a two-dimensional deconvolution layer with a 4×4 convolution kernel and a stride of 2; the sixth layer is a batch normalization layer; the seventh layer is a deconvolution layer with a 4×4 convolution kernel, a stride of 2 and a sigmoid activation function.
8. The ultrasound image reconstruction system of claim 5, wherein the discriminator comprises a five-layer structure: the first and second layers are two-dimensional convolution layers with 4×4 convolution kernels, a stride of 2 and a leaky rectified linear unit (Leaky ReLU) activation function; the third, fourth and fifth layers are fully connected layers.
CN201910342064.2A 2019-04-26 2019-04-26 Ultrasonic image reconstruction method and system Active CN110074813B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910342064.2A CN110074813B (en) 2019-04-26 2019-04-26 Ultrasonic image reconstruction method and system


Publications (2)

Publication Number Publication Date
CN110074813A CN110074813A (en) 2019-08-02
CN110074813B (en) 2022-03-04

Family

ID=67416866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910342064.2A Active CN110074813B (en) 2019-04-26 2019-04-26 Ultrasonic image reconstruction method and system

Country Status (1)

Country Link
CN (1) CN110074813B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110517249A (en) * 2019-08-27 2019-11-29 中山大学 Imaging method, device, equipment and the medium of ultrasonic elastic image
CN111091603B (en) * 2019-11-04 2023-04-07 深圳先进技术研究院 Ultrasonic imaging method and device, readable storage medium and terminal equipment
CN114401676A (en) * 2019-12-18 2022-04-26 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic imaging method, ultrasonic imaging system, and computer-readable storage medium
CN111402266A (en) * 2020-03-13 2020-07-10 中国石油大学(华东) Method and system for constructing digital core
CN111489404B (en) * 2020-03-20 2023-09-05 深圳先进技术研究院 Image reconstruction method, image processing device and device with storage function
CN111444830B (en) * 2020-03-25 2023-10-31 腾讯科技(深圳)有限公司 Method and device for imaging based on ultrasonic echo signals, storage medium and electronic device
CN111436936B (en) * 2020-04-29 2021-07-27 浙江大学 CT image reconstruction method based on MRI
CN111798400B (en) * 2020-07-20 2022-10-11 福州大学 Non-reference low-illumination image enhancement method and system based on generation countermeasure network
CN113112559A (en) * 2021-04-07 2021-07-13 中国科学院深圳先进技术研究院 Ultrasonic image segmentation method and device, terminal equipment and storage medium
CN113239978A (en) * 2021-04-22 2021-08-10 科大讯飞股份有限公司 Method and device for correlating medical image preprocessing model and analysis model
CN113436109B (en) * 2021-07-08 2022-10-14 清华大学 Ultrafast high-quality plane wave ultrasonic imaging method based on deep learning
CN114739673B (en) * 2022-03-18 2023-05-26 河北工业大学 Bearing vibration signal characteristic interpretability dimension reduction and fault diagnosis method
CN116681790B (en) * 2023-07-18 2024-03-22 脉得智能科技(无锡)有限公司 Training method of ultrasound contrast image generation model and image generation method


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101099681A (en) * 2007-06-07 2008-01-09 复旦大学 Small scale thermo-acoustic imaging de-convolution reconstruction method
US20120220875A1 (en) * 2010-04-20 2012-08-30 Suri Jasjit S Mobile Architecture Using Cloud for Hashimoto's Thyroiditis Disease Classification
US20170337682A1 (en) * 2016-05-18 2017-11-23 Siemens Healthcare Gmbh Method and System for Image Registration Using an Intelligent Artificial Agent
US11120337B2 (en) * 2017-10-20 2021-09-14 Huawei Technologies Co., Ltd. Self-training method and system for semi-supervised learning with generative adversarial networks
CN109087327B (en) * 2018-07-13 2021-07-06 天津大学 Thyroid nodule ultrasonic image segmentation method of cascaded full convolution neural network
CN109242865B (en) * 2018-09-26 2020-09-25 上海联影智能医疗科技有限公司 Medical image automatic partitioning system, method, device and storage medium based on multiple maps
CN109493308B (en) * 2018-11-14 2021-10-26 吉林大学 Medical image synthesis and classification method for generating confrontation network based on condition multi-discrimination
CN109637634B (en) * 2018-12-11 2020-10-02 厦门大学 Medical image synthesis method based on generation countermeasure network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107610193A (en) * 2016-06-23 2018-01-19 西门子保健有限责任公司 Use the image rectification of depth production machine learning model
CN108268870A (en) * 2018-01-29 2018-07-10 重庆理工大学 Multi-scale feature fusion ultrasonoscopy semantic segmentation method based on confrontation study
CN108447049A (en) * 2018-02-27 2018-08-24 中国海洋大学 A kind of digitlization physiology organism dividing method fighting network based on production
CN108550118A (en) * 2018-03-22 2018-09-18 深圳大学 Fuzzy processing method, device, equipment and the storage medium of motion blur image
CN109377520A (en) * 2018-08-27 2019-02-22 西安电子科技大学 Cardiac image registration arrangement and method based on semi-supervised circulation GAN
CN109544517A (en) * 2018-11-06 2019-03-29 中山大学附属第医院 Method and system are analysed in multi-modal ultrasound group credit based on deep learning

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Towards CT-quality Ultrasound Imaging using Deep Learning; Vedula S., et al.; arXiv preprint; 2017-10-17; Abstract, Sections 1-4 *
Super-resolution reconstruction of intravascular ultrasound images using generative adversarial networks (生成对抗网络的血管内超声图像超分辨率重建); 吴洋洋 et al.; 《南方医科大学学报》 (Journal of Southern Medical University); 2019-01-31; Vol. 39, No. 1; pp. 82-87 *
Ultrasound image simulation (超声图像模拟); GeekZW; CSDN; 2018-05-03; pp. 1-2 *

Also Published As

Publication number Publication date
CN110074813A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
CN110074813B (en) Ultrasonic image reconstruction method and system
RU2709437C1 (en) Image processing method, an image processing device and a data medium
CN110739070A (en) brain disease diagnosis method based on 3D convolutional neural network
CN108198154A (en) Image de-noising method, device, equipment and storage medium
US20210134028A1 (en) Method and system for simultaneous quantitative multiparametric magnetic resonance imaging (mri)
JP7367504B2 (en) Electromagnetic field simulation device and method
CN106794007A (en) Network ultrasonic image-forming system
CN109461177B (en) Monocular image depth prediction method based on neural network
CN109875606B (en) Ultrasonic CT sound velocity imaging method based on prior reflection imaging
CN114758230A (en) Underground target body classification and identification method based on attention mechanism
CN117011673B (en) Electrical impedance tomography image reconstruction method and device based on noise diffusion learning
CN110517249A (en) Imaging method, device, equipment and the medium of ultrasonic elastic image
CN107847217A (en) The method and apparatus shown for generating ultrasonic scattering body surface
CN111223162A (en) Deep learning method and system for reconstructing EPAT image
Lee et al. Generating controllable ultrasound images of the fetal head
US20230146192A1 (en) Post processing system and post processing method for reconstructed images and non-transitory computer readable medium
CN115754007A (en) Damage detection method based on acoustic emission technology and tomography technology
CN115346091A (en) Method and device for generating Mura defect image data set
CN110852451B (en) Recursive kernel self-adaptive filtering method based on kernel function
Lv et al. Improved SRCNN for super-resolution reconstruction of retinal images
Chilukuri et al. Analysing Of Image Quality Computation Models Through Convolutional Neural Network
CN110432900A (en) Multi-view learning method and system for rhesus monkey eye movement decision decoding
US20240105328A1 (en) Neural network simulator for ultrasound images and clips
CN116778020B (en) Flexible ultrasonic beam-focusing imaging method and system based on deep learning
US20230360225A1 (en) Systems and methods for medical imaging

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant