CN113344829A - Portable ultrasound video optimization reconstruction method based on a multi-path generative adversarial network - Google Patents


Info

Publication number: CN113344829A
Application number: CN202110430141.7A
Authority: CN (China)
Prior art keywords: ultrasonic, layer, image, portable, ultrasound
Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Original language: Chinese (zh)
Inventors: 郭翌, 周子夏, 汪源源
Current and original assignee: Fudan University
Application filed by Fudan University; priority to CN202110430141.7A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/10: Image enhancement or restoration using non-spatial domain filtering
    • G06T5/70: Denoising; Smoothing
    • G06T2207/10016: Video; Image sequence
    • G06T2207/10132: Ultrasound image
    • G06T2207/20016: Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20064: Wavelet transform [DWT]
    • G06T2207/20081: Training; Learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/20192: Edge enhancement; Edge preservation

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Image Processing (AREA)

Abstract

A portable ultrasound video optimization reconstruction method based on an ultrasound-aware, dynamic-information-integrating multi-path generative adversarial network. First, ultrasound-aware dynamic information integration decomposition is applied to the B-mode images acquired by a portable ultrasound device, generating a low-rank part and a sparse part so as to effectively separate the two within each image. A multi-path generative adversarial network, with paths on the original-image, low-rank, and sparse sub-channels, then learns from the acquired B-mode image and the decomposed low-rank and sparse parts through a dynamic/static cascade transfer learning strategy, and the reconstruction result is finally obtained by averaging in a fusion layer. The invention captures dynamic information through the cascade transfer learning strategy, realizes a coarse-to-fine learning process, and markedly improves the reconstruction quality.

Description

Portable ultrasound video optimization reconstruction method based on a multi-path generative adversarial network
Technical Field
The invention relates to a technique in the field of image processing, in particular to a portable ultrasound video optimization reconstruction method based on an ultrasound-aware, dynamic-information-integrating multi-path generative adversarial network.
Background
As a new type of ultrasound imaging device, the portable ultrasound scanner has broad application prospects in community, rural, and remote healthcare. It is simple to operate, fast to image, inexpensive, and easy to carry. However, limited by its physical size, the portable scanner suffers from low imaging resolution and strong artifact noise. This low imaging quality severely reduces diagnostic reliability, and traditional post-processing methods based on gray-level distribution are constrained by the limited acquired information and cannot break through the hardware limitation.
Disclosure of Invention
Aiming at the low imaging quality of current portable ultrasound video, the invention provides a portable ultrasound video optimization reconstruction method based on an ultrasound-aware, dynamic-information-integrating multi-path generative adversarial network. A deep learning method directly constructs an end-to-end ultrasound video enhancement mapping network in which global features and local details are processed in parallel across multiple paths; dynamic information is captured through a cascade transfer learning strategy, realizing a coarse-to-fine learning process; and the conventional adversarial loss and mean-square-error loss are combined with a novel ultrasound-specific perceptual loss, so that deep perceptual features are used to evaluate the training state of the network and improve the reconstruction quality.
The invention is realized by the following technical scheme:
the invention relates to a portable ultrasonic video optimization reconstruction method for generating a countermeasure network based on ultrasonic perception dynamic information integration multi-channel, which comprises the steps of firstly, carrying out ultrasonic perception dynamic information integration decomposition on an ultrasonic B-mode image acquired by a portable ultrasonic instrument to generate a low-rank part and a sparse part so as to effectively decompose the low-rank part and the sparse part in the image; an anti-network is generated through multiple paths respectively positioned on an original image subchannel, a low-rank subchannel and a sparse subchannel to respectively learn an ultrasonic B-mode image acquired by a portable ultrasonic instrument, a low-rank part and a sparse part obtained by decomposition through a dynamic/static information cascade migration learning strategy, so that a global reconstruction mapping relation is provided in an auxiliary way, the contrast difference between high/low-quality images is captured, tissue structure information in an ultrasonic image is highlighted, speckle textures of the high-quality image are predicted, edge information is reserved, and noise is eliminated; and finally, obtaining a reconstruction result through fusion layer averaging.
Technical effects
The invention as a whole remedies the low imaging quality of existing low-end or portable ultrasound devices: it overcomes the hardware limitations of the portable scanner, improves its imaging quality, and reconstructs portable ultrasound video with high quality, stability, and temporal continuity.
Compared with the prior art, the invention extracts and exploits dynamic video information through a dynamic-to-static cascade transfer learning strategy, and proposes an ultrasound-specific perceptual loss in which a pre-trained loss network evaluates the generative model in real time, helping the multi-path generative adversarial network converge quickly.
Drawings
FIG. 1 is a schematic diagram of the overall architecture of the invention;
FIG. 2 is a schematic diagram of the dynamic/static cascade transfer learning strategy;
FIG. 3 is a block diagram of the generative adversarial network on each sub-channel;
FIG. 4 is a detailed structure of the adjacent-frame attention layer in FIG. 3;
FIG. 5 is a diagram of the ultrasound loss network architecture;
FIG. 6 compares ultrasound video reconstruction results;
in the figure: (a) is a single frame taken from the input video, and (b)-(h) show, respectively, the reconstruction results of DHE, an ultrasound-image denoising method, aDWT fusion, SRCNN, ESRGAN, SRFBN, a two-stage GAN, and the proposed method;
FIG. 7 illustrates the effect of the embodiment;
in the figure: (a) is the vessel cross-sectional area curve of the original video, (b) compares methods based on multi-frame information, and (c) compares methods based on deep learning.
Detailed Description
As shown in FIG. 1, this embodiment provides a portable ultrasound video optimization reconstruction method based on an ultrasound-aware, dynamic-information-integrating multi-path generative adversarial network: first, ultrasound-aware dynamic information integration decomposition is applied to the B-mode images acquired by a portable ultrasound device, generating a low-rank part and a sparse part so as to effectively separate the two within each image; a multi-path generative adversarial network, with paths on the original-image, low-rank, and sparse sub-channels, then learns from the acquired B-mode image and the decomposed low-rank and sparse parts through a dynamic/static cascade transfer learning strategy, assisting in providing a global reconstruction mapping, capturing the contrast difference between high- and low-quality images, highlighting tissue structure information, predicting the speckle texture of high-quality images, preserving edge information, and removing noise; finally, the reconstruction result is obtained by averaging in a fusion layer.
The ultrasound-aware dynamic information integration decomposition is as follows: a low-rank decomposition is performed on each input frame of the ultrasound video, yielding a low-rank part and a sparse part, specifically:

$$\min_{Z,S}\ \|Z\|_* + \lambda \|S\|_1 \qquad \text{s.t. } D = AZ + S$$

where $\|Z\|_*$ denotes the nuclear norm, the convex envelope of the rank operator; the minimizer $Z^*$ is the lowest-rank representation of the current input; $A$ and $AZ^*$ denote a known dictionary and its low-rank part, respectively; $\|S\|_1$ is the $\ell_1$ norm of the sparse part $S$; and $\lambda$ is a given parameter greater than zero.
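With the dictionary $A$ taken as the identity, the decomposition above reduces to the classical robust PCA / principal component pursuit problem. A minimal sketch, solving it by ADMM with singular-value and elementwise soft-thresholding (the function names and the fixed-penalty heuristics are illustrative choices, not from the patent):

```python
import numpy as np

def svd_shrink(X, tau):
    # Proximal operator of the nuclear norm: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def soft_thresh(X, tau):
    # Proximal operator of the l1 norm: elementwise soft-thresholding.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(D, lam=None, mu=None, n_iter=500):
    """Split D into a low-rank part L and a sparse part S with D = L + S
    (i.e. the patent's decomposition with A = identity)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))        # standard PCP weight
    if mu is None:
        mu = D.size / (4.0 * np.abs(D).sum())    # common step-size heuristic
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                          # scaled dual variable
    for _ in range(n_iter):
        L = svd_shrink(D - S + Y / mu, 1.0 / mu)
        S = soft_thresh(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
    return L, S
```

In the patent's pipeline each video frame would be decomposed this way, the low-rank part capturing slowly varying background structure and the sparse part capturing speckle and fine moving detail.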
As shown in FIG. 2, the dynamic/static cascade transfer learning strategy is as follows. In the dynamic information learning stage, the generative adversarial network on each sub-channel learns, within multi-angle plane-wave video, the mapping from low-quality adjacent frames acquired at times t-1, t, and t+1 to a high-quality intermediate frame; the multi-angle plane-wave images acquired at times t-1, t, and t+1 are registered and fused to serve as the high-quality reference image, and the three path branches of the multi-path generative adversarial network extract dynamic information features from the single-angle plane-wave video and mine the correlation between consecutive frames. In the static information learning stage, the model trained on multi/single-angle plane-wave video data is transferred to assist in learning the mapping between ultrasound image pairs acquired by portable and high-end devices: after overlap preprocessing, the training images acquired by the portable/high-end devices are fed through the multi-path generative adversarial network, and overlapping high-quality images are generated through a coarse-to-fine learning process.
The registration is achieved by minimizing the mutual-information loss $C_{MI}$ between the adjacent frames and the intermediate frame, specifically:

$$C_{MI} = -\,MI(I_{Neighbor}, I_{Center})$$

$$MI(I_{Neighbor}, I_{Center}) = H(I_{Neighbor}) + H(I_{Center}) - H(I_{Neighbor}, I_{Center})$$

where $H(\cdot)$ denotes information entropy and $H(I_{Neighbor}, I_{Center})$ is the joint entropy between an adjacent-frame image $I_{Neighbor}$ and the intermediate-frame image $I_{Center}$.
The fusion adopts wavelet-domain image fusion: an averaging operation is performed in the wavelet domain to obtain the final reference image for the dynamic information learning stage.
The multi-angle plane-wave video is preferably obtained by multi-angle compounding of the single-angle plane-wave video, specifically: let the two-dimensional echo signal matrix be

$$D = [d_1, d_2, \dots, d_M]$$

where $N$ and $M$ denote the number of transducer elements and the number of plane-wave transmissions, respectively, the array data of the $i$-th transmission is

$$d_i = (d_{i,1}, d_{i,2}, \dots, d_{i,N})^T$$

and $(\cdot)^T$ denotes the matrix transpose operator. Averaging the echo data across transmissions yields the multi-angle compound data, from which the compound image is formed:

$$\bar{d} = \frac{1}{M} \sum_{i=1}^{M} d_i$$
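Multi-angle compounding is a plain average across the per-angle acquisitions: coherent tissue signal is preserved while uncorrelated noise is averaged down by roughly a factor of sqrt(M). A sketch (the synthetic frames stand in for per-angle beamformed images, which the patent does not specify in detail):

```python
import numpy as np

def compound(frames):
    """Compound M single-angle frames by averaging along the angle axis."""
    return np.mean(np.asarray(frames), axis=0)
```

With M = 16 angles, the residual noise level of the compound frame should be roughly a quarter of a single frame's.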
the dynamic information characteristics are as follows: the special information of the tissue or structure motion such as the heart, the blood flow and the like in the ultrasonic video reflects the structure and the physiological state of the human body.
The correlation between consecutive frames refers to the following: because the sonographer's lateral movement and angle changes during scanning are affected by the subject's respiratory motion and other factors, differences caused by multiple factors exist between the frames of an ultrasound video. Since information about tissue motion is shared across frames, exploiting multi-frame correlation alleviates discontinuity between reconstructed video frames, and the complementary information carried by related frames gives the reconstructed video richer detail.
The single-angle plane-wave video data in this embodiment were collected from 20 volunteers with a Verasonics ultrasound system over a steering-angle range of -16 to 16 degrees. Three consecutive frames at a steering angle of 0 degrees are selected as input samples, and radio-frequency signals from 75 steering angles are multi-angle compounded to serve as the reference (teacher), yielding 80 single/multi-angle video pairs.
The ultrasound image pairs acquired by portable/high-end devices in this embodiment come from a Toshiba Aplio 500 system and an mSonics MU1 portable device, whose transducer center frequencies are 7.5 MHz and 6 MHz, respectively. Carotid and thyroid image data were acquired from 47 healthy volunteers by an experienced sonographer. During scanning, to reduce deformation errors, the physician recorded marker points at the target location, and each volunteer held their breath between the two scans of each data group; 120 groups of in vivo data were obtained in total.
As shown in FIG. 3, the multi-path generative adversarial network comprises a multi-path generator with three path branches and a discriminator, wherein: each path branch contains several residual blocks to improve the feature-decomposition capability of the generator and alleviate vanishing gradients; an adjacent-frame attention (AFA) layer in the multi-path generator performs deep feature fusion, and further residual blocks process the fused features to strengthen feature decomposition; the output of the multi-path generator is the reconstructed image at time t, which the discriminator judges as real or fake.
Specifically, the multi-path generator consists of a convolutional layer, an activation layer, several residual blocks, and the adjacent-frame attention layer, wherein: the initial convolution and activation layers extract shallow features; the residual blocks are connected with skip connections so that the network can be deepened without gradient vanishing or explosion; and the adjacent-frame attention layer merges the three paths, fully extracting and integrating the global features of adjacent frames.
The discriminator comprises several convolutional layers, an element-wise summation layer, intermediate layers, a fully connected layer, and a Sigmoid layer, wherein: each convolutional layer uses the ReLU activation function with batch normalization, and the deep multi-convolution structure extracts information to improve the discrimination capability.
As shown in FIG. 4, the adjacent-frame attention layer comprises three convolutional layers, a transpose operator, matrix multipliers, a softmax activation, and a summation unit, each branch receiving shallow features extracted from images acquired at different times, wherein: the input of the first path branch is convolved and transposed, matrix-multiplied with the convolved input of the second path branch, and passed through softmax; the result is then matrix-multiplied with the convolved input of the third path branch and summed with the input of the second path branch to give the output, realizing deep feature fusion.
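The adjacent-frame attention computation can be sketched with the 1x1 convolutions modeled as channel-mixing matrices; the random weights, shapes, and function name below are illustrative stand-ins, not the patent's trained parameters:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adjacent_frame_attention(f_prev, f_mid, f_next, rng):
    """Fuse shallow features of frames t-1, t, t+1 (each C x H x W)."""
    C, H, W = f_mid.shape
    # 1x1 convolutions modeled as random C x C channel-mixing matrices.
    Wq, Wk, Wv = (rng.standard_normal((C, C)) / np.sqrt(C) for _ in range(3))
    q = Wq @ f_prev.reshape(C, -1)            # C x HW  (first branch, then transposed)
    k = Wk @ f_mid.reshape(C, -1)             # C x HW  (second branch)
    v = Wv @ f_next.reshape(C, -1)            # C x HW  (third branch)
    attn = softmax(q.T @ k, axis=-1)          # HW x HW affinity between positions
    out = (v @ attn.T).reshape(C, H, W)       # attention-weighted mix of third branch
    return out + f_mid                        # residual sum with the second branch
```

Each output position is a convex combination (softmax weights) of the third branch's features, added back onto the middle frame's features, which matches the summation step described above.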
The multi-path generative adversarial network is trained by solving the minimax game:

$$\min_G \max_D\ \mathbb{E}\big[D(I^{HQ})\big] - \mathbb{E}\big[D(G(I^{LQ}))\big] - \mu\,\mathbb{E}_{\hat{x}}\Big[\big(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1\big)^2\Big]$$

where $\hat{x}$ denotes samples obtained by uniform linear interpolation between the reconstructed image and the real sample, $\mu$ is a user-defined weighting parameter, $I_i^{HQ}$ and $I_i^{LQ}$ denote the $i$-th pair of reference and input images, $G(\cdot)$ and $D(\cdot)$ denote the outputs of the generator and the discriminator, and $\|\cdot\|_2$, $\mathbb{E}(\cdot)$, and $\nabla$ denote the $L_2$ norm, expectation, and gradient operators, respectively.
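The gradient-penalty term samples points on straight lines between real and generated images and pushes the critic's input-gradient norm toward 1. A minimal sketch with a linear critic D(x) = w.x, whose gradient with respect to x is w everywhere, so the penalty is analytic (the linear critic is an illustrative stand-in for the trained discriminator, which would need autodiff):

```python
import numpy as np

def gradient_penalty(w, x_real, x_fake, mu, rng):
    """WGAN-GP style penalty mu * E[(||grad D(x_hat)|| - 1)^2] for D(x) = w @ x."""
    eps = rng.random((x_real.shape[0], 1))
    x_hat = eps * x_real + (1.0 - eps) * x_fake   # uniform interpolation samples
    grad = np.broadcast_to(w, x_hat.shape)        # grad_x D(x) = w for a linear critic
    return mu * np.mean((np.linalg.norm(grad, axis=1) - 1.0) ** 2)
```

For this toy critic the penalty vanishes exactly when ||w|| = 1, i.e. when the critic is 1-Lipschitz, which is what the term enforces softly during training.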
The loss function of the generator comprises the adversarial loss $l_{Gen}$, the mean-square-error loss $l_{MSE}$, and the ultrasound-specific perceptual loss $l_{US}$; the complete loss is $l_G = \alpha l_{Gen} + \beta l_{MSE} + \gamma l_{US}$, where $\alpha$, $\beta$, $\gamma$ are the corresponding term weights.
The ultrasound-specific perceptual loss is computed by an ultrasound loss network. As shown in FIG. 5, the ultrasound loss network is a binary classification network consisting of convolutional layers, activation layers, pooling layers, and fully connected layers, wherein: the convolutional layers extract and abstract features, and the fully connected layers map the features to one dimension.
The ultrasound loss network is preferably trained with unpaired ultrasound images from portable and high-end devices, using the mean square error as the loss function; the class labels of high-quality images acquired by the high-end medical device and low-quality images acquired by the portable device are set to 1 and 0, respectively, and the output of the fully connected layer of the ultrasound loss network is taken as the ultrasound image quality score.
During training of the ultrasound loss network, an input image whose quality score is above 0.5 is classified as a high-quality image; otherwise it is classified as a low-quality image. Under this rule, the quality score of a high-quality test image is close to 1, and that of a low-quality test image is close to 0. The ultrasound-specific perceptual loss function is therefore:

$$l_{US} = \frac{1}{M}\sum_{i=1}^{M}\big\|1 - U(\hat{I}_i)\big\|_2^2$$

where $U(\cdot)$ denotes the output of the second fully connected layer of the ultrasound loss network, $\hat{I}_i$ denotes the output of the ultrasound-aware, dynamic-information-integrating multi-path generative adversarial network, and $M$ denotes the batch size.
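A sketch of the ultrasound-specific perceptual loss, under the assumption (the source renders the formula only as images) that it is the mean squared distance between each generated image's quality score U and the high-quality label 1:

```python
import numpy as np

def us_perceptual_loss(scores):
    """Assumed form of l_US: mean squared gap between quality scores U(I_hat)
    and the high-quality label 1, averaged over a batch of M images."""
    scores = np.asarray(scores, dtype=float)
    return float(np.mean((1.0 - scores) ** 2))
```

Minimizing this term drives the generator to produce images the pre-trained loss network scores as high quality; the gradient simply vanishes once every score reaches 1.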
The following table compares the reconstruction metrics of this embodiment with the prior art.
(Table 1 is rendered as an image in the source; its key metrics are summarized in the following paragraph.)
Experimental results show that the method reconstructs ultrasound video with clear tissue structure, complete speckle information, and good contrast. As shown in FIG. 6 and Table 1, compared with the original video, the PSNR, SSIM, and MI of the reconstructed video improve by 57.33%, 87.50%, and 47.89%, respectively; the ultrasound quality score improves 15-fold; and the NIQE score drops by 64.32%. As shown in FIG. 7, the vessel-area-versus-time curves drawn by the method are highly consistent across the systolic and diastolic phases, indicating that portable ultrasound video with higher temporal continuity can be reconstructed.
In summary, the ultrasound-aware, dynamic-information-integrating multi-path generative adversarial network can reconstruct portable ultrasound video with high quality, stability, and continuity, helping portable devices gain wide clinical adoption.
The foregoing embodiments may be modified in many ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims; all changes that come within the meaning and range of equivalency of the claims are intended to be embraced therein.

Claims (11)

1. A portable ultrasound video optimization reconstruction method based on an ultrasound-aware, dynamic-information-integrating multi-path generative adversarial network, characterized in that: first, ultrasound-aware dynamic information integration decomposition is applied to the ultrasound B-mode images acquired by a portable ultrasound device, generating a low-rank part and a sparse part so as to effectively separate the two within each image; a multi-path generative adversarial network, with paths on the original-image, low-rank, and sparse sub-channels, learns from the acquired B-mode image and the decomposed low-rank and sparse parts through a dynamic/static cascade transfer learning strategy, thereby assisting in providing a global reconstruction mapping, capturing the contrast difference between high- and low-quality images, highlighting tissue structure information in the ultrasound image, predicting the speckle texture of high-quality images, preserving edge information, and removing noise; finally, the reconstruction result is obtained by averaging in a fusion layer;

the dynamic/static cascade transfer learning strategy is: in the dynamic information learning stage, the generative adversarial network on each sub-channel learns, within multi-angle plane-wave video, the mapping from low-quality adjacent frames acquired at times t-1, t, and t+1 to a high-quality intermediate frame; the multi-angle plane-wave images acquired at times t-1, t, and t+1 are registered and fused to serve as the high-quality reference image, and the three path branches of the multi-path generative adversarial network extract dynamic information features from the single-angle plane-wave video and mine the correlation between consecutive frames; in the static information learning stage, the model trained on multi/single-angle plane-wave video data is transferred to assist in learning the mapping between ultrasound image pairs acquired by portable and high-end devices: after overlap preprocessing, the training images acquired by the portable/high-end devices are fed through the multi-path generative adversarial network, and overlapping high-quality images are generated through a coarse-to-fine learning process.
2. The portable ultrasound video optimization reconstruction method based on an ultrasound-aware, dynamic-information-integrating multi-path generative adversarial network according to claim 1, characterized in that the ultrasound-aware dynamic information integration decomposition performs a low-rank decomposition on each input frame of the ultrasound video, yielding a low-rank part and a sparse part, specifically:

$$\min_{Z,S}\ \|Z\|_* + \lambda \|S\|_1 \qquad \text{s.t. } D = AZ + S$$

where $\|Z\|_*$ denotes the nuclear norm, the convex envelope of the rank operator; the minimizer $Z^*$ is the lowest-rank representation of the current input; $A$ and $AZ^*$ denote a known dictionary and its low-rank part, respectively; $\|S\|_1$ is the $\ell_1$ norm of the sparse part $S$; and $\lambda$ is a given parameter greater than zero.
3. The method according to claim 1, characterized in that the registration is achieved by minimizing the mutual-information loss $C_{MI}$ between the adjacent frames and the intermediate frame, specifically:

$$C_{MI} = -\,MI(I_{Neighbor}, I_{Center})$$

$$MI(I_{Neighbor}, I_{Center}) = H(I_{Neighbor}) + H(I_{Center}) - H(I_{Neighbor}, I_{Center})$$

where $H(\cdot)$ denotes information entropy and $H(I_{Neighbor}, I_{Center})$ is the joint entropy between an adjacent-frame image $I_{Neighbor}$ and the intermediate-frame image $I_{Center}$.
4. The portable ultrasound video optimization reconstruction method based on an ultrasound-aware, dynamic-information-integrating multi-path generative adversarial network according to claim 1, characterized in that the fusion adopts wavelet-domain image fusion, performing an averaging operation in the wavelet domain to obtain the final reference image for the dynamic information learning stage.
5. The method according to claim 1, characterized in that the multi-angle plane-wave video is preferably obtained by multi-angle compounding of the single-angle plane-wave video, specifically: let the two-dimensional echo signal matrix be

$$D = [d_1, d_2, \dots, d_M]$$

where $N$ and $M$ denote the number of transducer elements and the number of plane-wave transmissions, respectively, the array data of the $i$-th transmission is

$$d_i = (d_{i,1}, d_{i,2}, \dots, d_{i,N})^T$$

and $(\cdot)^T$ denotes the matrix transpose operator; averaging the echo data across transmissions yields the multi-angle compound data, from which the compound image is formed:

$$\bar{d} = \frac{1}{M} \sum_{i=1}^{M} d_i$$
6. The method according to claim 1, characterized in that the multi-path generative adversarial network comprises a multi-path generator with three path branches and a discriminator, wherein: each path branch contains several residual blocks to improve the feature-decomposition capability of the generator and alleviate vanishing gradients; an adjacent-frame attention layer in the multi-path generator performs deep feature fusion, and further residual blocks process the fused features to strengthen feature decomposition; the output of the multi-path generator is the reconstructed image at time t, which the discriminator judges as real or fake.
7. The method according to claim 6, characterized in that the multi-path generator specifically comprises a convolutional layer, an activation layer, several residual blocks, and the adjacent-frame attention layer, wherein: the initial convolution and activation layers extract shallow features; the residual blocks are connected with skip connections so that the network can be deepened without gradient vanishing or explosion; and the adjacent-frame attention layer merges the three paths, fully extracting and integrating the global features of adjacent frames.
8. The method according to claim 6, characterized in that the discriminator comprises several convolutional layers, an element-wise summation layer, intermediate layers, a fully connected layer, and a Sigmoid layer, wherein: each convolutional layer uses the ReLU activation function with batch normalization, and the deep multi-convolution structure extracts information to improve the discrimination capability.
9. The method according to claim 6, characterized in that the adjacent-frame attention layer comprises three convolutional layers, a transpose operator, matrix multipliers, a softmax activation, and a summation unit, each branch receiving shallow features extracted from images acquired at different times, wherein: the input of the first path branch is convolved and transposed, matrix-multiplied with the convolved input of the second path branch, and passed through softmax; the result is then matrix-multiplied with the convolved input of the third path branch and summed with the input of the second path branch to give the output, realizing deep feature fusion.
10. The portable ultrasound video optimized-reconstruction method based on the ultrasound-aware dynamic-information-integration multi-path generative adversarial network as claimed in any one of the preceding claims, wherein training the multi-path generative adversarial network, i.e. solving the maximum-minimum game, specifically comprises:

$$\min_G \max_D \; E\big[D(I_{\mathrm{ref}}^{i})\big] - E\big[D(G(I_{\mathrm{in}}^{i}))\big] - \mu\, E\big[\big(\big\|\nabla_{\hat{x}} D(\hat{x})\big\|_2 - 1\big)^2\big]$$

wherein: $\hat{x}$ denotes a uniform sample obtained by linear interpolation between the reconstructed image and the real sample, $\mu$ is a user-defined weighting parameter, $I_{\mathrm{ref}}^{i}$ and $I_{\mathrm{in}}^{i}$ denote the $i$-th pair of reference and input images, $G(\cdot)$ and $D(\cdot)$ denote the outputs of the generator and the discriminator respectively, and $\|\cdot\|_2$, $E(\cdot)$ and $\nabla$ denote the $L_2$ norm, the expectation and the gradient operator respectively;
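The gradient-penalty term of the objective can be illustrated with a toy linear discriminator, for which the gradient with respect to the interpolated sample is known in closed form. The weight value of mu and the linear form of D are assumptions for the demonstration, not the patent's settings:

```python
import numpy as np

rng = np.random.default_rng(2)
dim, mu = 16, 10.0

w = rng.standard_normal(dim)       # toy linear discriminator D(x) = w.x
x_real = rng.standard_normal(dim)  # stands in for a reference frame
x_fake = rng.standard_normal(dim)  # stands in for a reconstructed frame

# uniform sample via linear interpolation between real and reconstruction
eps = rng.uniform()
x_hat = eps * x_real + (1.0 - eps) * x_fake

# for a linear D the gradient w.r.t. x_hat is simply w; the penalty
# drives ||grad D(x_hat)||_2 toward 1 (a Lipschitz constraint)
grad = w
penalty = mu * (np.linalg.norm(grad) - 1.0) ** 2
assert penalty >= 0.0
```

Sampling along the line between real and reconstructed images is what makes the Lipschitz constraint apply in the region the discriminator actually evaluates during training.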
the loss function of the generator of the multi-path generative adversarial network comprises: an adversarial loss $l_{Gen}$, a mean-square-error loss $l_{MSE}$, and an ultrasound-specific perceptual loss $l_{US}$; the complete mathematical characterization of the loss function is: $l_G = \alpha\, l_{Gen} + \beta\, l_{MSE} + \gamma\, l_{US}$, wherein: $\alpha$, $\beta$ and $\gamma$ denote the corresponding term weights.
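A minimal sketch of the weighted-sum generator loss above; the weight values chosen here are illustrative placeholders, not the patent's values:

```python
import numpy as np

def generator_loss(l_gen, l_mse, l_us, alpha=1e-3, beta=1.0, gamma=1e-2):
    """Weighted sum l_G = alpha*l_Gen + beta*l_MSE + gamma*l_US.
    alpha, beta, gamma are assumed example weights."""
    return alpha * l_gen + beta * l_mse + gamma * l_us

def mse(ref, rec):
    """Pixel-wise mean square error between reference and reconstruction."""
    return float(np.mean((ref - rec) ** 2))

rng = np.random.default_rng(3)
ref = rng.random((32, 32))                         # toy high-end frame
rec = ref + 0.05 * rng.standard_normal((32, 32))   # toy reconstruction
l = generator_loss(l_gen=0.7, l_mse=mse(ref, rec), l_us=0.4)
assert l > 0.0
```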
11. The method for optimizing and reconstructing a portable ultrasound video based on an ultrasound-aware dynamic-information-integration multi-path generative adversarial network as claimed in claim 10, wherein the ultrasound-specific perceptual loss is computed by an ultrasound loss network, the ultrasound loss network being a binary classification network that specifically comprises: convolutional layers, activation-function layers, pooling layers, and a fully connected layer, wherein: the convolutional layers extract and abstract features, and the fully connected layer maps the features to one dimension;
the ultrasound loss network is preferably trained with unpaired ultrasound images from portable and high-end devices, using the mean square error as the loss function; the classification labels of high-quality images acquired by medical-grade equipment and low-quality images acquired by portable equipment are set to 1 and 0 respectively, and the output of the fully connected layer of the ultrasound loss network is taken as the ultrasound image-quality evaluation score.
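One plausible reading of how the quality score feeds the perceptual loss, sketched in NumPy: the final fully connected layer maps features to one dimension, and the squared difference of the scores for reference and reconstruction is penalized. The feature dimension and the MSE-of-scores form are assumptions for illustration:

```python
import numpy as np

def quality_score(features, w, b):
    """Final fully connected layer mapping a feature vector to one
    dimension; its output is used as the image-quality score
    (trained toward 1 for high-end images, 0 for portable ones)."""
    return float(features @ w + b)

def perceptual_loss(score_ref, score_rec):
    """Squared difference between quality scores of the reference
    and the reconstruction (assumed form of the perceptual loss)."""
    return (score_ref - score_rec) ** 2

rng = np.random.default_rng(4)
w, b = rng.standard_normal(8), 0.0
feat_ref = rng.standard_normal(8)  # toy features of a high-end frame
feat_rec = rng.standard_normal(8)  # toy features of a reconstruction
l_us = perceptual_loss(quality_score(feat_ref, w, b),
                       quality_score(feat_rec, w, b))
assert l_us >= 0.0
```

Training the scorer on unpaired portable/high-end images is what lets this loss be computed without a pixel-aligned reference at inference time.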
CN202110430141.7A 2021-04-21 2021-04-21 Portable ultrasonic video optimization reconstruction method for multi-channel generation countermeasure network Pending CN113344829A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110430141.7A CN113344829A (en) 2021-04-21 2021-04-21 Portable ultrasonic video optimization reconstruction method for multi-channel generation countermeasure network


Publications (1)

Publication Number Publication Date
CN113344829A true CN113344829A (en) 2021-09-03

Family

ID=77468332

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110430141.7A Pending CN113344829A (en) 2021-04-21 2021-04-21 Portable ultrasonic video optimization reconstruction method for multi-channel generation countermeasure network

Country Status (1)

Country Link
CN (1) CN113344829A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107845065A (en) * 2017-09-15 2018-03-27 西北大学 Super-resolution image reconstruction method and device
CN109711283A (en) * 2018-12-10 2019-05-03 广东工业大学 A kind of joint doubledictionary and error matrix block Expression Recognition algorithm
CN112329685A (en) * 2020-11-16 2021-02-05 常州大学 Method for detecting crowd abnormal behaviors through fusion type convolutional neural network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZIXIA ZHOU et al.: "Handheld Ultrasound Video High-Quality Reconstruction Using a Low-Rank Representation Multipathway Generative Adversarial Network", IEEE Transactions on Neural Networks and Learning Systems *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210903