CN113344829A - Portable ultrasonic video optimization reconstruction method for multi-channel generation countermeasure network - Google Patents
- Publication number
- CN113344829A (application CN202110430141.7A)
- Authority
- CN
- China
- Prior art date
- Legal status: Pending
Classifications
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/10—Image enhancement or restoration using non-spatial domain filtering
- G06T5/70—Denoising; Smoothing
- G06T2207/10016—Video; Image sequence
- G06T2207/10132—Ultrasound image
- G06T2207/20016—Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
- G06T2207/20064—Wavelet transform [DWT]
- G06T2207/20081—Training; Learning
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/20172—Image enhancement details
- G06T2207/20192—Edge enhancement; Edge preservation
Abstract
A portable ultrasound video optimization reconstruction method based on an ultrasound-aware dynamic-information-integration multi-path generative adversarial network. First, ultrasound-aware dynamic information integration decomposition is applied to the ultrasound B-mode images acquired by a portable ultrasound device, generating a low-rank part and a sparse part so that the two are effectively separated within the image. Multi-path generative adversarial networks on three sub-channels (the original image, the low-rank part, and the sparse part) then learn the acquired B-mode image and its decomposed low-rank and sparse parts respectively, using a dynamic/static information cascade transfer-learning strategy, and the reconstruction result is finally obtained by averaging in a fusion layer. The invention captures dynamic information through the cascade transfer-learning strategy, realizes a coarse-to-fine learning process, and significantly improves the reconstruction quality.
Description
Technical Field
The invention relates to a technology in the field of image processing, and in particular to a portable ultrasound video optimization reconstruction method based on an ultrasound-aware dynamic-information-integration multi-path generative adversarial network.
Background
As a new type of ultrasound imaging device, the portable ultrasound instrument has broad application prospects in community, rural, and remote (telemedicine) healthcare. It offers simple operation, fast imaging, low cost, and portability. However, limited by its small form factor, it suffers from low imaging resolution and strong artifact noise. This low imaging quality severely reduces diagnostic reliability, and traditional post-processing methods based on gray-level distribution, constrained by the limited acquired information, cannot break through this hardware limitation.
Disclosure of Invention
Aiming at the low imaging quality of current portable ultrasound video, the invention provides a portable ultrasound video optimization reconstruction method based on an ultrasound-aware dynamic-information-integration multi-path generative adversarial network. A deep learning method directly constructs an end-to-end ultrasound video enhancement mapping network that processes global features and local details in parallel across multiple paths; a cascade transfer-learning strategy captures dynamic information and realizes a coarse-to-fine learning process; and the conventional adversarial loss and mean square error loss are combined with a novel ultrasound-specific perceptual loss, which evaluates the network training state using deep perceptual features, thereby improving the reconstruction result.
The invention is realized by the following technical scheme:
the invention relates to a portable ultrasonic video optimization reconstruction method for generating a countermeasure network based on ultrasonic perception dynamic information integration multi-channel, which comprises the steps of firstly, carrying out ultrasonic perception dynamic information integration decomposition on an ultrasonic B-mode image acquired by a portable ultrasonic instrument to generate a low-rank part and a sparse part so as to effectively decompose the low-rank part and the sparse part in the image; an anti-network is generated through multiple paths respectively positioned on an original image subchannel, a low-rank subchannel and a sparse subchannel to respectively learn an ultrasonic B-mode image acquired by a portable ultrasonic instrument, a low-rank part and a sparse part obtained by decomposition through a dynamic/static information cascade migration learning strategy, so that a global reconstruction mapping relation is provided in an auxiliary way, the contrast difference between high/low-quality images is captured, tissue structure information in an ultrasonic image is highlighted, speckle textures of the high-quality image are predicted, edge information is reserved, and noise is eliminated; and finally, obtaining a reconstruction result through fusion layer averaging.
Technical effects
The invention addresses the low imaging quality of existing low-end or portable ultrasound equipment, overcomes the hardware limitations of the portable ultrasound instrument, improves its imaging quality, and reconstructs portable ultrasound video with high quality, stability, and continuity.
Compared with the prior art, the invention extracts and exploits dynamic video information through a dynamic-to-static cascade transfer-learning strategy, and proposes an ultrasound-specific perceptual loss in which a pre-trained loss network evaluates the generative model in real time, helping the multi-path generative adversarial network converge quickly.
Drawings
FIG. 1 is a schematic diagram of the overall architecture of the present invention;
FIG. 2 is a schematic diagram of the dynamic/static information cascade transfer-learning strategy;
FIG. 3 is a block diagram of the multi-path generative adversarial network on each sub-channel;
FIG. 4 is a schematic diagram of the detailed structure of the adjacent-frame attention layer in FIG. 3;
FIG. 5 is a diagram of an ultrasound loss network architecture;
FIG. 6 is a comparison graph of the ultrasonic video reconstruction results;
in the figure: (a) is a single frame intercepted from the input video; (b)-(h) show the reconstruction results of the DHE method, an ultrasound-image-based denoising method, the aDWT fusion method, the SRCNN method, the ESRGAN method, the SRFBN method, the two-stage GAN method, and the proposed method, respectively;
FIG. 7 is a schematic diagram illustrating the effects of the embodiment;
in the figure: (a) the vessel cross-sectional area variation curve of the original video; (b) comparison with multi-frame-information-based methods; (c) comparison with deep-learning-based methods.
Detailed Description
As shown in fig. 1, this embodiment provides a portable ultrasound video optimization reconstruction method based on an ultrasound-aware dynamic-information-integration multi-path generative adversarial network: first, ultrasound-aware dynamic information integration decomposition is applied to the ultrasound B-mode images acquired by a portable ultrasound device, generating a low-rank part and a sparse part so that the two are effectively separated within the image; generative adversarial networks on three sub-channels (the original image, the low-rank part, and the sparse part) then learn, via a dynamic/static information cascade transfer-learning strategy, the acquired B-mode image and its decomposed low-rank and sparse parts respectively, assisting a global reconstruction mapping, capturing the contrast difference between high- and low-quality images, highlighting tissue structure information, predicting the speckle texture of the high-quality image, preserving edges, and removing noise; finally, the reconstruction result is obtained by averaging in a fusion layer.
The ultrasound-aware dynamic information integration decomposition refers to: performing low-rank decomposition on each input frame of the ultrasound video, one frame at a time, to obtain a low-rank part and a sparse part, specifically: min_{Z,S} ||Z||_* + λ||S||_1, s.t. D = AZ + S, where ||Z||_* is the nuclear norm (the convex envelope of the rank operator), the minimizing solution Z* is the lowest-rank representation of the current input, A and AZ* denote a known dictionary and its low-rank part respectively, ||S||_1 is the l1 norm of the sparse part S, and λ is a given parameter greater than zero.
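As a concrete illustration, the decomposition above can be approximated with a simple proximal alternating scheme. This is a sketch under the simplifying assumption that the dictionary A is the identity (so D = Z + S); the patent's dictionary-based formulation is more general, and the solver choice here is ours, not the patent's.

```python
import numpy as np

def soft_threshold(X, tau):
    # Elementwise shrinkage: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def low_rank_sparse_decompose(D, lam=None, mu=1.0, n_iter=200):
    """Split a frame D into a low-rank part Z and a sparse part S with
    D ~ Z + S (dictionary A taken as the identity, a simplifying assumption)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))  # common default for robust PCA
    Z = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        Z = svt(D - S, 1.0 / mu)              # low-rank update
        S = soft_threshold(D - Z, lam / mu)   # sparse update
    return Z, S
```

The alternating proximal steps trade off the nuclear-norm and l1 terms; in a full implementation an augmented-Lagrangian solver would enforce the constraint D = AZ + S exactly.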
As shown in fig. 2, the dynamic/static information cascade transfer-learning strategy is as follows. In the dynamic information learning stage, the multi-path generative adversarial network on each sub-channel learns, from multi-angle plane wave video, the mapping from low-quality adjacent frames acquired at times t-1, t, and t+1 to the high-quality intermediate frame; the multi-angle plane wave images acquired at times t-1, t, and t+1 are registered and fused as the high-quality reference image, while the three path branches of the network extract dynamic information features from the single-angle plane wave video and mine the correlation between consecutive frames. In the static information learning stage, the model trained on multi-/single-angle plane wave video data is transferred to assist learning of the mapping between ultrasound image pairs acquired by portable and high-end devices: after overlapped preprocessing of the training images, the multi-path generative adversarial network generates overlapped high-quality images through a coarse-to-fine learning process.
The registration is achieved by minimizing the mutual information loss C_MI between the adjacent frames and the intermediate frame, specifically: C_MI = -(H(I_Neighbor) + H(I_Center) - H(I_Neighbor, I_Center)), where H(·) denotes information entropy and H(I_Neighbor, I_Center) denotes the joint entropy between an adjacent-frame image I_Neighbor and the intermediate-frame image I_Center.
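A histogram-based sketch of this criterion is shown below. The sign convention (loss = negative mutual information, so that minimizing the loss maximizes the mutual information) is our assumption, reconstructed from the entropy definitions above.

```python
import numpy as np

def mutual_information_loss(a, b, bins=32):
    """C_MI = -(H(A) + H(B) - H(A, B)); minimizing it maximizes MI."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()          # joint distribution estimate
    p_a = p_ab.sum(axis=1)            # marginal of A
    p_b = p_ab.sum(axis=0)            # marginal of B

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    return -(entropy(p_a) + entropy(p_b) - entropy(p_ab.ravel()))
```

A registration loop would evaluate this loss over candidate shifts or warps of the adjacent frame and keep the minimizer.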
The fusion uses a wavelet image fusion method: an averaging operation in the wavelet domain yields the final reference image for the dynamic information learning stage.
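A minimal sketch of wavelet-domain averaging, using a hand-rolled one-level Haar transform as a NumPy stand-in for a full wavelet toolbox (our simplification; the patent does not specify the wavelet basis). With a plain averaging rule and a linear transform, the result coincides with pixel-wise averaging, which makes the sketch easy to verify.

```python
import numpy as np

def haar_dwt2(x):
    # One-level 2-D Haar transform (assumes even image dimensions).
    a = (x[0::2] + x[1::2]) / 2.0   # row pairs: average
    d = (x[0::2] - x[1::2]) / 2.0   # row pairs: detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2.
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2] = a + d; x[1::2] = a - d
    return x

def wavelet_average_fuse(img1, img2):
    """Fuse two registered frames by averaging their Haar subbands."""
    bands = [(b1 + b2) / 2.0 for b1, b2 in zip(haar_dwt2(img1), haar_dwt2(img2))]
    return haar_idwt2(*bands)
```

In practice, fusion rules often differ per subband (e.g. a maximum rule on detail bands); the averaging rule is the one the document names.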
The multi-angle plane wave video is preferably obtained by multi-angle compounding of single-angle plane wave video, specifically: the two-dimensional echo signal matrix is S = [s_1^T, s_2^T, ..., s_M^T]^T, where N and M denote the number of transducer elements and the number of plane wave transmissions respectively, s_i is the element data of the i-th transmission, and (·)^T denotes the matrix transpose operator; averaging the echo data yields the multi-angle compound image: s_compound = (1/M) Σ_{i=1}^{M} s_i.
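The compounding step reduces to an average over the transmission axis. The sketch below also demonstrates the expected noise suppression; the roughly 1/sqrt(M) reduction of uncorrelated noise under averaging is a textbook property, not a figure claimed by the patent.

```python
import numpy as np

def compound_plane_waves(S):
    """S: (M, N) matrix stacking the echo vector s_i of each of M
    plane-wave transmissions across N transducer elements.
    Returns the multi-angle compound: the average over transmissions."""
    return S.mean(axis=0)
```

Averaging M independent noisy realizations of the same scene leaves the signal unchanged while shrinking the noise standard deviation, which is why compounded multi-angle frames can serve as high-quality references for single-angle inputs.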
the dynamic information characteristics are as follows: the special information of the tissue or structure motion such as the heart, the blood flow and the like in the ultrasonic video reflects the structure and the physiological state of the human body.
The correlation between consecutive frames refers to the following: because the sonographer's lateral movement and angle changes during scanning are affected by the subject's respiratory motion and other factors, differences exist between the frames of an ultrasound video. Since tissue-motion information is shared across frames, exploiting this multi-frame correlation resolves the discontinuity between reconstructed video frames, and the complementary information carried by neighboring frames gives the reconstructed video richer detail.
In this embodiment, single-angle plane wave video data were collected from 20 volunteers with a Verasonics ultrasound system at steering angles from -16 to 16 degrees. Three consecutive frames at a steering angle of 0 degrees were selected as input samples, and the radio-frequency signals of the 75 steering angles were multi-angle compounded to serve as the reference (teacher), yielding 80 single-/multi-angle video pairs.
The portable/high-end ultrasound image pairs in this embodiment were acquired with a Toshiba Aplio 500 system and an mSonics MU1 portable device, with transducer center frequencies of 7.5 MHz and 6 MHz respectively. Carotid and thyroid image data were obtained from 47 healthy volunteers by an experienced sonographer. To reduce deformation errors during scanning, the physician recorded marker points at the target location, and each volunteer held their breath between the two scans of each data group, yielding 120 groups of in-vivo data.
As shown in fig. 3, the multi-path generative adversarial network comprises a multi-path generator with three path branches and a discriminator, wherein: each path branch contains several residual blocks to improve the feature decomposition capability of the generator and alleviate gradient vanishing; an adjacent frame attention (AFA) layer in the multi-path generator performs deep feature fusion, and the fused features are further processed by several residual blocks to strengthen feature decomposition; the output of the multi-path generator is the reconstructed image at time t, which the discriminator judges as real or fake.
The multi-path generator specifically comprises convolutional layers, activation function layers, several residual blocks, and the adjacent-frame attention layer, wherein: the initial convolution and activation function layers perform shallow feature extraction; the residual blocks are connected by skip connections to deepen the network while avoiding gradient vanishing or explosion; and the adjacent-frame attention layer merges the three paths, fully extracting and integrating the global features of adjacent frames.
The discriminator comprises several convolutional layers, an element-wise summation layer, intermediate layers, a fully connected layer, and a Sigmoid layer, wherein: each convolutional layer uses ReLU as its activation function and applies batch normalization, and the deep multi-convolutional-layer structure extracts information and improves the discrimination capability.
As shown in fig. 4, the adjacent-frame attention layer comprises three convolutional layers, a transposition operator, matrix multipliers, a softmax activation function, and a summator, each branch receiving the shallow features extracted from images acquired at different times. After convolution and transposition, the input of the first path branch is matrix-multiplied with the convolution result of the second path branch and passed through softmax; the result is then matrix-multiplied with the convolution result of the third path branch and summed with the input of the second path branch to produce the output, realizing deep feature fusion.
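The matrix flow just described can be sketched in NumPy. The 1x1 convolutions are modelled as random projection matrices Wq/Wk/Wv (hypothetical weights, for illustrating shapes and data flow only, not the patent's trained parameters).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def adjacent_frame_attention(f_prev, f_mid, f_next, rng=None):
    """Sketch of the adjacent-frame attention layer.
    f_*: (C, L) shallow features of frames t-1, t, t+1, flattened over space."""
    rng = np.random.default_rng(0) if rng is None else rng
    C, L = f_mid.shape
    Wq, Wk, Wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
    q = (Wq @ f_prev).T              # first branch: convolve, then transpose -> (L, C)
    k = Wk @ f_mid                   # second branch: convolve -> (C, L)
    attn = softmax(q @ k, axis=-1)   # (L, L) position-to-position affinity
    v = Wv @ f_next                  # third branch: convolve -> (C, L)
    out = (attn @ v.T).T + f_mid     # weight values, residual-sum with branch 2
    return out
```

The residual sum with the second-branch input keeps the intermediate frame's own features intact while injecting context from the neighboring frames.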
The training of the multi-path generative adversarial network, i.e., solving the min-max game, is specifically: min_G max_D E_i[D(I_i^HQ)] - E_i[D(G(I_i^LQ))] - μ·E_x̂[(||∇_x̂ D(x̂)||_2 - 1)^2], where x̂ denotes uniform samples obtained by linear interpolation between the reconstructed image and a real sample, μ is a user-defined weighting parameter, I_i^HQ and I_i^LQ denote the i-th reference image and input image respectively, G(·) and D(·) denote the outputs of the generator and the discriminator, ||·||_2 is the L2 norm, E(·) is the expectation operator, and ∇ is the gradient operator.
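For a linear critic D(x) = w·x, the gradient penalty term can be evaluated in closed form, since the gradient of D with respect to its input is w everywhere. This toy sketch (our simplification, not the patent's full network) isolates the term that drives the critic toward unit gradient norm on interpolated samples; the penalty weight value is a hypothetical placeholder.

```python
import numpy as np

def gradient_penalty_linear(w, x_real, x_fake, rng=None, mu=10.0):
    """Gradient penalty for a linear critic D(x) = w . x.
    For this critic grad_x D = w everywhere, so the interpolation point
    only matters for general (nonlinear) critics; it is shown for shape."""
    rng = np.random.default_rng(0) if rng is None else rng
    eps = rng.uniform()
    x_hat = eps * x_real + (1 - eps) * x_fake   # uniform sample on the segment
    grad = w                                    # closed-form gradient of w . x_hat
    return mu * (np.linalg.norm(grad) - 1.0) ** 2
```

In the full training objective this term is subtracted from the critic's score difference, pushing the discriminator toward 1-Lipschitz behavior.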
The loss function of the generator comprises the adversarial loss l_Gen, the mean square error loss l_MSE, and the ultrasound-specific perceptual loss l_US. The complete loss is: l_G = α·l_Gen + β·l_MSE + γ·l_US, where α, β, and γ are the corresponding term weights.
The ultrasound-specific perceptual loss is computed by an ultrasound loss network. As shown in fig. 5, the ultrasound loss network is a binary classification network, specifically comprising convolutional layers, activation function layers, pooling layers, and fully connected layers, wherein the convolutional layers perform feature extraction and abstraction and the fully connected layers map the features to one dimension.
The ultrasound loss network is preferably trained with unpaired portable/high-end ultrasound images using mean square error as the loss function; the classification labels of high-quality images acquired by the medical-grade device and low-quality images acquired by the portable device are set to 1 and 0 respectively, and the output of the fully connected layer of the ultrasound loss network is taken as the ultrasound image quality evaluation score.
During training of the ultrasound loss network, when the quality evaluation score is above 0.5 the network classifies the input image as high quality; otherwise the input image is classified as low quality. Under this rule, the score of a high-quality test image is close to 1, and that of a low-quality test image is close to 0. The ultrasound-specific perceptual loss function is therefore: l_US = (1/M) Σ_{i=1}^{M} (1 - U(I_i^SR))^2, where U(·) denotes the output of the second fully connected layer of the ultrasound loss network, I_i^SR denotes the output of the ultrasound-aware dynamic-information-integration multi-path generative adversarial network, and M denotes the batch training size.
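Given the scalar quality scores produced by the loss network, the ultrasound-specific perceptual term and the combined generator loss reduce to a few lines. The squared-distance-from-1 form follows the 0/1 labeling convention above; the default weight values are hypothetical placeholders, not values from the patent.

```python
import numpy as np

def ultrasound_perceptual_loss(scores):
    """l_US: mean squared distance of the batch quality scores U(G(x))
    from 1, pushing generated frames toward the high-quality class."""
    s = np.asarray(scores, dtype=float)
    return float(np.mean((1.0 - s) ** 2))

def generator_loss(l_gen, l_mse, l_us, alpha=1e-3, beta=1.0, gamma=6e-3):
    """l_G = alpha*l_Gen + beta*l_MSE + gamma*l_US (weights hypothetical)."""
    return alpha * l_gen + beta * l_mse + gamma * l_us
```

A batch of perfectly "high-quality-looking" outputs (all scores 1) contributes zero perceptual loss, while confidently low-quality outputs are penalized maximally.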
The following table shows the comparison of the reconstruction result indexes of the present embodiment and the prior art.
Experimental results show that the method reconstructs ultrasound video with clear tissue structure, complete speckle information, and good contrast. As shown in fig. 6 and table 1, compared with the original video, the PSNR, SSIM, and MI of the reconstructed video improve by 57.33%, 87.50%, and 47.89% respectively, the ultrasound quality score improves 15-fold, and the NIQE score decreases by 64.32%. As shown in fig. 7, the vessel-area-versus-time curves produced by the method show higher consistency across systolic and diastolic phases, demonstrating that the method reconstructs portable ultrasound video with higher continuity.
In summary, the ultrasound-aware dynamic-information-integration multi-path generative adversarial network method reconstructs portable ultrasound video with high quality, stability, and continuity, and can help portable devices gain wide application in clinical medicine.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
Claims (11)
1. A portable ultrasound video optimization reconstruction method based on an ultrasound-aware dynamic-information-integration multi-path generative adversarial network, characterized in that: first, ultrasound-aware dynamic information integration decomposition is applied to the ultrasound B-mode images acquired by a portable ultrasound device, generating a low-rank part and a sparse part so that the two are effectively separated within the image; generative adversarial networks on three sub-channels (the original image, the low-rank part, and the sparse part) then learn, via a dynamic/static information cascade transfer-learning strategy, the acquired B-mode image and its decomposed low-rank and sparse parts respectively, assisting a global reconstruction mapping, capturing the contrast difference between high- and low-quality images, highlighting tissue structure information in the ultrasound image, predicting the speckle texture of the high-quality image, preserving edge information, and removing noise; finally, the reconstruction result is obtained by averaging in a fusion layer;
the dynamic/static information cascade transfer-learning strategy is as follows: in the dynamic information learning stage, the multi-path generative adversarial network on each sub-channel learns, from multi-angle plane wave video, the mapping from low-quality adjacent frames acquired at times t-1, t, and t+1 to the high-quality intermediate frame; the multi-angle plane wave images acquired at times t-1, t, and t+1 are registered and fused as the high-quality reference image, while the three path branches of the network extract dynamic information features from the single-angle plane wave video and mine the correlation between consecutive frames; in the static information learning stage, the model trained on multi-/single-angle plane wave video data is transferred to assist learning of the mapping between ultrasound image pairs acquired by portable and high-end devices: after overlapped preprocessing of the training images, the multi-path generative adversarial network generates overlapped high-quality images through a coarse-to-fine learning process.
2. The portable ultrasound video optimization reconstruction method based on the ultrasound-aware dynamic-information-integration multi-path generative adversarial network according to claim 1, characterized in that the ultrasound-aware dynamic information integration decomposition refers to: performing low-rank decomposition on each input frame of the ultrasound video one by one to obtain a low-rank part and a sparse part, specifically: min_{Z,S} ||Z||_* + λ||S||_1, s.t. D = AZ + S, where ||Z||_* is the nuclear norm (the convex envelope of the rank operator), the minimizing solution Z* is the lowest-rank representation of the current input, A and AZ* denote a known dictionary and its low-rank part respectively, ||S||_1 is the l1 norm of the sparse part S, and λ is a given parameter greater than zero.
3. The portable ultrasound video optimization reconstruction method based on the ultrasound-aware dynamic-information-integration multi-path generative adversarial network according to claim 1, characterized in that the registration is achieved by minimizing the mutual information loss C_MI between the adjacent frames and the intermediate frame, specifically: C_MI = -(H(I_Neighbor) + H(I_Center) - H(I_Neighbor, I_Center)), where H(·) denotes information entropy and H(I_Neighbor, I_Center) denotes the joint entropy between an adjacent-frame image I_Neighbor and the intermediate-frame image I_Center.
4. The portable ultrasound video optimization reconstruction method based on the ultrasound-aware dynamic-information-integration multi-path generative adversarial network according to claim 1, characterized in that the fusion uses a wavelet image fusion method: an averaging operation in the wavelet domain yields the final reference image for the dynamic information learning stage.
5. The portable ultrasound video optimization reconstruction method based on the ultrasound-aware dynamic-information-integration multi-path generative adversarial network according to claim 1, characterized in that the multi-angle plane wave video is preferably obtained by multi-angle compounding of single-angle plane wave video, specifically: the two-dimensional echo signal matrix is S = [s_1^T, s_2^T, ..., s_M^T]^T, where N and M denote the number of transducer elements and the number of plane wave transmissions respectively, s_i is the element data of the i-th transmission, and (·)^T denotes the matrix transpose operator; averaging the echo data yields the multi-angle compound image: s_compound = (1/M) Σ_{i=1}^{M} s_i.
6. The method for the optimized reconstruction of the portable ultrasound video based on the ultrasound-aware dynamic-information-integration multi-path generative adversarial network as claimed in claim 1, wherein the multi-path generative adversarial network comprises: a multi-path generator with three pathway branches and a discriminator, wherein: each pathway branch is provided with several residual blocks to improve the feature-decomposition capability of the generator and alleviate the vanishing-gradient problem; an adjacent-frame attention layer in the multi-path generator performs deep feature fusion, and the fused features are processed by further residual blocks to strengthen the feature-decomposition capability; the output of the multi-path generator is the reconstructed image at time t, which the discriminator judges as real or fake.
7. The method for the optimized reconstruction of the portable ultrasound video based on the ultrasound-aware dynamic-information-integration multi-path generative adversarial network as claimed in claim 6, wherein the multi-path generator specifically comprises: a convolutional layer, an activation-function layer, several residual blocks and an adjacent-frame attention layer, wherein: shallow feature extraction is carried out by the initial convolution and activation-function layers; the residual blocks are connected with skip connections so that the network can be deepened without gradient vanishing or explosion; and the adjacent-frame attention layer merges the three pathways and fully extracts and integrates the global features of the adjacent frames.
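The skip-connected residual structure can be sketched in NumPy with toy 1x1 convolutions (layer widths and the kernel size are illustrative, not the patent's actual architecture):

```python
import numpy as np

def conv1x1(x, w):
    """Toy 1x1 convolution: per-pixel linear map over channels.

    x: features of shape (C, H, W); w: weights of shape (C_out, C)."""
    return np.einsum('chw,oc->ohw', x, w)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = x + F(x): the identity skip path is what keeps gradients flowing."""
    return x + conv1x1(relu(conv1x1(x, w1)), w2)

def generator_pathway(x, weights):
    """One pathway branch: a chain of skip-connected residual blocks."""
    for w1, w2 in weights:
        x = residual_block(x, w1, w2)
    return x
```

With all residual weights at zero each block reduces to the identity, which illustrates why stacking many such blocks is safe for very deep generators.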
8. The method as claimed in claim 6, wherein the discriminator comprises: several convolutional layers, an element-wise summation layer, intermediate layers, a fully connected layer and a Sigmoid layer, wherein: each convolutional layer adopts ReLU as the activation function and applies batch normalization, and the deep multi-convolutional-layer structure extracts information to improve the discrimination capability.
9. The method as claimed in claim 6, wherein the adjacent-frame attention layer comprises: three convolutional layers, a transpose unit, matrix multipliers, a softmax activation function and a summation unit, each branch receiving the shallow features extracted from images acquired at different time slots, wherein: the input of the first pathway branch, after convolution and transposition, is matrix-multiplied by the convolution result of the second pathway branch and passed through the softmax activation; the result is then matrix-multiplied by the convolution result of the third pathway branch and summed with the input of the second pathway branch to obtain the output, thereby realizing deep feature fusion.
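The branch arithmetic described above follows the familiar non-local attention pattern. A minimal NumPy sketch with flattened spatial positions (the shapes and the weight names wf, wg, wh are illustrative assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adjacent_frame_attention(x1, x2, x3, wf, wg, wh):
    """Fuse three pathway features of shape (C, P), P = flattened pixels.

    Branch 1 is convolved (toy 1x1 conv = matrix product) and transposed,
    multiplied with convolved branch 2, softmax-normalized into a P x P
    affinity map, applied to convolved branch 3, then summed with the raw
    branch-2 input (the skip path named in the claim)."""
    f = (wf @ x1).T                    # (P, C') query-like term
    g = wg @ x2                        # (C', P) key-like term
    h = wh @ x3                        # (C, P) value-like term
    attn = softmax(f @ g, axis=-1)     # (P, P) position affinities, rows sum to 1
    return h @ attn.T + x2             # aggregate values, add branch-2 skip
```

The attention map lets every output position draw on global context from an adjacent frame rather than only its local receptive field.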
10. The portable ultrasonic video optimization and reconstruction method based on the ultrasound-aware dynamic-information-integration multi-path generative adversarial network as claimed in any one of the preceding claims, wherein the training of the multi-path generative adversarial network, i.e. solving the min-max game, specifically comprises: min_G max_D E[D(I_ref^i)] − E[D(G(I_in^i))] − μ·E[(||∇_x̂ D(x̂)||_2 − 1)^2], wherein: x̂ denotes a uniform sample obtained by linear interpolation between the reconstructed image and the real sample, μ is a user-defined weighting parameter, I_ref^i and I_in^i respectively denote the i-th pair of reference image and input image, G(·) and D(·) respectively denote the outputs of the generator and the discriminator, and ||·||_2, E(·) and ∇ respectively denote the L2 norm, the expectation and the gradient operator;
the loss function of the generator of the multi-path generative adversarial network comprises: the adversarial loss l_Gen, the mean-square-error loss l_MSE and the ultrasound-specific perceptual loss l_US; the complete mathematical characterization of the loss function is: l_G = α·l_Gen + β·l_MSE + γ·l_US, wherein: α, β and γ denote the corresponding term weights.
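The objective pairs a WGAN-GP-style critic penalty with the weighted generator loss l_G. A minimal NumPy sketch of the scalar computations (the weights α, β, γ and μ are placeholder values, and the gradient norms would in practice come from an autodiff framework rather than being passed in):

```python
import numpy as np

def interpolate(real, fake, rng):
    """x_hat: uniform linear interpolation between a real sample and a reconstruction."""
    eps = rng.uniform()
    return eps * real + (1.0 - eps) * fake

def gradient_penalty(grad_norms, mu=10.0):
    """mu * E[(||grad_x_hat D(x_hat)||_2 - 1)^2] over a batch of interpolates."""
    grad_norms = np.asarray(grad_norms)
    return mu * np.mean((grad_norms - 1.0) ** 2)

def generator_loss(l_gen, l_mse, l_us, alpha=1e-3, beta=1.0, gamma=1e-2):
    """l_G = alpha * l_Gen + beta * l_MSE + gamma * l_US."""
    return alpha * l_gen + beta * l_mse + gamma * l_us
```

The penalty vanishes exactly when the critic's gradient norm equals one at every interpolate, which is the 1-Lipschitz condition the WGAN-GP formulation enforces softly.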
11. The method for optimizing and reconstructing a portable ultrasound video based on an ultrasound-aware dynamic-information-integration multi-path generative adversarial network as claimed in claim 10, wherein the ultrasound-specific perceptual loss is calculated by an ultrasound loss network, the ultrasound loss network being a binary-classification network that specifically comprises: convolutional layers, activation-function layers, pooling layers and a fully connected layer, wherein: the convolutional layers extract and abstract features, and the fully connected layer maps the features to one dimension;
the ultrasound loss network is preferably trained with unpaired ultrasound images from portable and high-end devices, using the mean square error as the loss function; the classification labels of the high-quality images acquired by medical equipment and the low-quality images acquired by portable devices are set to 1 and 0 respectively, and the output of the fully connected layer of the ultrasound loss network is taken as the ultrasound image-quality evaluation score.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110430141.7A CN113344829A (en) | 2021-04-21 | 2021-04-21 | Portable ultrasonic video optimization reconstruction method for multi-channel generation countermeasure network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113344829A true CN113344829A (en) | 2021-09-03 |
Family
ID=77468332
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110430141.7A Pending CN113344829A (en) | 2021-04-21 | 2021-04-21 | Portable ultrasonic video optimization reconstruction method for multi-channel generation countermeasure network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113344829A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107845065A (en) * | 2017-09-15 | 2018-03-27 | 西北大学 | Super-resolution image reconstruction method and device |
CN109711283A (en) * | 2018-12-10 | 2019-05-03 | 广东工业大学 | A block expression recognition algorithm combining a double dictionary and an error matrix |
CN112329685A (en) * | 2020-11-16 | 2021-02-05 | 常州大学 | Method for detecting crowd abnormal behaviors through fusion type convolutional neural network |
Non-Patent Citations (1)
Title |
---|
ZIXIA ZHOU et al.: "Handheld Ultrasound Video High-Quality Reconstruction Using a Low-Rank Representation Multipathway Generative Adversarial Network", IEEE Transactions on Neural Networks and Learning Systems *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Nair et al. | Deep learning to obtain simultaneous image and segmentation outputs from a single input of raw ultrasound channel data | |
Van Sloun et al. | Deep learning in ultrasound imaging | |
CN108629816B (en) | Method for reconstructing thin-layer magnetic resonance image based on deep learning | |
Dar et al. | Image synthesis in multi-contrast MRI with conditional generative adversarial networks | |
US11490877B2 (en) | System and method of identifying characteristics of ultrasound images | |
Zhou et al. | Image quality improvement of hand-held ultrasound devices with a two-stage generative adversarial network | |
Zhou et al. | High spatial–temporal resolution reconstruction of plane-wave ultrasound images with a multichannel multiscale convolutional neural network | |
CN101203183B (en) | Ultrasound imaging system with pixel oriented processing | |
Loizou et al. | Despeckle filtering for ultrasound imaging and video, volume I: Algorithms and software | |
US20100004540A1 (en) | Dual path processing for optimal speckle tracking | |
CN104042247A (en) | Ultrasound ARFI Displacement Imaging Using an Adaptive Time Instance | |
KR20080082302A (en) | Ultrasound system and method for forming ultrasound image | |
CN106680825A (en) | Acoustic array imaging system and method thereof | |
CN111429474A (en) | Mammary gland DCE-MRI image focus segmentation model establishment and segmentation method based on mixed convolution | |
Chen et al. | ApodNet: Learning for high frame rate synthetic transmit aperture ultrasound imaging | |
CN109793506A (en) | A kind of contactless radial artery Wave shape extracting method | |
Zhou et al. | Ultrafast plane wave imaging with line-scan-quality using an ultrasound-transfer generative adversarial network | |
US20230281837A1 (en) | Method and system for registering images acquired with different modalities for generating fusion images from registered images acquired with different modalities | |
Hosseinpour et al. | Temporal super resolution of ultrasound images using compressive sensing | |
WO2023000244A1 (en) | Image processing method and system, and application of image processing method | |
Zhou et al. | Handheld ultrasound video high-quality reconstruction using a low-rank representation multipathway generative adversarial network | |
CN113344829A (en) | Portable ultrasonic video optimization reconstruction method for multi-channel generation countermeasure network | |
Jahren et al. | Reverberation suppression in echocardiography using a causal convolutional neural network | |
Zuo et al. | Phase constraint improves ultrasound image quality reconstructed using deep neural network | |
Toffali et al. | Improving the Quality of Monostatic Synthetic-Aperture Ultrasound Imaging Through Deep-Learning-Based Beamforming |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20210903 |