CN115601572A - Ultrasonic phased array image optimization reconstruction method and system based on semi-supervised CycleGan network - Google Patents

Info

Publication number: CN115601572A
Application number: CN202211337748.1A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 李兵, 高飞, 陈磊, 尚中昱, 刘春满
Current assignee: Xi'an Jiaotong University
Original assignee: Xi'an Jiaotong University
Application filed by Xi'an Jiaotong University
Priority to CN202211337748.1A
Publication of CN115601572A

Classifications

    • G06V10/74 Image or video pattern matching; proximity measures in feature spaces
    • G06N3/08 Computing arrangements based on biological models; neural networks; learning methods
    • G06V10/764 Image or video recognition or understanding using classification, e.g. of video objects
    • G06V10/774 Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V10/82 Image or video recognition or understanding using neural networks


Abstract

The invention discloses an ultrasonic phased array image optimization reconstruction method and system based on a semi-supervised CycleGan network. Training samples that correspond to each other in the two domains of ultrasound image and reconstructed image are generated; the adversarial loss function, the cycle consistency loss function, the individual loss function and the true difference loss function are superposed to obtain the overall loss function; a semi-supervised CycleGan network structure is built on this loss function and trained with the training samples, and, after network training, an ultrasound image actually detected by the ultrasonic phased array is input into the trained semi-supervised CycleGan network structure to obtain a reconstructed image of the detected defect. The method realizes different-domain reconstruction of small-target images without cropping the original image; for the ultrasound image, the position information of the defect is preserved and the defect artifacts are eliminated, so the characterized defect morphology is more accurate.

Description

Ultrasonic phased array image optimization reconstruction method and system based on semi-supervised CycleGan network
Technical Field
The invention belongs to the technical field of industrial ultrasonic nondestructive testing, and particularly relates to an ultrasonic phased array image optimization reconstruction method and system based on a semi-supervised CycleGan network.
Background
Ultrasonic and ultrasonic phased array testing are among the most important means of industrial nondestructive testing today. The detection result presents the interior of the inspected object intuitively as a two-dimensional image, mainly characterizing the position, geometric morphology and other features of defects. At present, accurate quantitative characterization of the internal defects of the object under test requires improving the hardware performance of the equipment, which incurs an enormous hardware development cost. Moreover, owing to the scattering of echo signals, the imaged defect morphology contains artifacts, manifested as blurred defect edges, distorted defect characterization and the like. With the development of deep learning, related network models are gradually being transplanted to the ultrasonic phased array, and researchers hope to use deep-learning image processing methods to perform artifact-free optimized reconstruction of ultrasonic phased array images.
At present, in the field of medical ultrasonic phased array imaging, ultrasound image reconstruction by means of deep learning has already been realized. However, compared with industrial ultrasonic phased array images, medical ultrasound images have rich content and strong correlation between target and background, whereas in industrial testing the defects inside the inspected object are few. As a result, the industrial image consists mostly of (black) background, and the characterized defect is only weakly correlated with the image background.
An existing method optimizes the performance of the discriminator in a CycleGan network by comparing features of the input image and the reconstructed image. However, for an ultrasonic defect image, defect information is partially lost after multilayer convolution, so this method brings little improvement in ultrasound image reconstruction.
How to accurately reconstruct the two-dimensional image formed by an ultrasonic phased array (hereinafter the ultrasound image) into a two-dimensional image of the defect (hereinafter the reconstructed image) in industrial testing is a problem still to be solved. Research shows that related methods cannot be transplanted directly to industrial ultrasonic phased array image reconstruction. First, in industrial testing the distribution and morphology of defects inside an object are often random, and a supervised deep learning network cannot meet the open-set requirements of practical industrial inspection. Second, increasing the proportion of defects in the image by cropping does let the network learn the image features, but it not only incurs a huge image preprocessing cost, it also loses the important characterization of defect position carried by the ultrasound image. For these reasons, the above methods cannot guarantee the accuracy of the reconstructed image information.
Disclosure of Invention
In view of the above deficiencies in the prior art, the technical problem to be solved by the invention is to provide an ultrasonic phased array image optimization reconstruction method and system based on a semi-supervised CycleGan network, for high-precision defect reconstruction from ultrasonic phased array detection images.
The invention adopts the following technical scheme:
The ultrasonic phased array image optimization reconstruction method based on the semi-supervised CycleGan network comprises the following steps:
S1, generating training samples which correspond to each other in the two domains of ultrasound image and reconstructed image;
S2, superposing the adversarial loss function, the cycle consistency loss function, the individual loss function and the true difference loss function to obtain the overall loss function;
and S3, building a semi-supervised CycleGan network structure based on the loss function obtained in step S2, training it with the training samples generated in step S1, and, after network training, inputting an ultrasonic image actually detected by the ultrasonic phased array into the trained semi-supervised CycleGan network structure to obtain a reconstructed image of the detected defect.
Specifically, in step S1, ultrasonic imaging of an artificially designed defect is performed in a two-dimensional plane, and the obtained ultrasonic image of the defect and the original defect image form a training sample; or a machined test block containing accurate defect information is actually measured and the corresponding reconstruction is computed, and a test block defect image and the corresponding ultrasonic image are obtained from the machining information to form a training sample.
Specifically, in step S2, the loss function $\mathcal{L}$ is:

$$\mathcal{L} = \mathcal{L}_{GAN}^{GA} + \mathcal{L}_{GAN}^{GB} + \lambda_{cyc}\,\mathcal{L}_{cyc} + \lambda_{idt}\,\mathcal{L}_{idt} + \lambda_{aut}\,\mathcal{L}_{aut}$$

where $\lambda_{cyc}$, $\lambda_{idt}$ and $\lambda_{aut}$ are adjustable hyper-parameters, $\mathcal{L}_{GAN}^{GA}$ and $\mathcal{L}_{GAN}^{GB}$ are the adversarial loss functions, $\mathcal{L}_{cyc}$ is the cycle consistency loss function, $\mathcal{L}_{idt}$ is the individual loss function, and $\mathcal{L}_{aut}$ is the true difference loss function.
Further, the adversarial loss functions $\mathcal{L}_{GAN}^{GA}$ and $\mathcal{L}_{GAN}^{GB}$ are respectively:

$$\mathcal{L}_{GAN}^{GA} = \mathbb{E}_{b\sim P_B}[\log DA(b)] + \mathbb{E}_{a\sim P_A}[\log(1 - DA(GA(a)))]$$

$$\mathcal{L}_{GAN}^{GB} = \mathbb{E}_{a\sim P_A}[\log DB(a)] + \mathbb{E}_{b\sim P_B}[\log(1 - DB(GB(b)))]$$

where $GA(a)$ denotes the domain-B image $\tilde{b}$ output by generator GA for a sample $a$ of domain A ($a \sim P_A$), $GB(b)$ denotes the domain-A image $\tilde{a}$ output by generator GB for a sample $b$ of domain B ($b \sim P_B$), $DA(b)$ denotes the classification score of discriminator DA for image $b$, $DB(a)$ denotes the classification score of discriminator DB for image $a$, $a \sim P_A$ denotes that image $a$ obeys the probability distribution of domain A, and $b \sim P_B$ denotes that image $b$ obeys the probability distribution of domain B.
Further, the cycle consistency loss function $\mathcal{L}_{cyc}$ is:

$$\mathcal{L}_{cyc} = \mathbb{E}_{a\sim P_A}\big[\lVert GB(GA(a)) - a\rVert_1\big] + \mathbb{E}_{b\sim P_B}\big[\lVert GA(GB(b)) - b\rVert_1\big]$$

where $GA(a)$ denotes the domain-B image $\tilde{b}$ output by generator GA for a sample $a \sim P_A$ of domain A, $GB(b)$ denotes the domain-A image $\tilde{a}$ output by generator GB for a sample $b \sim P_B$ of domain B, and $a \sim P_A$ and $b \sim P_B$ denote that images $a$ and $b$ obey the probability distributions of domain A and domain B, respectively.
Further, the individual loss function $\mathcal{L}_{idt}$ is:

$$\mathcal{L}_{idt} = \mathbb{E}_{b\sim P_B}\big[\lVert GA(b) - b\rVert_1\big] + \mathbb{E}_{a\sim P_A}\big[\lVert GB(a) - a\rVert_1\big]$$

where $GA$ and $GB$ are the two generators, and $a \sim P_A$ and $b \sim P_B$ denote that images $a$ and $b$ obey the probability distributions of domain A and domain B, respectively.
Further, the true difference loss function $\mathcal{L}_{aut}$ is:

$$\mathcal{L}_{aut} = \mathbb{E}_{a\sim P_A}\big[\mathrm{MSE}(GA(a),\, b)\big] + \mathbb{E}_{b\sim P_B}\big[\mathrm{MSE}(GB(b),\, a)\big]$$

where $GA(a)$ denotes the domain-B image $\tilde{b}$ output by generator GA for a sample $a \sim P_A$ of domain A, $GB(b)$ denotes the domain-A image $\tilde{a}$ output by generator GB for a sample $b \sim P_B$ of domain B, $b$ is the real domain-B image paired with $a$ (and $a$ the real domain-A image paired with $b$), and MSE is the mean square error loss function.
Specifically, in step S3, the semi-supervised CycleGan network is trained as follows:

Unpaired ultrasound and defect images are input simultaneously into the two different generators from both sides, the corresponding images conforming to the opposite domains are output simultaneously, and the discriminators judge the images output by the generators.

With ultrasound image $a$ as the input side, in the $a \rightarrow \tilde{b}$ stage the image $a$ distributed in domain A passes through generator GA, which outputs its defect image $\tilde{b}$ distributed in domain B. On the one hand, the similarity of the defect image $\tilde{b}$ to domain B is evaluated by discriminator DA through the adversarial loss function $\mathcal{L}_{GAN}^{GA}$; on the other hand, the true difference loss function $\mathcal{L}_{aut}$ quantifies the degree of difference between the defect image $\tilde{b}$ generated by GA and the real defect image $b$ corresponding to ultrasound image $a$.

Then, in the $\tilde{b} \rightarrow \tilde{a}$ stage, the domain-B defect image $\tilde{b}$ is input to generator GB, which outputs the corresponding domain-A ultrasound image $\tilde{a}$; this output is compared for domain similarity with the network's input image $a$ through the cycle consistency loss function $\mathcal{L}_{cyc}$.

With defect image $b$ as the input side, the $b \rightarrow \tilde{a}$ and $\tilde{a} \rightarrow \tilde{b}$ stages proceed in turn, constrained respectively by the discriminator DB adversarial loss function $\mathcal{L}_{GAN}^{GB}$, the true difference loss function $\mathcal{L}_{aut}$, and the cycle consistency loss function $\mathcal{L}_{cyc}$.

After the bidirectional output of a batch of images is finished, the semi-supervised CycleGan optimizes and adjusts the network parameters according to the computed loss terms. When training is complete, the semi-supervised CycleGan network realizes both the conversion from ultrasound image to defect image and the conversion from defect image to ultrasound image.
Further, the semi-supervised CycleGan network comprises a generator and a discriminator, wherein the generator comprises a Conv2d layer, a LeakyRelu layer, an InstanceNorm layer, a Relu layer, a TransConv2d layer and a Tanh layer; the discriminator adopts a pixel-by-pixel scoring structure built by multilayer convolution, and the pixel-by-pixel scoring structure comprises a Conv2d layer, a LeakyRelu layer and an InstanceNorm layer; the Conv2d layer is a two-dimensional convolution layer, and the TransConv2d layer is a two-dimensional transposed convolution layer.
In a second aspect, an embodiment of the present invention provides an ultrasonic phased array image optimization reconstruction system based on a semi-supervised CycleGan network, including:
a sample module, which generates one-to-one corresponding training samples in the two domains of ultrasound image and reconstructed image;
a function module, which superposes the adversarial loss function, the cycle consistency loss function, the individual loss function and the true difference loss function to obtain the overall loss function;
and a reconstruction module, which builds a semi-supervised CycleGan network structure based on the loss function obtained by the function module, trains it with the training samples generated by the sample module, and uses the trained semi-supervised CycleGan network structure to obtain a reconstructed image of the detected defect.
Compared with the prior art, the invention at least has the following beneficial effects:
the ultrasonic phased array image optimization reconstruction method based on the semi-supervised CycleGan network firstly generates a one-to-one corresponding training set required by a training model, secondly introduces a real difference loss function on the basis of the original unsupervised CycleGan network model, and carries out difference comparison on the different domain images respectively output by the generators GA and GB and the real different domain images in the training process, thereby enabling the network to carry out more targeted optimization on the defect appearance in the images. Finally, an image of ultrasonic phased array detection is input into the trained network, and the network can perform different-domain reconstruction aiming at the appearance of the defect on the basis of not losing defect positioning information, so that the influence of the artifact in the ultrasonic image on the appearance of the defect is eliminated.
Furthermore, a reasonably objective detection environment can be set up in simulation software according to the parameters of the actual detection instrument and the inspected material, and defect features of arbitrary position and morphology can be specified. Alternatively, detecting a test block with actually machined defects using the ultrasonic phased array allows the defect image and the corresponding ultrasonic image to be acquired more accurately. With either method, the training set requires no further image cropping and no specific defect labeling, which reduces the workload of producing the training set. In addition, comparing the two methods, generating the data set with simulation software is cheaper and the samples are more flexible.
Further, the loss function $\mathcal{L}$ sums sub-loss functions acting on every part of the network model, which shows that the accuracy of different-domain image reconstruction is the combined result of all the sub-loss functions. In particular, during back-propagation the gradient of each sub-loss function is obtained by differentiating $\mathcal{L}$, which makes the implementation convenient in code.
Further, the adversarial loss function is the discriminator's estimate of the distribution authenticity of the image output by the generator, where distribution authenticity refers to whether the output image is close to the distribution of the expected domain. When the estimate is correct, the network model improves the performance of the generator; when the estimate is wrong, the network model optimizes the discriminator.
Further, the cycle consistency loss function is a similarity comparison between a single-side input and the corresponding output image. Because the image domain is converted several times in the network model, the cycle consistency loss ensures that only the style domain of the two images changes while the content stays consistent, thereby realizing accurate reconstruction of the artifact-laden defects in the ultrasound image.
Further, the individual loss function is another term that evaluates generator performance. It estimates the distribution of the generator's output when an image of the generator's own output domain is input. Its purpose is to ensure that the generator outputs an in-domain image when an in-domain image is input, improving the stability of the generator from the other side.
Further, the true difference loss directly evaluates the similarity between the different-domain images output by the generators and the corresponding real different-domain images in the training set. It emphasizes whether the target contents in the images are consistent, raising, in the form of a loss function, the attention paid to content in the different-domain conversion task of small-target images.
Furthermore, a synchronous two-side sample input strategy is adopted to train the network model. First, only two different-domain images are drawn at random from the training set in each batch, which reduces the amount of training data network training depends on and improves training efficiency. Second, by configuring loss functions on each generator and discriminator separately and combining them with the loss evaluated over the whole network, the network can be optimized in a more targeted way while the overall performance is guaranteed.
Furthermore, the generator and the discriminator are the basic units with which the network model realizes different-domain image conversion: the generator performs different-domain reconstruction of the input image, and the discriminator estimates the different-domain similarity of the generator's output. The generator adopts a Unet framework, which extracts high-dimensional features of the image while retaining its low-semantic information. The discriminator adopts a multilayer convolution structure that lifts only the channel dimension without changing the image size, so the reconstructed image can be scored pixel by pixel and reconstruction accuracy is guaranteed at the pixel level. The network model optimizes the parameters of the generator or the discriminator according to the correctness of the discriminator's scores, improving the corresponding component, so that after many iterations the performance of both is improved.
It is understood that the beneficial effects of the second aspect can be referred to the related description of the first aspect, and are not described herein again.
In conclusion, the method realizes different-domain reconstruction of small-target images without cropping the original image; for the ultrasound image, the position information of the defect is preserved and the defect artifacts are eliminated, so the characterized defect morphology is more accurate; simulation-generated different-domain images serve as the training set, and the defects in each image need no specific labeling, which greatly reduces the cost of generating the training set.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
Fig. 1 is a schematic diagram of a training sample, in which (a) is a schematic diagram of an artificial defect design containing specific information, (b) is a defect image of a square hole, (c) is an ultrasound image generated by a simulation corresponding to the defect image of the square hole, (d) is a defect image of a circular hole, and (e) is an ultrasound image generated by a simulation corresponding to the defect image of the circular hole;
FIG. 2 is a schematic diagram of a semi-supervised CycleGan network training strategy, wherein (a) is a schematic diagram of a training process of the CycleGan network of the present invention, (B) is a flowchart of a training process from domain A to domain B, and (c) is a flowchart of a training process from domain B to domain A;
FIG. 3 is a schematic diagram of a network architecture of a generator;
FIG. 4 is a schematic diagram of a network structure of the arbiter;
FIG. 5 is a before-and-after comparison of bidirectional reconstruction of ultrasonic phased array simulation images using the method of the present invention, wherein (a) is an input ultrasound image, (b) is the reconstructed defect image, (c) is the real defect image, (d) is an input defect image, (e) is the reconstructed ultrasound image, and (f) is the real ultrasound image;
fig. 6 is a comparison diagram before and after actual ultrasonic phased array image defect reconstruction by using the method, wherein, (a) is a defect image acquired by an ultrasonic phased array, and (b) is a defect image reconstructed by the network provided by the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
In the description of the present invention, it should be understood that the terms "comprises" and/or "comprising" indicate the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations; e.g., "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used to describe preset ranges, etc. in embodiments of the present invention, these preset ranges should not be limited to these terms. These terms are only used to distinguish preset ranges from one another. For example, the first preset range may also be referred to as a second preset range, and similarly, the second preset range may also be referred to as the first preset range, without departing from the scope of the embodiments of the present invention.
The word "if" as used herein may be interpreted as "at 8230; \8230;" or "when 8230; \8230;" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrase "if determined" or "if detected (a stated condition or event)" may be interpreted as "upon determining" or "in response to determining" or "upon detecting (a stated condition or event)" or "in response to detecting (a stated condition or event)", depending on the context.
Various structural schematics according to the disclosed embodiments of the invention are shown in the drawings. The figures are not drawn to scale, wherein certain details are exaggerated and possibly omitted for clarity of presentation. The shapes of various regions, layers and their relative sizes and positional relationships shown in the drawings are merely exemplary, and deviations may occur in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions, according to actual needs.
The invention provides an ultrasonic phased array image optimization reconstruction method based on a semi-supervised CycleGan network. Two image domains, formed by manually designed defect schematics (hereinafter defect images) and the corresponding ultrasound images, serve as the training sample set; during training, the proposed semi-supervised CycleGan network realizes mutual reconstruction between the two domains, and during testing only the ultrasound image of a defect needs to be input to output the reconstructed image. Its greatest advantage is the extremely small training sample size: compared with training sets of thousands of images, the proposed network model meets the training requirement with only dozens of paired ultrasound and reconstruction images, and the neural network learns the corresponding features of the ultrasound and reconstructed images without any preprocessing of the ultrasonic phased array imaging.
The invention relates to an ultrasonic phased array image optimization reconstruction method based on a semi-supervised CycleGan network, which comprises the following steps of:
s1, generating a training sample
The samples used to train the proposed network structure need to correspond one to one across the two domains of ultrasound image and reconstructed image; they are produced by either of two methods:
the first method is to perform ultrasonic imaging of a defect designed manually in a two-dimensional plane using simulation software, thereby acquiring an ultrasonic image of the defect and an original defect image.
The second method is to utilize ultrasonic phased array equipment to perform actual measurement and corresponding reconstruction calculation on the test block which is processed and contains accurate defect information, so as to respectively obtain a test block defect image and a corresponding ultrasonic image from the processing information.
Referring to fig. 1, a square hole with a side length of 1mm and a circular hole with a diameter of 1mm are respectively designed at two designated positions. And calculating by simulation software to obtain an ultrasonic image corresponding to the defect image. By the method, the defect image and the ultrasonic image in pairs can be designed and acquired at any position, so that a training sample is generated.
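As a concrete illustration (not part of the claimed method), the paired samples produced by either method could be loaded for training along the lines of the following Python/PyTorch sketch; the directory layout and file naming here are assumptions:

```python
import os
from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms as T

class PairedUltrasoundDataset(Dataset):
    """Loads paired (ultrasound image, defect image) samples.

    Assumes a hypothetical layout: root/ultrasound/xxx.png is the
    ultrasound image and root/defect/xxx.png is the corresponding
    ground-truth defect image with the same file name.
    """
    def __init__(self, root):
        self.us_dir = os.path.join(root, "ultrasound")
        self.df_dir = os.path.join(root, "defect")
        self.names = sorted(os.listdir(self.us_dir))
        # 256x256 grayscale tensors scaled to [-1, 1] to match a Tanh output
        self.tf = T.Compose([
            T.Grayscale(), T.Resize((256, 256)), T.ToTensor(),
            T.Normalize(mean=[0.5], std=[0.5]),
        ])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, i):
        a = self.tf(Image.open(os.path.join(self.us_dir, self.names[i])))  # domain A
        b = self.tf(Image.open(os.path.join(self.df_dir, self.names[i])))  # domain B
        return a, b
```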
S2, designing a loss function
In the network model provided by the invention, four loss functions are defined: the adversarial loss, the cycle consistency loss, the individual loss and the true difference loss. The first three are calculated in the same way as in the standard CycleGan network. The four loss functions are superposed to obtain the final model loss function. Each of the four has a different expectation and constrains the network model from a different angle, but the losses are not limited to these four.
Specifically, the adversarial loss uses the discriminator of a generator-discriminator pair in the CycleGan network to judge the authenticity of the image output by the generator. Taking fig. 2(b), where the domain-A image $a$ passes through generator GA and the domain-B image $\tilde{b}$ is output, as an example, the adversarial loss $\mathcal{L}_{GAN}^{GA}$ is expressed as follows:

$$\mathcal{L}_{GAN}^{GA} = \mathbb{E}_{b\sim P_B}[\log DA(b)] + \mathbb{E}_{a\sim P_A}[\log(1 - DA(GA(a)))] \tag{1}$$

where $GA(a)$ denotes the domain-B image $\tilde{b}$ output by generator GA for a sample $a \sim P_A$ of domain A. The discriminator DA judges the true samples $b \sim P_B$ of domain B and the fake samples $\tilde{b}$, respectively. In this process, the discriminator DA is expected to discriminate accurately between true and fake samples of the same domain, so a maximization scheme is taken for the discriminator, $\max_{DA}\mathcal{L}_{GAN}^{GA}$; at the same time, the fake samples of the generator are expected to be as similar in domain to the true samples as possible, so a minimization scheme is taken for the generator, $\min_{GA}\mathcal{L}_{GAN}^{GA}$.

Similarly, in fig. 2(c) the domain-B image $b$ passes through generator GB and the domain-A image $\tilde{a}$ is output, namely:

$$\mathcal{L}_{GAN}^{GB} = \mathbb{E}_{a\sim P_A}[\log DB(a)] + \mathbb{E}_{b\sim P_B}[\log(1 - DB(GB(b)))] \tag{2}$$
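For readers who prefer code, equations (1) and (2) could be realized along the lines of the following PyTorch sketch; the binary cross-entropy criterion implements the log terms, and this is an illustrative sketch rather than the patent's reference implementation:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # realizes the log terms of eqs. (1)-(2)

def adversarial_loss_D(D, real, fake):
    """Discriminator side: push D(real) toward 1 and D(fake) toward 0."""
    pred_real = D(real)
    pred_fake = D(fake.detach())  # do not backprop into the generator here
    return (bce(pred_real, torch.ones_like(pred_real)) +
            bce(pred_fake, torch.zeros_like(pred_fake)))

def adversarial_loss_G(D, fake):
    """Generator side: minimize log(1 - D(fake)), written in the usual
    non-saturating form of pushing D(fake) toward 1."""
    pred_fake = D(fake)
    return bce(pred_fake, torch.ones_like(pred_fake))
```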
the cycle consistency loss comprises two sub items when the domain A is converted to the domain B and the domain B is converted to the domain A, and the two sub item structures simultaneously exert constraint on the network in the two-way training process. Taking the example of turning domain A to domain B and then to domain A in FIG. 2B, the cyclic consistency loss is shown in equation (3):
Figure BDA0003915792810000111
in equation (3), the domain A samples a first pass through the generator GA, and then pass through the generator GB, and the output after two conversions belongs to the domain A samples
Figure BDA0003915792810000112
Namely GB (GA (a)). Then by calculating the sample a and
Figure BDA0003915792810000113
the similarity of the image content is used for measuring the accuracy of the generator for reconstructing the image content.
Similarly, the cyclic consistency loss in the direction from domain B to domain A and then to domain B in FIG. 2 (c) is shown in equation (4):
Figure BDA0003915792810000114
and (3) accumulating the formula (3) and the formula (4) to obtain the cycle consistency loss of the CycleGan network, as shown in the formula (5):
Figure BDA0003915792810000115
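A minimal PyTorch sketch of equation (5), assuming GA (domain A to B) and GB (domain B to A) are generator modules:

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(GA, GB, a, b):
    """Eq. (5): a -> GA -> GB should recover a; b -> GB -> GA should recover b."""
    return l1(GB(GA(a)), a) + l1(GA(GB(b)), b)
```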
the above loss functions are all for ensuring that the generator can accurately convert when the different domain image is input. When the same domain image is input, in order to ensure that the generator does not perform domain conversion on it, an individual loss is set, which is shown in equation (6):
Figure BDA0003915792810000116
in formula (6), b and a are input to the generators GA and GB, respectively, and similarity evaluation is performed with their own images, thereby ensuring that the images do not change.
Because the defect content in an ultrasound image is sparse, beyond the discriminator judging whether a generated image is true or false, the different-domain same-content image paired with the input image is used as a ground-truth label, and the true difference loss compares it directly for similarity with the generated image, thereby guiding the generator to enhance its performance. The true difference loss is shown in equation (7):

$$\mathcal{L}_{aut} = \mathbb{E}_{a\sim P_A}\big[\mathrm{MSE}(GA(a),\, b)\big] + \mathbb{E}_{b\sim P_B}\big[\mathrm{MSE}(GB(b),\, a)\big] \tag{7}$$

where $b$ is the real defect image paired with ultrasound image $a$, and vice versa.
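Because the training set is paired, equation (7) is simply a supervised MSE between each generator output and its ground-truth counterpart, as in this sketch:

```python
import torch.nn as nn

mse = nn.MSELoss()

def true_difference_loss(GA, GB, a, b):
    """Eq. (7): compare generated different-domain images against the paired
    ground-truth images (b is the true defect image paired with a)."""
    return mse(GA(a), b) + mse(GB(b), a)
```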
finally, the integral loss function of the network model is obtained by accumulating the above formulas (1) to (7), as shown in formula (8):
Figure BDA0003915792810000121
wherein λ is cyc ,λ idt And λ aut Respectively, adjustable hyper-parameters, for balancing the weights of the corresponding three loss function components in the overall function.
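Combining the sketches above, equation (8) might be assembled as follows; the lambda values shown are illustrative placeholders, not values prescribed by the invention:

```python
def total_loss(GA, GB, DA, DB, a, b,
               lambda_cyc=10.0, lambda_idt=5.0, lambda_aut=10.0):
    """Generator-side objective of eq. (8); lambda values are illustrative."""
    return (adversarial_loss_G(DA, GA(a)) +
            adversarial_loss_G(DB, GB(b)) +
            lambda_cyc * cycle_consistency_loss(GA, GB, a, b) +
            lambda_idt * individual_loss(GA, GB, a, b) +
            lambda_aut * true_difference_loss(GA, GB, a, b))
```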
S3, building a semi-supervised CycleGan network and training;
the standard CycleGan network is an unsupervised neural network. Which implements a style conversion of an image by inputting non-paired images of two different domains. However, due to the particularity of the industrial ultrasound image, the industrial ultrasound image cannot be effectively reconstructed into a defect image. Therefore, the invention provides a semi-supervised CycleGan network structure and a corresponding training strategy thereof, wherein the training strategy is shown in figure 2.
As seen in fig. 2(a), the training of the CycleGan network is bidirectional: the unpaired images of domain A and domain B are input to the two different generators from both sides simultaneously, and the corresponding images conforming to the opposite domains are output simultaneously. In this process, the discriminators judge the images output by the generators to improve the imaging performance of the generators. When training is complete, the network can convert an ultrasound image (domain A) into a defect image (domain B) (the left-to-right arrow direction, shown in red in fig. 2(a)) and can also convert a defect image (domain B) into an ultrasound image (domain A) (the right-to-left arrow direction, shown in blue in fig. 2(a)). During testing and application, only the model data of generator GA is needed: inputting an ultrasound image yields the reconstructed defect image. Figs. 2(b) and 2(c) are training flow diagrams of the two specific directions, where $a$ and $b$ belong to domain A and domain B, respectively. Taking fig. 2(b) as an example, after the input image $a$ of domain A passes through the first generator GA, on the one hand the discriminator DA must judge whether the image $\tilde{b}$ generated by GA indeed conforms to the domain-B distribution; on the other hand, the output image $\tilde{b}$ must be evaluated for similarity against the image $b$ in the original domain B corresponding to image $a$, to ensure the accuracy of the reconstruction of image $a$ in domain B. Then $\tilde{b}$ is input to the second generator GB, whose output $\tilde{a}$ is the result of reconstructing $\tilde{b}$ from domain B back to domain A. Fig. 2(c) follows the same principle.
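A minimal sketch of the test-time use just described, assuming GA is the trained A-to-B generator module:

```python
import torch

def reconstruct_defect(GA, ultrasound):
    """Run a measured ultrasound image (a 1x1x256x256 tensor in [-1, 1])
    through the trained generator GA; only GA is needed at test time."""
    GA.eval()
    with torch.no_grad():
        return GA(ultrasound)  # reconstructed defect image
```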
In the semi-supervised CycleGan network, the generator adopts a Unet network structure to realize the encoding and decoding of image features, and the discriminator adopts a pixel-by-pixel scoring structure built from multilayer convolutions to distinguish image authenticity. It should be noted that embodiments of the proposed generator include not only the Unet network but also other network structures for image feature extraction, and adjusting the specific layer configuration of the Unet to particular image characteristics also falls within the scope covered by the claims. Taking the training process from domain A to domain B as an example, the generator is shown in fig. 3.
In fig. 3, the Conv2d layer is a two-dimensional convolution layer, the TransConv2d layer is a two-dimensional transposed convolution layer, and the remaining layers have their usual meanings. In the training process from domain A to domain B, the input ultrasound image not only has its features extracted downward, layer by layer, through the convolution modules on the left side to obtain high-semantic information; its low-semantic information is also passed directly across, in the channel dimension, to the up-sampling modules on the right side, ensuring the completeness of the low-semantic information.
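A compact sketch of a Unet-style generator consistent with the layer types of fig. 3; the layer counts and channel widths below are assumptions for illustration, and, as noted above, other feature-extraction backbones are equally within scope:

```python
import torch
import torch.nn as nn

def down(cin, cout):
    # Conv2d + InstanceNorm + LeakyReLU: halves the spatial size
    return nn.Sequential(nn.Conv2d(cin, cout, 4, 2, 1),
                         nn.InstanceNorm2d(cout), nn.LeakyReLU(0.2))

def up(cin, cout):
    # TransConv2d + InstanceNorm + ReLU: doubles the spatial size
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, 2, 1),
                         nn.InstanceNorm2d(cout), nn.ReLU())

class UnetGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.d1, self.d2, self.d3 = down(1, 64), down(64, 128), down(128, 256)
        self.u1 = up(256, 128)
        self.u2 = up(256, 64)  # input = u1 output concatenated with the d2 skip
        self.u3 = nn.Sequential(nn.ConvTranspose2d(128, 1, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        s1 = self.d1(x)               # 64 x 128 x 128
        s2 = self.d2(s1)              # 128 x 64 x 64
        s3 = self.d3(s2)              # 256 x 32 x 32
        x = self.u1(s3)               # 128 x 64 x 64
        x = self.u2(torch.cat([x, s2], dim=1))  # skip keeps low-semantic info
        return self.u3(torch.cat([x, s1], dim=1))
```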
The image output by the generator is input to a discriminator to verify the generator's reconstruction performance. The invention uses a multilayer convolution module that lifts only the channel dimension without changing the image size. Compared with Markovian discriminators (PatchGan) based on enlarging the receptive field, this structure scores the reconstructed image pixel by pixel, so the accuracy of image reconstruction is guaranteed at the pixel level. The network structure of the discriminator is shown in fig. 4.
In fig. 4, the size of the pixel-by-pixel scoring matrix is consistent with the input ultrasound image and only expands in the channel dimension during convolution, without compressing the feature size, thereby ensuring the accuracy of the scoring at the pixel level.
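A sketch of such a pixel-by-pixel scoring discriminator; every convolution uses stride 1 with padding so that only the channel dimension grows, and the channel widths are assumptions:

```python
import torch.nn as nn

class PixelDiscriminator(nn.Module):
    """Scores each pixel of a 256x256 image; no spatial downsampling."""
    def __init__(self):
        super().__init__()
        def block(cin, cout):
            # stride-1, padding-1, 3x3 conv: channel lift only, size preserved
            return nn.Sequential(nn.Conv2d(cin, cout, 3, 1, 1),
                                 nn.InstanceNorm2d(cout), nn.LeakyReLU(0.2))
        self.net = nn.Sequential(block(1, 64), block(64, 128), block(128, 256),
                                 nn.Conv2d(256, 1, 3, 1, 1))  # per-pixel logit

    def forward(self, x):
        return self.net(x)  # shape: N x 1 x 256 x 256 score map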
In addition, the network model provided by the invention adopts an Adam optimizer to optimize the network loss function.
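Putting the sketches together, one training step might look as follows, with separate Adam optimizers for the generators and the discriminators; the learning rate and betas are illustrative, and GA, GB, DA, DB and the loss helpers are assumed to be defined as in the earlier sketches:

```python
import itertools
import torch

opt_G = torch.optim.Adam(itertools.chain(GA.parameters(), GB.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(itertools.chain(DA.parameters(), DB.parameters()),
                         lr=2e-4, betas=(0.5, 0.999))

def train_step(a, b):
    # 1) Update generators with the combined loss of eq. (8)
    opt_G.zero_grad()
    loss_G = total_loss(GA, GB, DA, DB, a, b)
    loss_G.backward()
    opt_G.step()

    # 2) Update discriminators on real vs. detached fake images
    opt_D.zero_grad()
    loss_D = (adversarial_loss_D(DA, b, GA(a)) +
              adversarial_loss_D(DB, a, GB(b)))
    loss_D.backward()
    opt_D.step()
    return loss_G.item(), loss_D.item()
```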
For convenience of discussion, the L1 loss and MSE loss are used in equations (1) to (7) to evaluate the similarity between features and images; in practice, other loss functions for evaluating image similarity, including but not limited to the L2 loss and SSIM loss, are also covered by this patent.
In another embodiment of the present invention, an ultrasonic phased array image optimization and reconstruction system based on a semi-supervised CycleGan network is provided, and the system can be used for implementing the ultrasonic phased array image optimization and reconstruction method based on the semi-supervised CycleGan network.
The system comprises the sample module, the function module and the reconstruction module described above.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The method provided by the invention is adopted to test and verify the simulated ultrasonic image and the defect-containing image obtained by actual detection.
The resolution of the simulated image and the resolution of the actually detected defect image are both 256 × 256, wherein the reconstruction result by using the simulated ultrasound image is shown in fig. 5 and table 1:
TABLE 1 comparison of ultrasound simulation image reconstruction results
Numbering Image name X-axis/pixel Y-axis/pixel Area/pixel
1 Input ultrasound image 51 64 318
2 Reconstructed defect image 52 64 44
3 Real defect image 51 63 32
4 Input defect image 102 115 39
5 Reconstructed ultrasound image 102 114 484
6 True ultrasound image 102 115 308
As can be seen from fig. 5 and table 1, fig. 5(a), (b), (c) and numbers 1 to 3 show the performance of reconstructing a defect image from an input ultrasound image. Compared with the real defect image, the reconstructed defect image is shifted by 1 pixel on the X-axis/Y-axis; in terms of defect area, the defect occupies 318 pixels in the ultrasound image, whereas the reconstructed defect occupies only 44 pixels and the corresponding real defect 32 pixels. Meanwhile, fig. 5(d), (e), (f) and numbers 4 to 6 show the performance of reconstructing an ultrasound image from an input defect image; the defect in the reconstructed ultrasound image is shifted by only 1 pixel on the Y-axis. This comparison shows that the invention ensures high-accuracy reconstruction of defects from ultrasound images.
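The X/Y positions and areas reported in tables 1 and 2 can be measured from a binarized image; one possible measurement is sketched below, where the threshold value is an assumption:

```python
import numpy as np

def defect_metrics(img, thresh=0.5):
    """Centroid (x, y) and area, in pixels, of the bright defect region of a
    grayscale image scaled to [0, 1]; the threshold is illustrative only."""
    mask = np.asarray(img) > thresh
    if not mask.any():
        return None  # no defect region found above the threshold
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean(), int(mask.sum())
```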
Finally, the method provided by the invention is used to verify defect-containing images obtained by actual detection; the results are shown in fig. 6 and table 2.
TABLE 2 ultrasonic testing image reconstruction results
Defect numbering X-axis/pixel Y-axis/pixel Area/pixel
1-ultrasonic image 229 182 342
1-reconstruction of images 228 182 217
2-ultrasonic image 177 130 212
2-reconstruction of images 177 130 130
3-ultrasonic image 124 79 232
3-reconstruction of images 125 79 144
4-ultrasonic image 73 27 244
4-reconstruction of images 73 27 155
5-ultrasonic image 46 27 244
5-reconstructed image 46 27 154
As shown in fig. 6 and table 2, the defects numbered 1 to 5 maintain a positioning accuracy within 1 pixel after reconstruction, and, as seen from the defect areas in the images, the areas of all 5 defects are effectively reduced, further improving the characterization of defect morphology in the image.
In conclusion, the ultrasonic phased array image optimization reconstruction method and system based on the semi-supervised CycleGan network realize different-domain reconstruction of small-target images without cropping the original image. For the ultrasound image, the position information of the defect is preserved and the defect artifacts are eliminated, so the characterized defect morphology is more accurate. The method uses simulation-generated different-domain images as the training set and requires no specific labeling of the defects in each image, greatly reducing the cost of generating the training set.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (10)

1. The ultrasonic phased array image optimization reconstruction method based on the semi-supervised CycleGan network is characterized by comprising the following steps of:
S1, generating training samples which correspond to each other in the two domains of ultrasound image and reconstructed image;
S2, superposing the adversarial loss function, the cycle consistency loss function, the individual loss function and the true difference loss function to obtain the overall loss function;
and S3, building a semi-supervised CycleGan network structure based on the loss function obtained in step S2, training the semi-supervised CycleGan network structure by using the training samples generated in step S1, and, after network training, inputting an ultrasonic image actually detected by the ultrasonic phased array into the trained semi-supervised CycleGan network structure to obtain a reconstructed image of the detected defect.
2. The ultrasonic phased array image optimization reconstruction method based on the semi-supervised CycleGan network as recited in claim 1, wherein in step S1, the artificially designed defect is ultrasonically imaged in a two-dimensional plane, and an obtained ultrasonic image of the defect and an original defect image form a training sample; or the processed test block containing the accurate defect information is actually measured and correspondingly reconstructed and calculated, and a test block defect image and a corresponding ultrasonic image are respectively obtained from the processing information to form a training sample.
3. The ultrasonic phased array image optimization reconstruction method based on the semi-supervised CycleGan network as claimed in claim 1, wherein in step S2, the loss function $\mathcal{L}$ is specifically:

$$\mathcal{L} = \mathcal{L}_{GAN}^{GA} + \mathcal{L}_{GAN}^{GB} + \lambda_{cyc}\,\mathcal{L}_{cyc} + \lambda_{idt}\,\mathcal{L}_{idt} + \lambda_{aut}\,\mathcal{L}_{aut}$$

where $\lambda_{cyc}$, $\lambda_{idt}$ and $\lambda_{aut}$ are adjustable hyper-parameters, $\mathcal{L}_{GAN}^{GA}$ and $\mathcal{L}_{GAN}^{GB}$ are the adversarial loss functions, $\mathcal{L}_{cyc}$ is the cycle consistency loss function, $\mathcal{L}_{idt}$ is the individual loss function, and $\mathcal{L}_{aut}$ is the true difference loss function.
4. The ultrasonic phased array image optimization reconstruction method based on the semi-supervised CycleGan network as claimed in claim 3, wherein the adversarial loss functions $\mathcal{L}_{GAN}^{GA}$ and $\mathcal{L}_{GAN}^{GB}$ are respectively:

$$\mathcal{L}_{GAN}^{GA} = \mathbb{E}_{b\sim P_B}[\log DA(b)] + \mathbb{E}_{a\sim P_A}[\log(1 - DA(GA(a)))]$$

$$\mathcal{L}_{GAN}^{GB} = \mathbb{E}_{a\sim P_A}[\log DB(a)] + \mathbb{E}_{b\sim P_B}[\log(1 - DB(GB(b)))]$$

where $GA(a)$ denotes the domain-B image $\tilde{b}$ output by generator GA for a sample $a$ of domain A ($a \sim P_A$), $GB(b)$ denotes the domain-A image $\tilde{a}$ output by generator GB for a sample $b$ of domain B ($b \sim P_B$), $DA(b)$ denotes the classification score of discriminator DA for image $b$, $DB(a)$ denotes the classification score of discriminator DB for image $a$, $a \sim P_A$ denotes that image $a$ obeys the probability distribution of domain A, and $b \sim P_B$ denotes that image $b$ obeys the probability distribution of domain B.
5. The ultrasonic phased array image optimization reconstruction method based on the semi-supervised CycleGan network as claimed in claim 3, wherein the cycle consistency loss function $\mathcal{L}_{cyc}$ is:

$$\mathcal{L}_{cyc} = \mathbb{E}_{a\sim P_A}\big[\lVert GB(GA(a)) - a\rVert_1\big] + \mathbb{E}_{b\sim P_B}\big[\lVert GA(GB(b)) - b\rVert_1\big]$$

where $GA(a)$ denotes the domain-B image $\tilde{b}$ output by generator GA for a sample $a \sim P_A$ of domain A, $GB(b)$ denotes the domain-A image $\tilde{a}$ output by generator GB for a sample $b \sim P_B$ of domain B, and $a \sim P_A$ and $b \sim P_B$ denote that images $a$ and $b$ obey the probability distributions of domain A and domain B, respectively.
6. The ultrasonic phased array image optimization reconstruction method based on the semi-supervised CycleGan network as claimed in claim 3, wherein the individual loss function $\mathcal{L}_{idt}$ is:

$$\mathcal{L}_{idt} = \mathbb{E}_{b\sim P_B}\big[\lVert GA(b) - b\rVert_1\big] + \mathbb{E}_{a\sim P_A}\big[\lVert GB(a) - a\rVert_1\big]$$

where $GA$ and $GB$ are the two generators, and $a \sim P_A$ and $b \sim P_B$ denote that images $a$ and $b$ obey the probability distributions of domain A and domain B, respectively.
7. The ultrasonic phased array image optimization reconstruction method based on the semi-supervised CycleGan network as claimed in claim 3, wherein the true difference loss function $\mathcal{L}_{aut}$ is:

$$\mathcal{L}_{aut} = \mathbb{E}_{a\sim P_A}\big[\mathrm{MSE}(GA(a),\, b)\big] + \mathbb{E}_{b\sim P_B}\big[\mathrm{MSE}(GB(b),\, a)\big]$$

where $GA(a)$ denotes the domain-B image $\tilde{b}$ output by generator GA for a sample $a \sim P_A$ of domain A, $GB(b)$ denotes the domain-A image $\tilde{a}$ output by generator GB for a sample $b \sim P_B$ of domain B, $b$ is the real domain-B image paired with $a$ (and $a$ the real domain-A image paired with $b$), and MSE is the mean square error loss function.
8. The ultrasonic phased array image optimization and reconstruction method based on the semi-supervised CycleGan network as claimed in claim 1, wherein in the step S3 the training process of the semi-supervised CycleGan network is as follows:
unpaired ultrasonic images and defect images are input into the two different generators from both sides simultaneously, each generator outputs a corresponding image conforming to the opposite domain, and the discriminators identify the images output by the generators;
with the ultrasonic image a as the input side, in the A→B stage the image a distributed in domain A passes through generator GA, which outputs its defect image b̂ distributed in domain B; on the one hand, the similarity of the defect image b̂ to domain B is evaluated by discriminator DA through the adversarial loss function L_adv^DA; on the other hand, the true difference loss function L_truth quantifies the degree of difference between the GA-generated defect image b̂ and the real defect image b corresponding to the ultrasonic image a;
then, in the B→A stage, the domain-B defect image b̂ is input into generator GB, which outputs the corresponding domain-A ultrasonic image â, and â is compared for domain similarity with the network input image a through the cycle consistency loss function L_cyc;
with the defect image b as the input side, the B→A and A→B stages are passed through in succession, subject respectively to the constraints of the discriminator DB adversarial loss function L_adv^DB, the true difference loss function L_truth and the cycle consistency loss function L_cyc;
after the bidirectional output of a batch of images is completed, the semi-supervised CycleGan network optimizes and adjusts the network parameters according to the calculated loss function terms; when training is completed, the semi-supervised CycleGan network realizes the conversion from ultrasonic image to defect image and from defect image to ultrasonic image.
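As a concrete illustration of the bidirectional pass claim 8 describes, the sketch below writes one training step in PyTorch. Everything in it is an illustrative assumption: the module names GA, GB, DA, DB, the lambda weights, and the least-squares adversarial targets are not published in the patent, and the batch (a, b) is assumed paired so that the true difference term applies (an unpaired batch would simply skip that term).

# Minimal sketch of one semi-supervised CycleGan training step (assumptions
# as stated in the lead-in above; the patent publishes no code).
import torch
import torch.nn as nn

mse = nn.MSELoss()   # used for the adversarial and true-difference terms
l1 = nn.L1Loss()     # used for the cycle and individual terms

def train_step(GA, GB, DA, DB, a, b, opt_g, opt_d,
               lambda_cyc=10.0, lambda_ind=5.0, lambda_truth=1.0):
    # A -> B stage: GA turns the ultrasonic image a into a defect image.
    b_fake = GA(a)
    # B -> A stage: GB turns the generated defect image back into domain A.
    a_rec = GB(b_fake)
    # Mirror pass starting from the defect image b.
    a_fake = GB(b)
    b_rec = GA(a_fake)

    # ---- generator update ----
    opt_g.zero_grad()
    score_b, score_a = DA(b_fake), DB(a_fake)
    # Adversarial terms: generators are rewarded when the discriminators
    # score their outputs as real (target 1).
    loss_adv = mse(score_b, torch.ones_like(score_b)) \
             + mse(score_a, torch.ones_like(score_a))
    # Cycle consistency: each round trip must return the input image.
    loss_cyc = l1(a_rec, a) + l1(b_rec, b)
    # Individual loss (assumed identity-style, see the note after claim 6).
    loss_ind = l1(GA(b), b) + l1(GB(a), a)
    # True difference: generated images versus their real paired counterparts.
    loss_truth = mse(b_fake, b) + mse(a_fake, a)
    loss_g = loss_adv + lambda_cyc * loss_cyc \
           + lambda_ind * loss_ind + lambda_truth * loss_truth
    loss_g.backward()
    opt_g.step()

    # ---- discriminator update (generated images detached) ----
    opt_d.zero_grad()
    rb, fb = DA(b), DA(b_fake.detach())
    ra, fa = DB(a), DB(a_fake.detach())
    loss_d = mse(rb, torch.ones_like(rb)) + mse(fb, torch.zeros_like(fb)) \
           + mse(ra, torch.ones_like(ra)) + mse(fa, torch.zeros_like(fa))
    loss_d.backward()
    opt_d.step()
    return loss_g.item(), loss_d.item()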
9. The ultrasonic phased array image optimization reconstruction method based on the semi-supervised CycleGan network is characterized in that the semi-supervised CycleGan network comprises a generator and a discriminator; the generator comprises a Conv2d layer, a LeakyRelu layer, an InstanceNorm layer, a Relu layer, a TransConv2d layer and a Tanh layer; the discriminator adopts a pixel-by-pixel scoring structure built from multiple convolutional layers, which comprises a Conv2d layer, a LeakyRelu layer and an InstanceNorm layer; the Conv2d layer is a two-dimensional convolution layer, and the TransConv2d layer is a two-dimensional transposed convolution layer.
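A minimal PyTorch sketch of networks built only from the layer types claim 9 names follows; channel counts, kernel sizes and depth are assumptions, since the claim fixes only the layer types. The final Conv2d of the discriminator yields a one-channel score map, which is the pixel-by-pixel scoring the claim refers to.

# Illustrative sketch only: depths and hyperparameters are assumptions.
import torch.nn as nn

def generator(in_ch=1, out_ch=1, base=64):
    # Conv2d + LeakyRelu/InstanceNorm/Relu downsampling, TransConv2d
    # (ConvTranspose2d) upsampling, Tanh output, per claim 9.
    return nn.Sequential(
        nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
        nn.LeakyReLU(0.2),
        nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
        nn.InstanceNorm2d(base * 2),
        nn.ReLU(),
        nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
        nn.InstanceNorm2d(base),
        nn.ReLU(),
        nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1),
        nn.Tanh(),  # map outputs to [-1, 1]
    )

def discriminator(in_ch=1, base=64):
    # Pixel-by-pixel scorer: stacked Conv2d / LeakyRelu / InstanceNorm
    # layers ending in a one-channel per-patch score map.
    return nn.Sequential(
        nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
        nn.LeakyReLU(0.2),
        nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
        nn.InstanceNorm2d(base * 2),
        nn.LeakyReLU(0.2),
        nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),
    )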
10. An ultrasonic phased array image optimization and reconstruction system based on a semi-supervised CycleGan network, characterized by comprising:
a sample module, which generates training samples in one-to-one correspondence across the two domains of ultrasonic images and reconstructed images;
a function module, which superposes the adversarial loss function, the cycle consistency loss function, the individual loss function and the true difference loss function to obtain the overall loss function;
and a reconstruction module, which constructs a semi-supervised CycleGan network structure based on the loss function obtained by the function module, trains it with the training samples generated by the sample module, and uses the trained semi-supervised CycleGan network structure to optimize and reconstruct ultrasonic phased array images.
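Reading claims 5 through 7 together with the function module, the superposed objective has the shape below; the lambda weighting coefficients are an assumption, as the published text does not give them:

\mathcal{L} = \mathcal{L}_{adv} + \lambda_{cyc}\,\mathcal{L}_{cyc} + \lambda_{ind}\,\mathcal{L}_{ind} + \lambda_{truth}\,\mathcal{L}_{truth}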
CN202211337748.1A 2022-10-28 2022-10-28 Ultrasonic phased array image optimization reconstruction method and system based on semi-supervised CycleGan network Pending CN115601572A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211337748.1A CN115601572A (en) 2022-10-28 2022-10-28 Ultrasonic phased array image optimization reconstruction method and system based on semi-supervised CycleGan network

Publications (1)

Publication Number Publication Date
CN115601572A true CN115601572A (en) 2023-01-13

Family

ID=84851620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211337748.1A Pending CN115601572A (en) 2022-10-28 2022-10-28 Ultrasonic phased array image optimization reconstruction method and system based on semi-supervised CycleGan network

Country Status (1)

Country Link
CN (1) CN115601572A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115830462A (en) * 2023-02-24 2023-03-21 中国人民解放军国防科技大学 SAR image reconstruction method and device based on cycle consistency countermeasure network
CN116563169A (en) * 2023-07-07 2023-08-08 成都理工大学 Ground penetrating radar image abnormal region enhancement method based on hybrid supervised learning
CN116563169B (en) * 2023-07-07 2023-09-05 成都理工大学 Ground penetrating radar image abnormal region enhancement method based on hybrid supervised learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination