CN114549681A - Image generation method and device, electronic equipment and storage medium


Info

Publication number
CN114549681A
Authority
CN
China
Legal status
Pending
Application number
CN202210175944.7A
Other languages
Chinese (zh)
Inventor
李睿
魏海宁
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
2022-02-25
Filing date
2022-02-25
Publication date
2022-05-27
Application filed by Tsinghua University

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/005 Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating


Abstract

The invention belongs to the technical field of magnetic resonance data processing, and in particular relates to an image generation method and device, an electronic device and a storage medium. The image generation method comprises: acquiring undersampled K-space data of a plurality of different imaging sequences and fully sampled K-space data of one imaging sequence; preprocessing each set of undersampled K-space data and the fully sampled K-space data to obtain target undersampled K-space data corresponding to each set of undersampled K-space data and target fully sampled K-space data corresponding to the fully sampled K-space data; and inputting the data layers at corresponding positions in the target undersampled K-space data and the target fully sampled K-space data into a dual-domain neural network model to obtain a layer image for each set of undersampled K-space data. Combining the single-contrast fully sampled K-space data with the multi-contrast undersampled K-space data helps recover lost signals and achieves a better reconstruction effect.

Description

Image generation method and device, electronic equipment and storage medium
Technical Field
The invention belongs to the technical field of magnetic resonance data processing, and particularly relates to an image generation method and device, electronic equipment and a storage medium.
Background
Magnetic resonance imaging (MRI) is a non-invasive imaging method in common clinical use that integrates physics, chemistry and other technologies and can provide both anatomical structure and pathological information. With different parameter settings, MRI sequences can generate multi-contrast images that reflect different kinds of physiological information about human tissue, enabling more comprehensive clinical analysis and decision-making. At present, multi-contrast images are commonly acquired by scanning the sequences one by one and reconstructing the K-space data of each sequence separately. However, the required time increases in proportion to the number of sequences: long acquisitions reduce diagnostic efficiency and patient comfort, and some patients, such as the elderly and children, cannot tolerate long scan times at all. Image quality during long scans is also inevitably affected by respiration, blood flow, heartbeat and patient motion. Reducing the scan time while maintaining adequate resolution and signal-to-noise ratio is therefore a key challenge for multi-contrast magnetic resonance imaging. Partial acquisition of K-space is one of the main ways to shorten the scan time; however, images reconstructed directly from undersampled data have a low signal-to-noise ratio and severe artifacts, so reconstruction algorithms for undersampled K-space data are an active research topic.
With the continuous improvement of traditional algorithms and the development of artificial intelligence, reconstruction algorithms based on deep learning have been proposed. Most of these methods, however, reconstruct single-contrast images, and the signals lost during undersampling degrade the reconstruction, which is especially evident at high undersampling factors.
Disclosure of Invention
In view of the above technical problems, the present invention provides an image generation method, an image generation device, an electronic device, and a storage medium. In this application, the data layers at corresponding positions in K-space data of different contrasts are input into a dual-domain neural network for data completion, and the dual-domain neural network generates a layer image for each input from the completed data layers. Combining the fully sampled K-space data of a single contrast with the undersampled K-space data of multiple contrasts helps recover lost signals and achieves a better reconstruction effect.
In order to solve the above technical problem, the technical solution adopted by the present invention includes four aspects.
In a first aspect, an image generation method is provided, including: acquiring undersampled K-space data of a plurality of different imaging sequences and fully sampled K-space data of one imaging sequence; preprocessing each set of undersampled K-space data and the fully sampled K-space data to obtain target undersampled K-space data corresponding to each set of undersampled K-space data and target fully sampled K-space data corresponding to the fully sampled K-space data; and inputting the data layers at corresponding positions in each set of target undersampled K-space data and the target fully sampled K-space data into a dual-domain neural network model as input data to obtain a layer image corresponding to each set of undersampled K-space data.
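Purely as an illustration of the first aspect, the flow can be sketched in Python as follows; all function names are hypothetical placeholders, the normalization rule is an assumption, and the registration and layer-removal operations described later are omitted for brevity.

```python
import numpy as np

def normalize(k):
    """Assumed preprocessing: scale so the peak K-space magnitude is 1."""
    return k / (np.abs(k).max() + 1e-12)

def generate_layer_images(undersampled_list, fully_sampled, dual_domain_model):
    """Step S1 inputs -> Step S2 preprocessing -> Step S3 slice-wise inference."""
    targets = [normalize(k) for k in undersampled_list]   # target undersampled K-space data
    reference = normalize(fully_sampled)                  # target fully sampled K-space data
    images = []
    for z in range(reference.shape[0]):                   # data layers at the same position
        layers = [t[z] for t in targets] + [reference[z]]
        images.append(dual_domain_model(layers))          # layer images for this position
    return images

# Stand-in "model" that simply inverse-transforms each undersampled layer:
if __name__ == "__main__":
    stub = lambda layers: [np.abs(np.fft.ifft2(l)) for l in layers[:-1]]
    under = [np.random.randn(4, 64, 64) + 1j * np.random.randn(4, 64, 64) for _ in range(3)]
    full = np.random.randn(4, 64, 64) + 1j * np.random.randn(4, 64, 64)
    out = generate_layer_images(under, full, stub)
```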
In some embodiments, the dual-domain neural network model comprises a data completion domain model and an image enhancement domain model, and inputting the data layers at corresponding positions in each set of target undersampled K-space data and the target fully sampled K-space data into the dual-domain neural network model as input data to obtain a layer image corresponding to each set of undersampled K-space data includes: inputting the data layers at corresponding positions in each set of target undersampled K-space data and the target fully sampled K-space data into the data completion domain model as input data to obtain a completed data layer corresponding to each set of target undersampled K-space data; and determining the layer image corresponding to each set of undersampled K-space data based on each completed data layer and the image enhancement domain model.
In some embodiments, determining the layer image corresponding to each set of undersampled K-space data based on each completed data layer and the image enhancement domain model includes: performing an inverse Fourier transform on each completed data layer to obtain corresponding image-domain data; and inputting each set of image-domain data into the image enhancement domain model to obtain a de-noised, de-artifacted layer image corresponding to each set of undersampled K-space data.
In some embodiments, preprocessing each set of undersampled K-space data and the fully sampled K-space data to obtain the target undersampled K-space data corresponding to each set of undersampled K-space data and the target fully sampled K-space data corresponding to the fully sampled K-space data includes: first normalizing each set of undersampled K-space data and the fully sampled K-space data; and then performing a registration operation on each set of normalized undersampled K-space data and the fully sampled K-space data so that the spatial positions of the undersampled and fully sampled K-space data from different imaging sequences correspond to each other, thereby obtaining the target undersampled K-space data corresponding to each set of undersampled K-space data and the target fully sampled K-space data corresponding to the fully sampled K-space data.
In some embodiments, the preprocessing further includes: after the registration operation, removing, from each set of undersampled K-space data, the data layers left incomplete by the registration operation, so that the data layers in each set of target undersampled K-space data meet the input requirements of the data completion domain model.
In some embodiments, in the data completion domain model, the method further comprises: acquiring fusion features of the input data; and completing each data layer according to the fusion features to obtain the completed data layer corresponding to each set of target undersampled K-space data.
In some embodiments, completing each data layer according to the fusion features to obtain the completed data layer corresponding to each set of target undersampled K-space data includes: generating a standard data layer corresponding to each data layer according to the position corresponding to the input data; and replacing the corresponding features in each standard data layer with the fusion features to obtain the completed data layer corresponding to each set of target undersampled K-space data.
In a second aspect, the present application provides an image generation apparatus comprising: a first acquisition module for acquiring undersampled K-space data of a plurality of different imaging sequences and fully sampled K-space data of one imaging sequence; a first execution module for preprocessing each set of undersampled K-space data and the fully sampled K-space data to obtain target undersampled K-space data corresponding to each set of undersampled K-space data and target fully sampled K-space data corresponding to the fully sampled K-space data; and a second execution module for inputting the data layers at corresponding positions in each set of target undersampled K-space data and the target fully sampled K-space data into a dual-domain neural network model as input data to obtain the layer image corresponding to each set of undersampled K-space data.
A third aspect provides an electronic device comprising a memory storing a computer program and a processor that implements the steps of the image generation method of the first aspect when executing the computer program.
A fourth aspect provides a storage medium storing a computer program executable by one or more processors, the computer program being operable to implement the steps of the image generation method of the first aspect.
The beneficial effects of the invention are as follows: in this application, the data layers at corresponding positions in K-space data of different contrasts are input into a dual-domain neural network for data completion, and the network generates a layer image for each input from the completed data layers. Combining the fully sampled K-space data of a single contrast with the undersampled K-space data of multiple contrasts helps recover lost signals and achieves a better reconstruction effect.
Drawings
The scope of the present disclosure may be better understood by reading the following detailed description of exemplary embodiments in conjunction with the accompanying drawings. In the drawings:
fig. 1 is an overall flowchart of an image generation method according to an embodiment of the present application;
fig. 2 is a block diagram of an image generating apparatus according to an embodiment of the present disclosure;
fig. 3 is a block diagram illustrating a dual-domain neural network model according to an embodiment of the present disclosure.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail below with reference to the accompanying drawings. The described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Where the terms "first", "second" and "third" appear in this application, they merely distinguish similar objects and do not imply a specific ordering of those objects. It should be understood that "first", "second" and "third" may be interchanged where permissible, so that the embodiments of the application described herein can be implemented in an order other than that shown or described.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Example 1:
To solve the problems described in the background art, as shown in fig. 1, the present application provides an image generation method applied to an electronic device, where the electronic device may be a server, a mobile terminal, a computer, a cloud platform, or the like. The functions of the method provided by the embodiments of the present application can be realized by a processor of the electronic device calling program code, and the program code can be stored in a computer storage medium. The image generation method comprises the following steps:
step S1: undersampled K-space data for a plurality of different imaging sequences and fully sampled K-space data for one imaging sequence are acquired.
In magnetic resonance imaging, different tissue structures in the body exhibit different signal intensities, and physicians distinguish tissues in the image mainly by their contrast with surrounding regions. Therefore, to let the doctor better understand the condition inside the patient from the magnetic resonance images, images at different contrasts are needed, which requires acquiring K-space data for multiple sequences from the patient. To shorten the acquisition time and relieve the patient's discomfort, a common approach is to undersample the data and then reconstruct a complete image from the undersampled data. However, conventional accelerated magnetic resonance reconstruction mainly targets a single contrast and reduces the acquisition time by partially acquiring K-space data, and simply undersampling the K-space data causes image aliasing and a low signal-to-noise ratio. Therefore, this application fully samples the K-space data of one contrast to obtain fully sampled K-space data of one imaging sequence, and undersamples the K-space data of the other contrasts to obtain undersampled K-space data of a plurality of different imaging sequences.
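For illustration, undersampled K-space data can be simulated from a fully sampled slice by retaining only part of the phase-encoding lines, as in the sketch below; the Cartesian random mask, acceleration factor and centre fraction are assumptions, since the application does not prescribe a particular sampling pattern.

```python
import numpy as np

def undersample_kspace(full_kspace, acceleration=4, center_fraction=0.08, seed=0):
    """Retain only a subset of phase-encoding lines (Cartesian mask assumed).
    full_kspace: complex array of shape (H, W)."""
    rng = np.random.default_rng(seed)
    H, W = full_kspace.shape
    mask = np.zeros(H, dtype=bool)
    # Always keep the low-frequency centre lines, which carry most of the contrast.
    n_center = int(H * center_fraction)
    c = H // 2
    mask[c - n_center // 2 : c + n_center // 2] = True
    # Randomly keep further lines until the target acceleration is reached.
    n_keep = max(H // acceleration - n_center, 0)
    remaining = np.flatnonzero(~mask)
    mask[rng.choice(remaining, size=n_keep, replace=False)] = True
    return full_kspace * mask[:, None], mask
```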
Step S2: preprocessing each undersampled K space data and the fully sampled K space data respectively to obtain target undersampled K space data corresponding to each undersampled K space data and target fully sampled K space data corresponding to the fully sampled K space data.
Through processing the fully sampled K space data and each downsampled K space, the obtained target fully sampled K space data and each downsampled K space data can meet the input requirement of the neural network model in the application.
In some embodiments, step S2, "preprocessing each set of undersampled K-space data and the fully sampled K-space data to obtain target undersampled K-space data corresponding to each set of undersampled K-space data and target fully sampled K-space data corresponding to the fully sampled K-space data", includes:
Step S21: first, normalizing each set of undersampled K-space data and the fully sampled K-space data.
Step S22: then, performing a registration operation on each set of normalized undersampled K-space data and the fully sampled K-space data so that the spatial positions of the undersampled and fully sampled K-space data from different imaging sequences correspond to each other, thereby obtaining the target undersampled K-space data corresponding to each set of undersampled K-space data and the target fully sampled K-space data corresponding to the fully sampled K-space data.
The preprocessing mainly comprises normalization and registration. The normalization fixes the range of extreme values of each set of K-space data, and the registration makes the spatial positions of the respective K-space data correspond. In actual magnetic resonance acquisition, the acquisition position is adjusted to better cover tissues in different regions; because the acquisition regions differ, the positions of the acquired K-space data also differ, so the sequence data need to be registered to bring their spatial positions into correspondence, which facilitates generating images from the K-space data.
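A minimal sketch of these two preprocessing operations is given below, assuming magnitude normalization and a simple translation-only registration estimated from magnitude images; the actual registration may also involve scaling, as noted in the next paragraph, and is not limited to this form.

```python
import numpy as np

def normalize_kspace(k):
    """Fix the value range: scale the volume so its peak magnitude is 1 (assumed rule)."""
    return k / (np.abs(k).max() + 1e-12)

def estimate_shift(ref_img, mov_img):
    """Translation-only registration sketch: the integer shift is taken from the
    peak of the FFT-based cross-correlation between two magnitude images."""
    corr = np.fft.ifft2(np.fft.fft2(ref_img) * np.conj(np.fft.fft2(mov_img)))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Map peak indices into signed shifts (positive or negative translations).
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
```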
Of course, the registration operation applies scaling and displacement transformations to the data, and some data layers of the undersampled data may become incomplete during these transformations. Therefore, in some embodiments, after step S22, the method further includes:
step S23: after the registration operation, removing the data layer which is incomplete due to the registration operation in each piece of the under-sampled K-space data, so that the data layer in each piece of the target under-sampled K-space data meets the input requirement of the data complement domain model.
The undersampled K-space data of the data layer with the defects removed can better meet the input requirement of a following neural network model.
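The removal step could, for example, look like the sketch below, where a data layer is treated as incomplete when too many of its samples are exactly zero after the transform; this criterion and its threshold are assumptions made only for illustration.

```python
import numpy as np

def drop_incomplete_layers(volume, empty_fraction=0.05):
    """Keep only data layers whose fraction of exactly-zero samples stays below
    the threshold; layers damaged by the registration transform are dropped."""
    keep = [np.mean(np.abs(volume[z]) == 0) < empty_fraction
            for z in range(volume.shape[0])]
    return volume[np.array(keep)]
```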
Step S3: and inputting a data layer corresponding to the position in each target under-sampled K space data and the target full-sampled K space data into a two-domain neural network model as input data to obtain a layer image corresponding to each under-sampled K space data.
The double-domain neural network model can process a plurality of K space data with different contrasts at the same time, and can effectively shorten the process of reconstructing undersampled K space data. Although a plurality of pieces of K-space data with different contrasts can be processed simultaneously, in order to reduce the performance requirements on the processing equipment, the K-space data with different contrasts are processed according to the data layers in the K-space data in the present application.
The dual-domain neural network in the application comprises two parts: one part is a data completion domain model, and the other part is an image enhancement domain. The data complementing domain is mainly used for complementing each layer of data layer of the undersampled K space data. And the image enhancement domain is mainly used for converting the data layer after completion into a layer image.
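The two-part structure can be sketched as follows; the layer counts, channel sizes and the two-channel real/imaginary encoding of complex K-space are illustrative assumptions, as this passage does not fix a particular architecture.

```python
import torch
import torch.nn as nn

class DualDomainNet(nn.Module):
    """Skeleton only: layer counts, channel sizes and the 2-channel
    real/imaginary K-space encoding are assumptions for illustration."""
    def __init__(self, n_contrasts=3):
        super().__init__()
        in_ch = 2 * (n_contrasts + 1)          # undersampled contrasts + fully sampled reference
        out_ch = 2 * n_contrasts
        self.completion = nn.Sequential(        # data completion domain (K-space)
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1))
        self.enhancement = nn.Sequential(       # image enhancement domain (image space)
            nn.Conv2d(n_contrasts, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_contrasts, 3, padding=1))

    def forward(self, k_layers):                # k_layers: (B, 2*(n+1), H, W)
        completed = self.completion(k_layers)   # completed data layers
        real, imag = completed[:, 0::2], completed[:, 1::2]
        img = torch.abs(torch.fft.ifft2(torch.complex(real, imag)))  # inverse Fourier transform
        return self.enhancement(img)            # de-noised, de-artifacted layer images
```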
In some embodiments, step S3, "inputting the data layers at corresponding positions in each set of target undersampled K-space data and the target fully sampled K-space data into the dual-domain neural network model as input data to obtain the layer image corresponding to each set of undersampled K-space data", includes:
Step S31: inputting the data layers at corresponding positions in each set of target undersampled K-space data and the target fully sampled K-space data into the data completion domain model as input data to obtain a completed data layer corresponding to each set of target undersampled K-space data.
Step S32: determining the layer image corresponding to each set of undersampled K-space data based on each completed data layer and the image enhancement domain model.
To ensure the accuracy of the final reconstructed image, the dual-domain neural network divides the reconstruction into two steps: the first step obtains the completed data layer corresponding to the target undersampled K-space data, and the second step obtains the layer image corresponding to the undersampled K-space data from the completed data layer.
In some embodiments, step S32, determining the layer image corresponding to each set of undersampled K-space data based on each completed data layer and the image enhancement domain model, includes:
Step S321: performing an inverse Fourier transform on each completed data layer to obtain corresponding image-domain data.
Step S322: inputting each set of image-domain data into the image enhancement domain model to obtain a de-noised, de-artifacted layer image corresponding to each set of undersampled K-space data.
First, the inverse Fourier transform converts each completed data layer into image-domain data; the image-domain data are then processed by the image enhancement domain model, which removes artifacts and noise and yields layer images that meet the requirements. A layer image is one layer of the final image generated for each set of K-space data, corresponding to a data layer of that K-space data.
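For reference, the inverse Fourier transform of a single completed data layer might be implemented as below; the centred-K-space (fftshift) convention and the use of a magnitude image are assumptions.

```python
import numpy as np

def kspace_to_image(k_layer):
    """Inverse Fourier transform of one completed data layer into image-domain
    data; the centred-K-space (fftshift) convention is an assumption."""
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k_layer)))
    return np.abs(img)   # magnitude image handed to the image enhancement domain
```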
In some embodiments, for the data completion domain model, the image generation method further comprises:
Step S311: acquiring the fusion features of the input data.
Step S312: completing each data layer according to the fusion features to obtain the completed data layer corresponding to each set of target undersampled K-space data.
In this application, the fully sampled K-space data are selected and the data layers input to the neural network model are required to come from the same position, so that the fusion features, which are computed by a convolution operation, can be obtained more accurately. The fusion features include the features of each undersampled K-space data layer and the features of the fully sampled K-space data layer, which allows details and textures in the K-space data layers to be shared, so that the resulting completed data layer represents the actual condition inside the patient more accurately. The registration operation described above is intended to better determine the same position of the data layers in each set of K-space data.
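A sketch of how such fusion features could be computed is given below: the data layers of all contrasts at the same position are concatenated along the channel axis and passed through a small convolutional block; the channel counts and kernel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Illustrative only: the real kernel sizes and channel counts are not given in the
# application. Data layers from all contrasts at the same position are concatenated
# along the channel axis and convolved into shared fusion features.
fusion_conv = nn.Sequential(
    nn.Conv2d(8, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1))

layers = torch.randn(1, 8, 128, 128)      # 4 sequences x (real, imag) channels
fusion_features = fusion_conv(layers)     # shared details/textures across contrasts
```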
In some embodiments, step S312, "completing each data layer according to the fusion features to obtain the completed data layer corresponding to each set of target undersampled K-space data", includes:
Step S3121: generating a standard data layer corresponding to each data layer according to the position corresponding to the input data.
Step S3122: replacing the corresponding features in each standard data layer with the fusion features to obtain the completed data layer corresponding to each set of target undersampled K-space data.
In the completion process, in order to retain the characteristics of each contrast, that is, so that the output completed data layer still conforms to its original contrast, a corresponding standard data layer is generated in this application according to the position corresponding to the input data and each data layer. The data in a standard data layer are normal human data at the same position as the input data. There are multiple standard data layers, each corresponding to one input data layer; the correspondence is mainly that they belong to the same position and the same contrast, and only in this way can the output completed data layers be guaranteed to correspond to their K-space data.
After the standard data layers are created, the corresponding features in each standard data layer are replaced with the fusion features, so that each standard data layer changes from an automatically generated data layer into one that reflects the actually acquired data, and serves as the completed data layer corresponding to the undersampled K-space data.
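This replacement resembles a data-consistency operation; in the sketch below, samples present in the acquired layer overwrite the generated standard layer while the remaining positions keep the generated values. Treating the "corresponding features" as the acquired K-space samples selected by a mask is an assumption made only for this illustration.

```python
import numpy as np

def complete_layer(standard_layer, acquired_layer, mask):
    """Where mask is True (sample acquired), keep the acquired value; elsewhere
    keep the generated standard-layer value, yielding the completed data layer."""
    return np.where(mask, acquired_layer, standard_layer)
```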
Therefore, the image generation method provided by this application improves the recovery of lost signals by combining single-contrast fully sampled data with multiple sets of undersampled data, so that the undersampled K-space data are reconstructed better. In the reconstruction process, multiple data layers with different contrasts but the same position share details and textures during completion, which improves the final reconstruction. The method has low computational complexity in the feed-forward pass and offers better performance and generalization than traditional algorithms and existing deep learning algorithms. Therefore, this application can reconstruct better magnetic resonance images at high undersampling factors.
In some embodiments, when the dual-domain neural network model in this application is trained, loss functions are obtained at multiple stages and the model is updated according to these loss functions, making the final output of the dual-domain neural network model more accurate.
In some embodiments, the loss functions include a first loss function in the data completion domain corresponding to the standard data layer; the first loss function is mainly used to judge whether the generated standard data layer is close to the ground truth.
They also include a second loss function, which is mainly used to judge whether the final completed data layer is correct.
The image enhancement domain of this application includes a third loss function and a fourth loss function. The third loss function corresponds to the image-domain data and is used to determine whether the image-domain data are correct; the fourth loss function corresponds to the layer image and is used to determine whether the generated layer image meets the standard.
When the dual-domain neural network model is trained, each corresponding process is updated according to its own loss function, and the whole dual-domain neural network model is updated through the sum of the four loss functions, thereby improving the accuracy of the final output of the dual-domain neural network and the image quality.
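A minimal sketch of the four-term training objective is shown below; using a mean-squared-error term for every stage and equal weights is an assumption, since the application only states that each process is updated with its own loss and the whole network with the sum of the four.

```python
import torch.nn as nn

mse = nn.MSELoss()

def total_loss(std_layer, std_target,        # data completion domain
               completed, completed_target,
               img_domain, img_target,       # image enhancement domain
               layer_img, layer_target):
    """Sketch of the four-term objective; MSE terms and equal weights are assumptions."""
    l1 = mse(std_layer, std_target)          # standard data layer vs. ground truth
    l2 = mse(completed, completed_target)    # completed data layer
    l3 = mse(img_domain, img_target)         # image-domain data
    l4 = mse(layer_img, layer_target)        # final layer image
    return l1 + l2 + l3 + l4
```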
Example 2:
based on the foregoing embodiments, the present application provides an image generating apparatus, where each module included in the apparatus and each unit included in each module may be implemented by a processor in a computer device; of course, the implementation can also be realized through a specific logic circuit; in the implementation process, the processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
As shown in fig. 2, a second aspect provides an image generating apparatus comprising: the system comprises a first acquisition module 1, a first execution module 2 and a second execution module 3.
The first acquisition module 1 is configured to acquire undersampled K-space data of a plurality of different imaging sequences and fully sampled K-space data of one imaging sequence. The first execution module 2 is configured to respectively preprocess each of the under-sampled K-space data and the fully sampled K-space data to obtain target under-sampled K-space data corresponding to each of the under-sampled K-space data and target fully sampled K-space data corresponding to the fully sampled K-space data. The second execution module 3 is configured to input, as input data, a data layer corresponding to a position in each of the target under-sampled K-space data and the target full-sampled K-space data into the two-domain neural network model, so as to obtain a layer image corresponding to each of the under-sampled K-space data.
In some embodiments, the first execution module comprises a third execution module and a fourth execution module. The third execution module is used to normalize each set of undersampled K-space data and the fully sampled K-space data. The fourth execution module is used to perform a registration operation on each set of normalized undersampled K-space data and the fully sampled K-space data so that the spatial positions of the undersampled and fully sampled K-space data from different imaging sequences correspond to each other, thereby obtaining the target undersampled K-space data corresponding to each set of undersampled K-space data and the target fully sampled K-space data corresponding to the fully sampled K-space data.
In some embodiments, the image generation apparatus further comprises a fifth execution module. The fifth execution module is used to remove, after the registration operation, the data layers left incomplete by the registration operation from each set of undersampled K-space data, so that the data layers in each set of target undersampled K-space data meet the input requirements of the data completion domain model.
In some embodiments, the second execution module 3 comprises a sixth execution module and a first determination module. The sixth execution module is used to input the data layers at corresponding positions in each set of target undersampled K-space data and the target fully sampled K-space data into the data completion domain model as input data to obtain a completed data layer corresponding to each set of target undersampled K-space data. The first determination module is configured to determine, based on each completed data layer and the image enhancement domain model, the layer image corresponding to each set of undersampled K-space data.
In some embodiments, the first determination module comprises a first conversion module and a seventh execution module. The first conversion module is used to perform an inverse Fourier transform on each completed data layer to obtain corresponding image-domain data. The seventh execution module is used to input each set of image-domain data into the image enhancement domain model to obtain a de-noised, de-artifacted layer image corresponding to each set of undersampled K-space data.
In some embodiments, the image generation apparatus further comprises a second acquisition module and an eighth execution module. The second acquisition module is used to acquire the fusion features of the input data. The eighth execution module is used to complete each data layer according to the fusion features to obtain the completed data layer corresponding to each set of target undersampled K-space data.
In some embodiments, the eighth execution module comprises a first generation module and a ninth execution module. The first generation module is used to generate a standard data layer corresponding to each data layer according to the position corresponding to the input data. The ninth execution module is used to replace the corresponding features in each standard data layer with the fusion features to obtain the completed data layer corresponding to each set of target undersampled K-space data.
The modules in the image generation apparatus described above may be wholly or partially implemented by software, hardware, or a combination thereof. The modules can be embedded, in hardware form, in or independently of a processor of the device, or stored in software form in a memory of the device, so that the processor can invoke and execute the operations corresponding to the modules. It should be noted that, in the embodiments of the present application, the division into modules is schematic and is only a logical functional division; other divisions are possible in actual implementation.
Example 3:
A third aspect provides an electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the image generation method described above when executing the computer program.
Example 4:
A fourth aspect provides a storage medium storing a computer program executable by one or more processors, the computer program being operable to implement the steps of the image generation method of the first aspect.
It will be understood by those skilled in the art that all or part of the processes of the methods in the embodiments described above can be implemented by a computer program instructing the relevant hardware. The program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include Random Access Memory (RAM) or an external cache. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM).
Example 5:
as shown in fig. 3, the present application also discloses the composition of a two-domain neural network model. The dual-domain neural network includes: the device comprises an input module, a K space generator, a data consistency layer, a first discriminator, an inverse Fourier transformer, an image domain generator, a second discriminator and an output module.
The input module is used for inputting data. The K space generator is used for generating a corresponding standard data layer according to input data. The data consistency layer is used for acquiring the fusion characteristics of the input data and integrating the fusion characteristics into the standard data layer. The first discriminator is used for determining whether the standard data layer is close to the true value and determining whether the completion data layer is correct. The inverse Fourier transformer is used for performing inverse Fourier transform on the supplemented data layer and converting the supplemented data layer into image domain data. The image generation domain is used for denoising and artifact generating of image domain data to generate a final layer image. The second discriminator is used for determining whether the image domain data is correct and determining whether the generation layer image meets the standard.
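The wiring of fig. 3 can be sketched as follows; each sub-network is reduced to one or two layers purely for illustration, the data-consistency rule is assumed to keep the acquired samples, and each discriminator outputs a single real/fake score. None of these architectural details are fixed by the application.

```python
import torch
import torch.nn as nn

class KSpaceGenerator(nn.Module):
    def __init__(self, ch=8):
        super().__init__()
        self.net = nn.Conv2d(ch, ch, 3, padding=1)
    def forward(self, x):           # generates the standard data layers
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, 1))
    def forward(self, x):           # real/fake score
        return self.net(x)

def forward_pass(k_input, mask, kgen, img_gen, d1, d2):
    standard = kgen(k_input)                                     # K-space generator
    completed = torch.where(mask, k_input, standard)             # data consistency layer (assumed rule)
    score_k = d1(completed)                                      # first discriminator
    real, imag = completed[:, 0::2], completed[:, 1::2]
    img = torch.abs(torch.fft.ifft2(torch.complex(real, imag)))  # inverse Fourier transformer
    enhanced = img_gen(img)                                      # image domain generator
    score_img = d2(enhanced)                                     # second discriminator
    return enhanced, score_k, score_img
```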
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or units may be electrical, mechanical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may be separately regarded as one unit, or two or more units may be integrated into one unit; the integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the method embodiments above can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, performs the steps of the method embodiments; and the aforementioned storage medium includes various media that can store program code, such as a removable memory device, a Read-Only Memory (ROM), a magnetic disk, or an optical disk.
Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a controller to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image generation method, comprising:
acquiring undersampled K-space data of a plurality of different imaging sequences and fully sampled K-space data of one imaging sequence;
preprocessing each set of undersampled K-space data and the fully sampled K-space data respectively to obtain target undersampled K-space data corresponding to each set of undersampled K-space data and target fully sampled K-space data corresponding to the fully sampled K-space data; and
inputting the data layers at corresponding positions in each set of target undersampled K-space data and the target fully sampled K-space data into a dual-domain neural network model as input data to obtain a layer image corresponding to each set of undersampled K-space data.
2. The image generation method according to claim 1, wherein the dual-domain neural network model comprises a data completion domain model and an image enhancement domain model, and inputting the data layers at corresponding positions in each set of target undersampled K-space data and the target fully sampled K-space data into the dual-domain neural network model as input data to obtain a layer image corresponding to each set of undersampled K-space data includes:
inputting the data layers at corresponding positions in each set of target undersampled K-space data and the target fully sampled K-space data into the data completion domain model as input data to obtain a completed data layer corresponding to each set of target undersampled K-space data; and
determining the layer image corresponding to each set of undersampled K-space data based on each completed data layer and the image enhancement domain model.
3. The image generation method according to claim 2, wherein determining the layer image corresponding to each set of undersampled K-space data based on each completed data layer and the image enhancement domain model includes:
performing an inverse Fourier transform on each completed data layer to obtain corresponding image-domain data; and
inputting each set of image-domain data into the image enhancement domain model to obtain a de-noised, de-artifacted layer image corresponding to each set of undersampled K-space data.
4. The image generation method according to claim 1, wherein preprocessing each set of undersampled K-space data and the fully sampled K-space data respectively to obtain the target undersampled K-space data corresponding to each set of undersampled K-space data and the target fully sampled K-space data corresponding to the fully sampled K-space data includes:
first, normalizing each set of undersampled K-space data and the fully sampled K-space data; and
then, performing a registration operation on each set of normalized undersampled K-space data and the fully sampled K-space data so that the spatial positions of the undersampled and fully sampled K-space data from different imaging sequences correspond to each other, thereby obtaining the target undersampled K-space data corresponding to each set of undersampled K-space data and the target fully sampled K-space data corresponding to the fully sampled K-space data.
5. The image generation method according to claim 4, wherein preprocessing each set of undersampled K-space data and the fully sampled K-space data respectively to obtain the target undersampled K-space data corresponding to each set of undersampled K-space data and the target fully sampled K-space data corresponding to the fully sampled K-space data further comprises:
after the registration operation, removing, from each set of undersampled K-space data, the data layers left incomplete by the registration operation, so that the data layers in each set of target undersampled K-space data meet the input requirements of the data completion domain model.
6. The image generation method according to claim 2, wherein, in the data completion domain model, the method further comprises:
acquiring fusion features of the input data; and
completing each data layer according to the fusion features to obtain the completed data layer corresponding to each set of target undersampled K-space data.
7. The image generation method according to claim 6, wherein completing each data layer according to the fusion features to obtain the completed data layer corresponding to each set of target undersampled K-space data includes:
generating a standard data layer corresponding to each data layer according to the position corresponding to the input data; and
replacing the corresponding features in each standard data layer with the fusion features to obtain the completed data layer corresponding to each set of target undersampled K-space data.
8. An image generation apparatus, comprising:
a first acquisition module for acquiring undersampled K-space data of a plurality of different imaging sequences and fully sampled K-space data of one imaging sequence;
a first execution module for preprocessing each set of undersampled K-space data and the fully sampled K-space data respectively to obtain target undersampled K-space data corresponding to each set of undersampled K-space data and target fully sampled K-space data corresponding to the fully sampled K-space data; and
a second execution module for inputting the data layers at corresponding positions in each set of target undersampled K-space data and the target fully sampled K-space data into a dual-domain neural network model as input data to obtain the layer image corresponding to each set of undersampled K-space data.
9. An electronic device, comprising:
a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, performs the steps of the image generation method as claimed in any one of claims 1 to 7.
10. A storage medium storing a computer program executable by one or more processors, the computer program being operable to implement the steps of the image generation method as claimed in any one of claims 1 to 7.
CN202210175944.7A 2022-02-25 2022-02-25 Image generation method and device, electronic equipment and storage medium Pending CN114549681A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210175944.7A CN114549681A (en) 2022-02-25 2022-02-25 Image generation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114549681A true CN114549681A (en) 2022-05-27

Family

ID=81679434


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112116674A (en) * 2020-08-13 2020-12-22 香港大学 Image reconstruction method, device, terminal and storage medium
CN113470139A (en) * 2020-04-29 2021-10-01 浙江大学 CT image reconstruction method based on MRI
CN113592972A (en) * 2021-07-30 2021-11-02 哈尔滨工业大学(深圳) Magnetic resonance image reconstruction method and device based on multi-modal aggregation
CN113628300A (en) * 2021-09-14 2021-11-09 苏州工业园区智在天下科技有限公司 Method and device for creating neural network system and generating magnetic resonance image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HAINING WEI ET AL.: "Undersampled Multi-Contrast MRI Reconstruction Based on Double-Domain Generative Adversarial Network", IEEE Journal of Biomedical and Health Informatics, vol. 26, no. 5, 14 January 2022 (2022-01-14) *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination